Humans write, machines mimic

Writing is a distinctly human activity.

Using the word “writing” to describe what some artificial intelligence (AI) tools can do may be widely accepted by now, but any good editor should reject it every time.

Writers who respect themselves and their craft should avoid it.

AI tools can’t write anything, no matter what their marketers and enthusiasts might say. 

We need a better term for what AI tools do. At best, they can quickly mimic actual writing by humans, plagiarize text written by humans, and even distort the work of humans. The resulting content is all too easily spit out for public consumption regardless of its truth or accuracy.

Don’t call it writing.

B.J.

Stay real: You don’t have to buy into AI “writing” hype

The hype about generative AI and its various pseudo-AI imitators seems to reach new heights of ridiculousness every day.

Take a deep, deep breath.

Predictions that every business will be using it in a few months or years are hyperbole or wishful thinking by some who want that future. If the promoters and enthusiasts try to tell you that you need to adopt it fast or be a loser, they are wrong.

It’s good to be aware of the potential risks and rewards of using AI technology, but most of us would do well to ask this question before we jump in:

“What’s my risk if I don’t rush into this?”

Particularly for small businesses, the likely answer is “nothing.”

If that’s where you are, you have little to gain by investing time, energy and money to churn out AI-generated web content of questionable accuracy, quality or value. There’s already too much of that.

“Some of the content advances false narratives. Nearly all of the content features bland language and repetitive phrases, hallmarks of artificial intelligence.”

“Rise of the Newsbots,” NewsGuard

Don’t underestimate what it could take to use AI technology responsibly and mitigate its risks. The actually intelligent thing to do right now is to trust your fellow humans to write for other humans.

Be real.

B.J.

“AI did it” is no excuse

Suppose you use an artificial-intelligence app to “write” something for you, and you publish it.

Later you find out it contained an error, or multiple mistakes, or worse. Maybe someone calls you out, somebody else wants to sue you for damages, or your employer fires you for incompetence.

Don’t blame AI.

[Image: Cartoon robot working frantically at a computer. Image by Richard Duijnstee from Pixabay]

You’re the one who trusted the artificial thing, the one who failed to check its output for truth and accuracy, and the one who turned your mess loose on others.

I read somewhere that you reap what you sow. I’m pretty sure that applies in this scenario.

What do you think?

B.J.