Forget AI Imitating Us, We Are Imitating AI

“The real danger is not that computers will begin to think like men, but that men will begin to think like computers.” – Sydney J. Harris

These days on LinkedIn, it’s hard to miss the feeling that a certain post looks AI-generated, even when it comes from people who were writers long before ChatGPT. And it isn’t just writing: the same feeling creeps in when people speak.

It’s hard to pin down. It’s probably the suffocating politeness, the over-fluency where a laconic message would suffice, or the overuse of words that have gained currency only recently.

It’s not just the em-dashes. It’s also words I rarely saw used before, like “meticulous,” “delve,” “realm,” and “adept.” Or the subtler, more pervasive phrasing: “It’s not X, it’s Y.”

A recent study by the Max Planck Institute analyzed over 740,000 hours of transcribed speech, ranging from academic YouTube lectures to podcast discussions. Researchers used causal inference techniques to examine changes in word usage before and after the public release of ChatGPT.

A report by The Verge observes: “One word, in particular, stood out to researchers as a kind of linguistic watermark. ‘Delve’ has become an academic shibboleth, a neon sign in the middle of every conversation flashing ChatGPT was here. ‘We internalize this virtual vocabulary into daily communication,’ says Hiromu Yakura, the study’s lead author and a postdoctoral researcher at the Max Planck Institute for Human Development.”

In fact, Yakura asserts that the individuals in the clips are unaware they’ve begun speaking like ChatGPT.


The Feedback Loop: From AI to Humans

As David Samuels, literary editor of Tablet, notes:

“Platforms animated by machine logic turn people into functional subroutines, repeating chunks of language in response to prompts. By doing so, they help the machines to sort categories of information that are unique to human beings into machine-appropriate buckets.”

Today, the craft of writing faces a kind of machine pincer attack: AI tools dumb writing down until it is high on patterns and low on substance, and the SEO gods reward exactly this kind of writing, incentivizing more of it.

Samuels astutely observes:

“The auto-complete function that is now built into every platform from Google to X is merely one obvious way that machines are breaking down the individuating qualities of language into a more impoverished and grammatically rigid—that is to say, less human—language that they can more easily predict.”

These are just stray instances. Do they really shape how writers write? Samuels thinks so.

“These activities, especially when repeated dozens of times per day, bind their authors, i.e., you, to larger machine-friendly language complexes that train their human users to uncritically accept judgments and prompts that are reinforced by the real-time networked effects of millions of other machine-mediated human users.”


Putting Trust Back into Communications

The rules of writing have changed, and they are now changing at an unprecedented pace.

Sound too far-fetched? Consider how old-fashioned essay writing looks compared to how you write on a social media platform.

We’re encouraged to lead with the punchline. Instead of setting up the story first, we go straight to the point and then supply the context people need to make sense of it.

Also, when writing was hard, it meant something. It signaled that you cared enough to make an effort, regardless of the (perceived) quality of the outcome.

This paradox, where AI enhances communication while also creating suspicion, reflects a deeper breakdown of trust, according to Mor Naaman, professor of Information Science at Cornell Tech. He identifies three layers of human signals that are being lost as AI becomes part of our conversations.

“The first level is that of basic humanity signals: cues that speak to our authenticity as human beings, like moments of vulnerability or personal rituals, which say to others, ‘This is me, I’m human.’

“The second level consists of attention and effort signals that prove ‘I cared enough to write this myself.’

“And the third level is ability signals, which show our sense of humor, our competence, and our real selves to others. It’s the difference between texting someone, ‘I’m sorry you’re upset’ versus ‘Hey sorry I freaked at dinner, I probably shouldn’t have skipped therapy this week.’ One sounds flat; the other sounds human.”

I think most people using GenAI already recognize this. Hence, they make a conscious attempt to personalize and humanize their writing. But to build back trust, we need to reach a point where the communication is unmistakably, unambiguously human.
