
In a recent post, I wondered if GenAI has depressed the demand for human writers. A few insightful interactions ensued.
A senior content marketer pointed out that many media firms have already set a tolerance limit for AI use.
But he conceded this is not enough: “I can use an LLM trained on the writing style of Pulitzer winners of the last 10 years and generate AI content that reads as interesting and human-written. In fact, online tools are now available to humanize AI-generated content, making it harder for media firms to practice what they state.”
He grimly concluded that these very firms “would not know where to draw the exact line or how to measure AI content.”
Is AI capable of producing content that’s indistinguishable from human-authored work?
This made me think. Can AI feel emotions?
Writers and artists are fond of reiterating that since AI cannot feel emotions, its output will not feel human either.
But how true is this claim?
Let’s flip the question a bit.
- From: Can AI feel emotions?
- To: Can AI outwardly behave in a way that feels human to people?
As a casual glance at the world around us would confirm: Yes. AI can produce human-like work.
Sure, there are still giveaways of AI-produced content. For instance, the em dash allegedly exposes AI-generated text. But that’s not the point. What’s more relevant is that AI platforms are already taking note of this perception and may soon curb such tics until the output feels human.
AI learns. Continuously. Relentlessly. Copious amounts of data. Data that’s unimaginably enormous for a human mind to comprehend.
Contrary to what creative professionals think, AI is getting better and better at interacting with people with apparent emotion, empathy, and understanding.
- Yes, AI cannot feel these human emotions when it interacts with people.
- But AI can behave in a manner that people perceive as human-like.
A writer may be a phlegmatic person in private life. But if his writing evokes emotions in the reader, he has done his job well. What the writer feels inside is inaccessible to the reader anyway. The reader can perceive the writer only through the parts (the writing) that the writer chooses to externalize.
- So, it doesn’t matter if AI cannot feel emotions.
- That AI can generate believably human responses is sufficient.
The movie Her (2013) took the human-machine relationship to an absurdist high: love.
If love is about communication, connection, understanding without being judgmental, what the movie depicts is love. Even if it’s between a man and an artificially intelligent operating system that speaks in a woman’s voice.
Sounds too futuristic?
Realbotix, an American firm, aims to “create robots that are indistinguishable from humans.” The firm can sell you a robot girlfriend — for $175,000.
The firm’s CEO argues that the robot can elicit a bond as strong as one with a human partner. “We’re taking it to a different level that nobody else is really doing. It can be like a romantic partner. It remembers who you are. It can act as a boyfriend or girlfriend. If you ever saw that movie Her, we’re trying to do that.” Significantly, it can have “conversations of a more intimate nature”.
New York State is already experimenting with ElliQ, “a voice-activated robotic companion powered by artificial intelligence, … to ease the burdens of loneliness among older residents.”
According to a New York Times article, “Many older New Yorkers have embraced the robots, according to Intuition Robotics and the New York State Office for the Aging, the agency that has distributed the devices. In interviews with The New York Times, many users said ElliQ had helped them keep their social skills sharp, stave off boredom and navigate grief.”
The future is here already.
Through the industrial revolution so far, there were still places machines couldn’t reach. We argued that jobs were not being displaced, only moving up the value chain.
That held because machines replaced first labor, and then computation.
It looks like we’ve reached the end of the road: machines are breaching the final frontier of human cognition. It’s not just pragmatic work like crunching numbers or writing code. Even highly emotional tasks, like empathizing, caring, and reassuring, are at risk.
Too paranoid?
In 1950, the famous computer scientist Alan Turing devised the ‘Turing Test’.
The test assesses a machine’s ability to exhibit behavior indistinguishable from a human’s. A human evaluator is presented with responses from both a machine and a human and asked to identify which is which. If the machine fools the evaluator into accepting its responses as coming from a human, it passes.
If reports are to be believed, ChatGPT has just passed the Turing test: GPT-4.5 convinced evaluators it was human 73% of the time.
When you see a nicely written LinkedIn post today, can you confidently assert it’s human-authored? Would it break your heart to know that your favorite influencer is probably using GenAI for his writings? According to a WIRED article, over 54% of longer English-language posts on LinkedIn are likely AI-generated.
From a philosophical standpoint, we can still argue that human experience cannot be imitated; it has to be felt.
From a practical perspective, however, AI keeps getting better at mimicking human behavior. And this capability is poised to kill many jobs: not just the analytical kind, but emotional and creative jobs too.
The future will be interesting.