The End of Pretend Work: When AI Exposes What Was Never There

If you are active on social media, you may have seen self-proclaimed prompt experts revealing "mega prompts" that promise consulting-grade reports. All for zero cost (or a tiny fraction of it) compared to hiring consulting giants that charge in the neighborhood of USD 500K.

I brushed these claims aside as the outpourings of overzealous AI enthusiasts. But it seems they had an element of truth after all.

Deloitte charged the Australian government $440,000 for a welfare review report. Sounds routine, right? Except the report cited non-existent academic papers, imaginary court cases, and professors who never wrote what they were credited with writing. When the errors surfaced, Deloitte admitted to using generative AI and agreed to refund part of the payment ($290,000).

Another AI failure story? Not quite. This isn't about tech anymore.

This is an incentive problem that tech has simply made impossible to ignore. When a $440,000 report can theoretically be produced with $4,400 worth of AI credits, someone is going to take that bet.

Consulting firms have spent decades monetizing the appearance of rigor. Elegant language, polished graphics, and an academic approach (replete with pages of footnotes) came to represent the value of their work.

Previously, smaller firms struggled to replicate this level of sophistication in their research and analyst reports. Thanks to generative AI, it is now easy to generate reports that look dangerously close to those produced by the consulting giants. For years, this expensive plausibility commanded premium prices. AI has simply made plausibility cheaper to manufacture.

More significantly, where was their skin in the game? Were consulting firms learning at their clients' expense?

The Great Workslop Epidemic

There’s a new term spreading through offices: “workslop”—AI-polished output that looks professional but adds nothing of value.

The numbers are startling. In a survey of over a thousand employees, 40% reported receiving workslop in the past month, and it accounts for roughly 15% of all workplace content. Each instance takes nearly two hours to fix, which works out to about $186 per employee per month.

And workslop breeds more workslop. One lazy AI-generated piece spreads its errors through teams until entire systems are riddled with them.

An MIT Media Lab report found that 95% of organizations see no measurable return on their $30-40 billion investment in enterprise AI. Why? Workers are using AI to avoid thinking altogether, instead of using it to sharpen their thinking. Little wonder that Google’s former Chief Decision Scientist calls AI “the great thoughtlessness enabler.” A media producer goes further, describing AI as “the ultimate lazy person’s dream.”

When Everyone Can Optimize, Nobody Stands Out

In a previous newsletter, I grappled with the job market’s sudden obsession with referrals. When AI-powered resume tools can churn out a perfectly optimized application in minutes, an optimized resume stops being a meaningful signal.

Likewise, once reports with impressive formatting, extensive citations, and confident assertions became easy to produce, those things stopped meaning anything. The tools designed to help create professional work have made all professional work look identical.

We’re facing the Great Consulting Paradox. The very tech that was supposed to amplify expertise has exposed how much of consulting was academic fluency wrapped in enterprise-grade presentation.

A CIO reportedly commented: “They were learning on our dime.” When consultancies offer expertise that their clients can access through the same AI tools, why pay the premium?

The bloated, slide-heavy, junior-stacked consulting model is no longer valuable in itself, because AI has revealed how much of traditional consulting was already replaceable.

What Consulting Firms Actually Bring to the Table

At a session I attended on how AI is impacting jobs, the CEO of a tech startup observed: “Jobs that require accountability will never be impacted by AI.” He noted that his firm engaged professionals for legal and regulatory compliance precisely because they provided accountability.

It’s not that these tasks couldn’t be simplified or automated; after all, his own firm specialized in infusing AI into organizational workflows. Some jobs exist primarily to provide psychological cover.

Leaders have to place bets all the time. When their confidence in a bet is low, it makes sense to hedge it with a reputable name: should things go wrong, that name shares the heat. AI can automate insights. But it cannot take accountability.

That emotional insurance has value. But only when paired with expertise beyond report generation. And that’s what’s changing.

What Actually Survives This Mess?

Stuart Winter-Tear, who advises tech companies on AI product strategy and delivering ROI from AI, explains that credibility now has two parts:

  • Visible practice. Use AI in your own shop. Say where it helps and where it doesn’t.
  • Visible judgement. Be clear about what humans decide and review. Clients buy outcomes, assurance, and repeatability more than a tool.

Among questions that consulting firms must now answer, he cites: “Why should we pay for humans when a machine can do it faster?”

To stay credible, he argues, consulting firms must now act with skin in the game and demonstrate it as social proof.

The White Collar Revolution

This goes beyond consulting.

In every profession, AI is forcing a question: when a machine can generate your output, what are you actually contributing?

For decades, much white-collar work consisted of generating impressive-looking documents that few people read carefully and even fewer verified. This work felt productive. It kept people busy. It generated revenue. It filled time.

This is why I call AI a white-collar revolution. It is doing to office work what the industrial revolution did to many blue-collar jobs of its time: changing what constitutes value and forcing people to add value in new ways.

We’re witnessing a great sorting of the workforce: those who use AI to amplify their expertise versus those who let it replace their judgment. The first group becomes more valuable; the second becomes expendable.

AI has made pretend-work untenable. And there’s no automating your way out of that.

Which brings us to the final question: are you building expertise that AI can’t touch?
