Project Iceberg's Questionable AI Job Impact Study
MIT's Project Iceberg research study generated headlines last week, but the coverage lacks veracity.
MIT’s Project Iceberg attempts to forecast AI impacts on human labor.
If you read the AI news during last week's short holiday week, you probably saw that MIT released its latest AI study. The vaunted Project Iceberg, with its "Iceberg Index" measuring AI/human skill overlap, came in at 11.7%.
Predictably, the media and influencers had a field day doomsaying the end of work and jobs for humans. There was no better example of this hype-centric trend than CNBC’s tabloidesque headline, “MIT study finds AI can already replace 11.7% of U.S. workforce.”
CNBC reporter MacKenzie Sigalos goes on to say, “Massachusetts Institute of Technology on Wednesday released a study that found that artificial intelligence can already replace 11.7% of the U.S. labor market, or as much as $1.2 trillion in wages across finance, health care, and professional services.”
But does the report say that? I don't read it that way. It says there is an 11.7% skill overlap. That doesn't mean AI is better than humans at those tasks. It just means AI can perform them to some extent. More on that in a bit.
Of course, MIT doesn't seem to mind the coverage, or even fanning the flames. "Basically, we are creating a digital twin for the U.S. labor market," said Prasanna Balaprakash, ORNL director and co-leader of the research, in the article.
Eight Paragraphs Down
It’s only eight paragraphs into the story that we’re told the Index is not an indicator for actual job loss. Well, TLDR, too late, panic unleashed across the Internet. But hey, I am sure CNBC got the clicks it wanted, and MIT boosted its reputation as a leading AI research university (sort of).
I guess the good news is that the Iceberg Index is prompting the participating states (Tennessee, North Carolina, and Utah) to consider investing in AI skills development programs and workforce training. If the primary result of the Iceberg Index is showing that jobs are at risk and that government funding of skills development needs to evolve, then good.
By the way, did anyone bother to ask MIT how 95% of all corporate AI projects fail (per another questionable Cambridge, Mass. research study), yet the 5% that succeed have somehow created an 11.7% skills overlap? No?
Of course, the real issue with MIT's somewhat contradictory study headlines is obvious: one study underestimates success and overestimates failure at the corporate level, while the other examines AI's potential but misleads readers into believing there's an imminent risk of job loss. Oopsy.
The Problem with the Overlap
A second study released by Anthropic last week examines labor productivity gains achieved by AI.
I like BinaryVerseAI’s breakdown of the study, which clarifies that 11.7% of overlap does not equate to job replacement. Rather, it’s indicative of job skills at risk.
For example, an LLM can write and code. That overlaps with the skills of all writers and software developers. But can an LLM really write or code at a high level? We know it raises the bar for baseline drafting and coding tasks, such as contact form language or my simple website (vibe-coded).
If you are deploying LLMs as your communicators without guardrails and humans in the loop, well, problems are sure to arise. We've seen this in the coding space as well. Eventually, the code gets so wasteful that either a second AI or a human needs to clean up the mess before the hackers do.
So yes, we've seen some early job losses in marketing and software development, mostly among entry-level roles. And of course, the same MIT study reports a 2.2% replacement rate in the tech sector. According to Yahoo!, "visible AI adoption concentrated in computing and technology (2.2% of wage value, approximately $211 billion)", a little less than 1/5th of the at-risk number.
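For the curious, that "less than 1/5th" comparison is easy to check. This is just a back-of-envelope sketch using the two percentages quoted above, not a calculation from the studies themselves:

```python
# Compare MIT's "visible AI adoption" figure (2.2% of wage value)
# with the Iceberg Index's at-risk skills overlap (11.7%).
at_risk = 11.7   # Iceberg Index: AI/human skill overlap (%)
visible = 2.2    # visible AI adoption in computing/tech (%)

ratio = visible / at_risk
print(f"{ratio:.1%}")  # -> 18.8%, a little less than one fifth
```

In other words, the adoption the study can actually observe is only about 19% of the exposure it projects, which is why reading the 11.7% as realized job replacement overstates the case.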
A couple of questions on the phrasing of MIT's 2.2% number. Does it represent jobs never filled, or jobs lost? This labor equivalency may actually reflect increased productivity, as Anthropic's labor productivity study illustrates, rather than outright job losses. And the job losses we have seen have mostly been entry-level. Why?
AI Skills Replacement Is Task-Centric
The above chart illustrates time savings by profession for specific tasks on which Anthropic’s Claude is deployed.
One thing the second AI impact study, released by Anthropic, does well is measure actual impacts, comparing normal labor hours against the time needed to execute the same task with an LLM. It is based on real tasks executed by Anthropic's LLM, Claude.
Going back to our writing/communications example, Anthropic says people use AI to save 87% of the time it would take to write invoices, memos, and other documents. Of course, the time improvement primarily comes from raw drafting.
I assume the remaining 13% is used to develop the prompt and, hopefully, edit. But based on my own experience using Claude, I'd say that's a very generous number. When I use Claude to draft anything more complex than an email, a proposal for example, I save maybe 60% of the production time. A lot of revising and rewriting has to happen to make the document acceptable.
Overall, this task-specific view of AI overlap provides a better explanation of Iceberg’s 11.7% skills overlap. An interpretation could say that 11.7% of all job tasks could be enhanced, supported, and made more productive using AI. In some cases, the time savings are dramatic, allowing for scale and financial efficiencies. That’s a lot more reasonable and comforting.
Conclusion
These AI-charged task improvements should create “a potential increase in US labor productivity of 1.8% per year—a doubling of the recent rate of labor productivity growth,” says Anthropic. The company goes on to say that this productivity will “cause [economic] growth to double: achieving the rates of the late 1990s, and of the 1960s and 1970s.”
It sure sounds exciting. Hopefully, as the AI revolution continues, we can better understand AI and use it as a tool to fuel innovation rather than slash jobs. Much of the hype, and even the framing of research studies on AI's impact, is fear-driven, as the Iceberg Index announcement demonstrates.
Yes, there will be job losses, but most professionals are more likely to face partial task replacement and enhancement by AI. If that's the 20% (give or take) of your job that's rote writing and boring, all the better.
What do you think of Project Iceberg?
P.S. Who releases a major research study on the Wednesday before Thanksgiving?!?



