AI - Taking Your Job or Your Pension Money?
Are the claims legit? Do the sums add up?
Note: this post is allegedly too long for email, so please read using the app or a browser
There’s been a considerable amount of hype recently about the capabilities of the latest LLM AI. There was the Clawdbot Moltbot OpenClaw hype, which mostly demonstrated the credulousness of tech journalists and the lack of security thinking among AI researchers and developers. There’s Matt Shumer’s famous screed - which Ed Zitron poked a few holes in (archive), as did Gary Marcus.
So let’s get this out the door now: there’s a lot of hype and exaggeration. LLM AI is not going to take everyone’s job next week, or even next decade. In fact, as Gary Marcus points out, IBM is citing AI as a reason to increase US hiring1
But
It is going to take away a lot of routine administrivia jobs and low level technical stuff fairly soon, indeed it already is. Shumer’s claims that Claude AI is programming itself and that lawyers love it more than 100 interns are probably exaggerated, but ES Raymond, Perry Metzger and various other real programmer sorts are using AI to do more things, faster and better than they could before. Likewise there was a big story about accountancy firm KPMG getting a discount from their own auditors because of AI efficiency which has led numerous people to wonder if we’re seeing the end of the billable hour. And that is a legitimate issue for lawyers, accountants and other similar professionals.
Also, yes, the last year or so has been a learning process (a real one, not a “learing” one) as we figure out how to work around the AI hallucination problem, the too-big-for-context problem and so on. It looks like we are finding workarounds for those problems, but the OpenClaw shenanigans have shown that trusting AI to do something without user intervention is still fraught with danger. I predict that there will be considerable opportunity to write AI verification and validation tools, and possibly employment opportunities as the person who confirms that the AI output does what it says it does and isn’t a lie. For what I trust are obvious reasons, we don’t ask the AI to do the final validation of its own output - or even the output of a different AI.
However, auditing, routine legal work (e.g. contracts) and basic office functions like much of HR are vulnerable to being AIed away. A lot of marketing and sales too. Create this PowerPoint, generate that quote/invoice, and so on. Your AI friend will do that for you just fine, but be sure to double-check that it didn’t get prompted to add an extra discount to the invoice or some extra fingers to the photo in the presentation.
However there’s a catch here. These phony baloney jobs will only be AIed away if the AI option is cheaper than a human being paid, say, $20/hour, as I discussed in my last AI post.
How much does it cost?
If you want to write, say, a C compiler then there are reported costs2:
To stress test it, I tasked 16 agents with writing a Rust-based C compiler, from scratch, capable of compiling the Linux kernel. Over nearly 2,000 Claude Code sessions and $20,000 in API costs, the agent team produced a 100,000-line compiler that can build Linux 6.9 on x86, ARM, and RISC-V.
However, that $20,000 is almost certainly an enormous understatement of the actual costs involved. For a start, there’s the fact that Anthropic spends at least $2 for every $1 of income it gets. I say “at least” because the sums are murky. Ed Zitron’s latest has this:
Even after a year straight of manufacturing consent for Claude Code as the be-all-end-all of software development resulted in putrid results for Anthropic — $4.5 billion of revenue and $5.2 billion of losses before interest, taxes, depreciation and amortization according to The Information — with (per WIRED) Claude Code only accounting for around $1.1 billion in annualized revenue in December, or around $92 million in monthly revenue.
The AI companies are, generally speaking, giving their product away. Even when they charge for it, the fee is far lower than the cost of providing the service. That is, the direct inference cost of a particular user’s usage is more than that user pays - and that ignores all the compute being given away for free, the cost of training the models, the data center construction and so on.
Let’s talk training for a second. Again quoting from the excellent Ed Z:
[A]rguably the most dishonest part is this word “training.” When you read “training,” you’re meant to think “oh, it’s training for something, this is an R&D cost,” when “training LLMs” is as consistent a cost as inference (the creation of the output) or any other kind of maintenance.
While most people know about pretraining — the shoving of large amounts of data into a model (this is a simplification I realize) — in reality a lot of the current spate of models use post-training, which covers everything from small tweaks to model behavior to full-blown reinforcement learning where experts reward or punish particular responses to prompts.
To be clear, all of this is well-known and documented, but the nomenclature of “training” suggests that it might stop one day, versus the truth: training costs are increasing dramatically, and “training” covers anything from training new models to bug fixes on existing ones. And, more fundamentally, it’s an ongoing cost — something that’s an essential and unavoidable cost of doing business.
As far as I can tell this is 100% correct. OpenAI and Anthropic put AI training in the CapEx budget to flatter their OpEx numbers, to make it seem like they are closer to breaking even on an income-to-COGS basis. They aren’t. Even with this tweakery they are still losing money on every user, but they reduce the apparent loss. It’s kind of like a pottery putting its kilns and the price of fuel to run them as CapEx and just counting the clay and glaze as OpEx.
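To make the kiln analogy concrete, here is a toy sketch of how reclassifying a recurring cost changes the apparent loss. The dollar figures are invented for illustration only and are not anyone’s actual books:

```python
# Toy illustration: moving a recurring cost (training) from OpEx into
# CapEx flatters the apparent operating loss without changing the real one.
# All figures below are made up for illustration.

revenue = 4.0e9
inference_cost = 5.0e9   # recurring cost of serving users (OpEx)
training_cost = 3.0e9    # also recurring, but often booked as CapEx

loss_honest = revenue - (inference_cost + training_cost)
loss_flattered = revenue - inference_cost  # training moved "below the line"

print(f"Loss with training in OpEx:  ${loss_honest / 1e9:+.1f}B")
print(f"Loss with training in CapEx: ${loss_flattered / 1e9:+.1f}B")
```

Same cash leaves the building either way; only the headline number improves.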
It is phenomenally difficult to get a straight answer about how much Microsoft, Google, Amazon, Oracle3 etc. are spending on AI operations, and how much income they get from them. Of course, OpenAI and Anthropic use these other companies’ data centers and pay for them, but this is extremely incestuous, as they often have “investments” from the data center companies that they then pay back to them in usage fees. It is very reminiscent of the 2000s and Nortel/Lucent, if not Enron. Money is being spent on power, servers, buildings, wages and so on, and the bills are being paid by money from somewhere. But where exactly is mysterious, and that seems to be a deliberate choice by all involved.
Which leads us to the data center costs.
$1T CapEx is planned
As many people have noticed, the AI companies are spending telephone-number amounts of money to build more data centers and fill them with AI-crunching computers. They even cost more than a Somali Day Care Learing Center.
In fact they cost so much that companies are doing interesting financial engineering to come up with the money. Alphabet (Google) is planning to sell 100 year bonds, which seems to me to be a stretch. Oracle is seeing problems raising the debt to pay for its Abilene Stargate complex. Meta is doing something weird to finance its data center in Louisiana. And so on.
On that note SpaceX’s purchase of xAI could also be a bailout as Gary Marcus notes:
I think the merger is really a kind of bailout, to give a lot of cash to a company that is otherwise in distress. Fact is, xAI ain’t doing all that great. It’s burning money fast, with no obvious business model or market niche, and has little to show for it. They have also faced a lot of backlash for being reckless and irresponsible; hardly a good brand name. Grok (their main product) doesn’t have the users that ChatGPT has, and it doesn’t have the prestige that Google’s Gemini seems to be racking up. It also doesn’t have the clear corporate focus that Anthropic has. Nor does it have any obvious secret sauce.
And he includes this helpful graphic
Now I wouldn’t bet against Elon, and he mentions other reasons for the merger too. But nothing says that Elon might not want to corner the market in space/lunar data centers AND bail out his AI company, which is kind of where I’m leaning.
Eventually the roughly $1T in current and planned AI data center investment has to be paid back, in addition to the ongoing cost of running the gigawatts of data centers that investment has built4. That’s a challenge at $20/month, or even $200/month, which seems to be about the top end of what the market will currently bear for the use case of replacing low-end clerical work.
If pricing needs to go up to $2,000/month then we’re approaching the cost of employing a human - an offshored Sanjay providing the needful, to be sure, but a human nevertheless. The AI tools have to be better and more reliable than they currently appear to be for that to be a broadly applicable, worthwhile tradeoff.
That’s not to say they aren’t worth it at that kind of cost in certain situations. The coding/development assistants are absolutely going to pay for themselves for the right kinds of task at $2,000/month or even, like the Anthropic C compiler, at $20,000 for two weeks of work. Depending on the task they might pay for themselves at even higher amounts, such as $50,000 for a week. I can, for example, think of tasks that $dayjob and/or $dayjob’s partners want to do that we/they don’t have the manpower to do but which would pay off a $50,000 spend in a year.
The question is whether there are enough $50,000 tasks to pay for the investment. Annually, as I said in a previous post5 almost a year ago, that means a minimum of $100B. When you add actual OpEx and the need to make some kind of profit, you need an additional $50-100B/year. That’s a LOT of $50,000 tasks, and I suspect most organizations only have a handful. Because they too can do similar sums, many people say AI is a bubble that must pop and that much of that $1T will never be paid back.
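These sums are simple enough to sanity-check. A back-of-the-envelope sketch, using only the post’s own figures (the $100B floor, the $50,000 task, the $20/hour clerical wage); the 160-hour working month is my assumption:

```python
# Back-of-the-envelope check of the figures above. The inputs are the
# post's own numbers, not official statistics.

required_revenue = 100e9   # minimum annual revenue to service the buildout
task_value = 50_000        # one big "worth it" AI task

tasks_needed = required_revenue / task_value
print(f"{tasks_needed:,.0f} fifty-thousand-dollar tasks needed per year")

# Compare AI subscription tiers to a $20/hour human (~160 hours/month).
human_monthly = 20 * 160
for ai_monthly in (20, 200, 2000):
    ratio = human_monthly / ai_monthly
    print(f"AI at ${ai_monthly}/month vs ${human_monthly}/month human: "
          f"{ratio:.1f}x cheaper")
```

Note that at the $2,000/month tier the cost advantage over a full-time human shrinks to under 2x, which is why reliability rather than price becomes the deciding factor there.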
It is also worth pointing out that a lot of the CapEx spending is on things that depreciate fairly quickly - the Nvidia chips and now custom chips from others. Not only do these chips fail, they look to be no longer useful in 3-5 years. So while the data center building and some of the internal bits may last a decade or more, the servers will need to be replaced multiple times before the building is no longer suitable.
So when I said that “it’s kind of like a pottery putting its kilns and the price of fuel to run them as CapEx and just counting the clay and glaze as OpEx”, it’s actually worse than that, because parts of the kilns themselves need to be rebuilt every few years too. A significant fraction of the $1T has to be replaced in 3-5 years, which means AI needs to generate at least $200B/year to pay for itself, and quite likely closer to $300B.
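The replacement-cycle arithmetic can be sketched the same way. The 3-5 year hardware life is the post’s assumption, and treating the whole $1T as subject to it is a deliberate simplification (buildings last longer, chips don’t):

```python
# Rough amortization of the ~$1T AI buildout under the post's assumption
# that the hardware must be replaced every 3-5 years. Simplification:
# the whole $1T is treated as depreciating on that cycle.

capex_total = 1.0e12

for replacement_years in (5, 3):
    annual = capex_total / replacement_years
    print(f"Replacement every {replacement_years} years -> "
          f"${annual / 1e9:.0f}B/year just to stand still")
```

The two ends of the range land at $200B/year and $333B/year, which is where the “at least $200B, quite likely closer to $300B” figure comes from.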
The Optimistic End State
The optimistic view is that AI will enable entrepreneurs to come up with new things to do that will profitably use the AI data centers in addition to the replacement of grunt work. I think this is almost certainly true. Eventually. For example, Elon Musk’s robots will need masses of AI to tell them what to do in detail as they do enormous amounts of grunt work for humanity. And I’m sure there are plenty of other people who have ideas. Lenny Rachitsky Xeeted a summary of a discussion he had with a lead OpenAI engineer which includes the following:
8. The one-person billion-dollar startup is coming, but with unexpected second-order effects. As AI makes individuals more productive, we’ll see not just billion-dollar solo founders but an explosion of small businesses: hundreds of $100M startups and tens of thousands of $10M startups. This will transform the startup ecosystem and venture capital landscape.
9. Business process automation is an underrated AI opportunity. While Silicon Valley focuses on knowledge work, most of the economy runs on repeatable business processes with standard operating procedures. There’s massive potential to apply AI to these workflows, which are often overlooked by the tech community.
I agree with both of these points and some others too6.
However, in order for the current AI investment frenzy not to generate lots of large bankruptcies, the one-person $10M+ startups replacing repeatable business processes need to be up and running profitably a couple of years from now. Because if they aren’t, the bondholders for the data center debt aren’t going to get paid, and that is going to cause financial damage that looks quite a lot like 2008. It may not be quite as bad - sub-prime mortgage debt was around $1.5T just before the wheels came off, and we’ve had a fair amount of growth and inflation since. But a debt shock that is a third to half the size of the 2008 shock is not a good thing. Especially since:
Countries don’t have the resources to bail out the overstretched this time around, thanks to a) not paying off the debt from 2008 and b) getting further into debt in 2020/1 for Wuflu
The Chinese property market is at least one other massive problem and may also soak up resources that would be needed to bail out AI (or vice versa - see how the US sub-prime mortgage crisis ended up causing the southern European Euro crisis)
One of the obvious uses for AI is going to be crime - indeed we’re already seeing AI assist in crime, specifically fraud and cybercrime. If there is a widespread rise in such crime then laws may get passed that end up stopping much otherwise beneficial AI usage in an attempt to stop the AI-powered crime wave7. For example, an obvious fraud method is to have an AI impersonate a real person. A ban on AI impersonating humans (as originally requested a couple of years ago) could end up hurting any number of positive uses of human impersonation.
Eventually, AI is going to transform society positively. For example, nothing says that AI’s use in crime can only be on the offense. An obvious use for AI right now is to scan the HHS Medicaid data for fraud, and there is absolutely no doubt that AI will be a wonderful tool to check and validate applications for government grants and payments in general in the future.
But in the next two or three years it may just impoverish society first.
Fears about Trump smashing Indian H1Bs and offshoring may be other, bigger reasons that the company doesn’t want to talk about
Technically this is actually a seriously impressive achievement. Essentially it replicated, in a new computer language, what has taken probably thousands of man-years of developer time. Even if it cost $20,000,000, not $20,000, it would probably be worthwhile.
That ignores the lack of compensation for all the copyright data that the AI companies have hoovered up to train their products. I have absolutely no idea whether they are going to get away with it - cynically I suspect they will - but the AI industry as a whole could potentially be on the hook for $lots in fines and/or punitive damages for repeated, mass, deliberate copyright violations.
Point 11 directly contradicts a Matt Shumer claim, which is amusing
And it is unlikely to stop much of the crime because, as we know well with gun crime, criminals don’t obey the law anyway

My own suspicion is that the 'dark matter' in this discussion is... um... precisely what Elon is aiming for: personal/household and even factory bots, and massive automation (which will drop the costs of AI data centers and speed replacement). The computing power required to run these 'mundane' automata is IMO orders of magnitude larger than what we have. The demand is also vast.
There are other facets to this. My med school once had 12 people teaching English. Seven were native English speakers; the other five were Japanese, two of whom were full-time. We now have five teachers: two native speakers and three Japanese, two of whom are full-timers. To be honest, this is not all due to AI and tech, but much of it is.
In the early days of the panic, one of my employers was seriously looking at cutting all of us part-timers loose and having their full-timers either Zoom or record all lessons. While that did not happen, a great many schools have reduced their faculty in favor of tech platforms, including AI. For those who have thus far survived the cull, the workload has increased substantially while remuneration has remained stagnant. The increased workload is data entry into the school's system. An ad I recently saw and photographed is for an AI English kaiwa (conversation) partner for ¥180 per month. How can I compete against that? Small wonder no one has been interested in my Sagasu Sensei profile of late.
One online resource really saved my bacon the first couple of years of the panic. I had already been a subscriber to their service for in-class projection of lectures on medical topics and the activities and quizzes related to them. They kindly extended permission to use their resources over Zoom during the panic. Sadly, government policy terminated my position and I cancelled my subscription to this online service. I resubscribed for this now-ending school year for a course at another school. However, things have changed considerably.
Planning for my classes is no longer possible, as AI now governs the platform. The materials I downloaded and printed often do not match the new, AI-driven lectures. Even preparation the day before a class cannot be counted upon to reflect what I can access during class time. This is profound. It deprives me of the ability to thoroughly know the material I will present in class, as whatever I prepare may not be what that same lesson projects upon the screen the next day. What benefit do the students receive from my presentation of material that may be as new to me as it is to them if I am no longer allowed to prepare the lesson? How can I answer their questions? What am I doing to earn my pay?
Some observations from the students' side. I teach the students with the highest ability in English at this school. This includes dual US/Japanese or UK/Japanese students who have gone through education in English from kindergarten through high school or beyond. Even they cannot write in paragraph format. Their compositions are more akin to bullet-point presentations. Further, as a group, they cannot read any full-length work and synthesize the information. They just ask AI for a summary and find any suggestion that they actually read the work as strange as if it were spoken by a Martian. Many do not have the ability to do so anyway.
Everything I say in class is "fact checked" by students with AI. My statement that Japan suffered high death rates during the panic despite all the mitigation measures is refuted because AI states otherwise. I was even fact-checked on whether there really is a T-shirt that says "Liver is evil and must be punished" with a beer mug on it. Ninety seconds after I shared that bit of levity with a class of 3rd-year med students, one called out "Hontoda!" ("It's true!") and showed her classmates the many versions of this T-shirt available for sale on Amazon.
You and I share many beliefs based upon our experiences. I can assure you that next year's sophomores in college down to 6th-grade elementary students in Japan can "disprove" each and every one of these because AI says different. They place an amount of trust in AI that it would be unhealthy to place in even the holiest of the clergy. They prefer AI over teachers, AI over human interaction, AI summaries over reading, and cannot suffer through any reading that is more than just a few minutes in length. It was once rare that I had even a single student in a class of any size who did not like movies. The reverse is now true. Why? Movies are too long. YouTube and TikTok length is all their tiny attention spans can deal with. Whenever I get groggy and try to enforce the no-idiot-phone policy in the classroom, at the three-minute mark the entire class gets as fidgety as smokers did in boot camp where cigs were not allowed. Few can make it to five minutes without checking their idiot phones. These are the doctors who will be treating you in your old age.
They are, in a way, much as I was in high school. As a child, I could not understand why I should waste time learning how to do math when we had calculators that could do the work for us. I now know the value of knowing how to do things without our electronic toys, but I did not then. The difference is, today’s youth are not likely to learn that value as gadgets are now omnipresent.
Here is an anecdote that speaks volumes, IMHO, about what lies ahead for a human race dependent upon AI. Seven or eight years ago we took our son bowling for the first time. Relating the event to my parents via Skype, my mom asked if the score was automatically calculated, as it was by then in the States. It was. She sighed and told me that keeping score for bowling was how she learned to add quickly, and I recalled the same for me. Using math in everyday situations builds a level of competency with it that those who rely on devices will never achieve. Same with counting back change.
As I reported in one of my latest posts on substack, my CEO student told me last Thursday that corporate Japan is reportedly reducing the number of new employees they will hire as AI has reduced the need for human employees. This is HUGE.
AI IS taking jobs. It is replacing teachers and tutors. It is replacing human interaction of almost all kinds, even the most intimate. That does not mean it will ever be viable but it need not be to cause havoc and mayhem on scales never before witnessed by mankind.