But I wouldn't hold my breath waiting for introspection from that camp. It seems that AI maximalists, like so many other players these days, see it as end-game time. There are no bounds or rules: pick a side, and go. And then eat the rest.
Sure, not everyone sees it this way. There are highly competent, human actors working in their joy toward a better way forward with all of it. But I don't think you'll find that spirit unbridled inside any profit-seeking corporation of any significant standing (though I would be happy to be proven wrong). If it existed there, it is being choked out by selfishness and survivalism.
And then there's Thiel and his ilk waxing eschatological, adding a whole other layer to the scheme.
As someone with only a passing interest, I can't find anything distilled enough in this article to comment on as its central point. Everyone seems to be reporting impossible numbers, and buying dramatically more hardware than they can install in a reasonable timeframe given the pace of the industry.
This is what he said in 2024:
-----
The “iPhone moment” wasn’t a result of one thing, but a collection of different bits that formed an obvious whole — one device that did a bunch of things really, really well.
LLMs have no such moment, nor do they have any one thing they do well, let alone really well. LLMs are famous not for their efficacy, but their inconsistency, with even ardent AI cultists warning people not to trust their output.
> Isn't it weird how there is no huge industry pushback on all this new AI datacenter power need, as there was about electrifying vehicles?
EVs, on the other hand, do have some obvious industrial adversaries.
Perhaps in a couple of centuries when a tube of nutrient slurry is the standard meal, people will be equally proud of not spending 15% of their salary on food...if salaries even exist by then.
And yes, I can see a world where, if tasteless nutrient slurry was essentially free and perfect nutrition for the body, then people would gladly consume that for most meals, and maybe splurge every now and then on an "old school" meal. I don't really see a problem with that.
You really can't. That price/quality point basically does not exist anymore.
What's worse is that we have "designer brands" that charge the higher price point but deliver the exact same low quality as the lower-price-point stuff. Actual midrange quality just plain does not exist.
Take your yearly clothing expenditure and multiply it by 10. Then, just like people 200 years ago, be content with 2 to 4 complete outfits. Then stop buying clothes yearly and move to a 10+ year cycle, where you use your funds to mend clothes instead of replacing them.
Even if you only spend $300 on clothes per year, doing it the old-school way means a $3,000 yearly budget, or $30,000 over a 10-year cycle: about $15,000 on those 2 to 4 outfits, and the other $15,000 for mending and cleaning over the next 10 years.
I guarantee you can find a high-quality custom outfit for $5,000.
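A minimal sketch of that budget arithmetic, assuming the $300/year baseline and the 10x multiplier above (the dollar figures are the comment's, not real pricing data):

    # Back-of-the-envelope for the "old school" clothing budget above.
    baseline_per_year = 300   # typical yearly clothing spend, dollars
    multiplier = 10           # "multiply it by 10"
    cycle_years = 10          # buy on a 10+ year cycle instead of yearly

    total_budget = baseline_per_year * multiplier * cycle_years  # $30,000
    outfits = total_budget / 2   # ~$15,000 on 2 to 4 complete outfits
    upkeep = total_budget / 2    # ~$15,000 for mending and cleaning

    print(f"total ${total_budget:,}, outfits ${outfits:,.0f}, upkeep ${upkeep:,.0f}")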
Lots of countries attribute rising standards of living and economic prosperity to the clothing industry. India and Pakistan, for example.
"Something something uplifted from poverty" is much shorter, quippier and cleaner.
The truth was that the machines produced worse quality goods and were less safe, not that people couldn’t skill up to use them and not that there wasn’t enough demand to keep everyone employed. It was quality and safety.
You should look into the issue further. I had your opinion too, until I soberly looked at what the Luddites were really arguing for: it wasn't the end of looms, it was quality standards and fair advertising to consumers.
Only the workers are getting framed as though self-interest invalidates their position. The Luddites’ arguments about quality standards and consumer fraud were correct on the merits regardless of their motivation for raising them.
And the choice was never mechanisation versus no mechanisation… it was whether the transition would include basic labour and quality standards. With regulation, you’d still have got mechanisation and cheaper clothing in the end… just without the fraudulent goods and wage suppression. Framing it as “society versus a few jobs” is exactly the manufacturer’s argument from the 1810s, which is very effective propaganda reaching through centuries.
To drive the point home even more clearly:
Parliament made frame-breaking a capital offence to protect manufacturer profits. Saying it all worked out eventually doesn’t justify the process, any more than cheap cotton justified the conditions under which it was produced. And frankly, look at modern fast fashion: cheap clothing that falls apart in weeks, produced under appalling conditions overseas. We’re still living with the consequences of the principle that cheapness trumps everything else.
But on quality: I found this an interesting read https://marginalrevolution.com/marginalrevolution/2025/05/ha...
Making clothing production more efficient by employing children in dangerous factories is bad, actually (which is what happened in the original factories, and is happening now in fast fashion).
The fossil fuel industry?
Turns out the market routes right around slow movers.
“I think AI is a bubble and it’s going to collapse”
“Ok, then there are ways to bet against it if you’re really sure”
“Oh I’m sure I’m right, it’s just that the market might take a long time to realize I’m right”
Come on… there was no subtlety in the article at all. He said it’s all a house of cards, it’s all going to collapse, it’s all a grift… surely someone that certain should be willing to put something in…
Contrast that with hyperscalers no longer reporting AI revenue separately, making bold claims about long term growth with no evidence to back it up, and a tech media apparatus that has largely avoided asking founders hard questions.
I know just as well as you how this is all going to turn out (which is to say, nobody really knows). But I'll take the person doing the math over the person trying to hide numbers all day long.
The article says 240 gigawatts of capacity is allocated for AI datacenters.
New York City draws about 10 gigawatts in the hottest months of the year due to extra load from AC use.
So am I understanding correctly that these people want to foist upon the power grid 24 NYCs?
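A one-line sanity check on that ratio, using the two figures as stated above:

    # "How many NYCs?" from the figures in the comments above.
    ai_capacity_gw = 240   # capacity the article says is allocated for AI datacenters
    nyc_peak_gw = 10       # NYC's rough peak summer draw

    print(ai_capacity_gw / nyc_peak_gw)   # -> 24.0, i.e. 24 NYCs' worth of load

So yes: roughly 24 New York Cities' worth of peak load, if all 240 GW is actually built and drawn.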
Yes, that number is absurd, and data centers will certainly need to make do with less, regardless of actual requirements.
Texas is [d]oing its best to build as many datacenters & power plants as possible. They were describing it as "Texas will have more datacenters than anyplace else in the world." This was public radio, but everybody's taking a hit on the ol' AI pipe nowadays.
At a cost of $0.01 per kWh, running that continuously would be about $21 billion a year... and electricity generally is not that cheap, everything considered...
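For what it's worth, the ~$21 billion seems to assume the full 240 GW running continuously for a year; a minimal sketch of that arithmetic:

    # Reconstructing the ~$21B/year figure: 240 GW running flat-out
    # for a year at an optimistic $0.01/kWh. Real rates are higher.
    capacity_kw = 240e6           # 240 GW expressed in kW
    hours_per_year = 24 * 365     # 8,760 hours
    price_per_kwh = 0.01          # dollars, the comment's assumed rate

    annual_cost = capacity_kw * hours_per_year * price_per_kwh
    print(f"${annual_cost / 1e9:.1f} billion per year")   # -> $21.0 billion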
My current model for how AI will scale out is that we'll move through the following choke points:
AI chip makers -> Data center infra and construction -> regional power companies
Right now we're firmly in the "AI chip makers" part of the expansion, with everything else in the beginning stages. AI is useful, but whether it's hyped or not, it's hard to deny that not being able to build and power data centers will impact how this plays out.
But Ed Zitron is not it. Here's an example [1] of him fumbling simple arithmetic. He's also perpetually bearish, without any sense of principles in his message.
This is what he wrote in 2024 [2]:
> You can fight with me on semantics, on claiming valuations are high and how many users ChatGPT has, but look at the products and tell me any of this is really the future.
I think the industry really needs someone better with principles.
[1] https://x.com/binarybits/status/2034376359909130249
[2] https://www.wheresyoured.at/never-forget-what-theyve-done/
Edit: here's another example https://x.com/blader/status/2031216372169191678
I get that people make mistakes, but it really does seem like there are no principles behind the guy. It seems like he'll write whatever.
But Zitron frequently points out the inconsistencies in these data center deals, noting that companies like OpenAI and Anthropic make these announcements without a formal contract in place, companies like Oracle get a stock bump off of the news, and then we all find out from the mainstream press months later that the deal was never done and in fact may not even be happening anymore.
That's not really behavior you'd expect to see from a vehemently anti-tech press. They're happily making news to boost stock prices short-term, essentially acting as mouthpieces for large shareholders.
The media DOES occasionally say negative things about tech. But what they cover scratches, like, 1% of the bad stuff. And they make excuses and let people off easy.
It's very similar to how the media is overly sympathetic to Trump. Yes, Trump is critiqued - but everything he says is interpreted in the least crazy way possible, even though he is a lunatic. MSNBC and co will even go as far as fabricating reasoning for Trump's actions when he doesn't provide any - and it's good reasoning!
Smearing his character without directly addressing his arguments just stinks the place up.
That being said: since COVID there seems to be an ongoing and worsening DOS attack. Everybody who has access to media is lying. And we know they are lying! The craziest part is not only that they are getting away with it (so far, at least), but that this is becoming embraced, standardized, and legalized. Which is fucking crazy.
I like listening to Ed's interviews, mainly because he is DOSing back.
Not incidentally, he's a PR guy by trade--who still runs his own PR firm! And that firm has done PR for AI companies!
https://archive.ph/2025.10.27-195752/https://www.wired.com/s...
I'm firmly on the skeptic side of the AI skeptic/booster divide, but I wish we had better mouthpieces on the skeptic side. I get the feeling that Zitron is more concerned with getting his newsletter numbers up than anything else.
> He's still an important counterpoint to the unexamined mainstream junk, which says more about the world than about him or his style.
Well, making new mathematical errors while trying to point out someone else's math errors isn't unprincipled. Even in the face of errors, it's implicit that things like transparency and data-driven decisions are considered desirable.
The next point is superficial, but I think you'll find that it tracks in general. Consider 3 headlines and how much discourse really boils down to this type of messaging: "AI can make you rich!" vs "Use AI or be left behind!" vs "AI Industry is Lying to You".
The substance behind the headlines may or may not tell you something true about the world. At the same time, only the last headline/content seems even remotely concerned with principles, implying in this case that lying is bad. The other two are just seeking to spur interest and motivation with greed or with fear.
> The other two are just seeking to spur interest and motivation with greed or with fear.
That just seems like your opinion, but even in that case I don't see why we are talking about intention. Ultimately the world would be better if one just told the truth, so there's no excuse for this.
Principles are not the same as intention though. Even if some articles are biased and have certain intentions, I don't mind if they are principled and stick to truth.
I expect principles from both. I don't expect unbiased reporting, however. I guess you are conflating them.
Principles in this case means owning mistakes, correcting them, and valuing truth, and yes, I do expect boosters to own mistakes. Your first two examples don't show a lack of principles; they just show bias and intentions.
As far as I can tell, in February Anthropic projected their 2026+ annual revenue at $14 billion, based on a month-long period. If you added together the numbers presented for the three years, you would end up at $6 billion of revenue.
But a month later, in a court document, they only mention revenue "exceeding $5 billion". For the entire time the company has been in business.
Additionally, the month-long period with ~$1B would account for a fifth of that lifetime total. That's eyebrow-raising.
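To make the gap concrete, here is the arithmetic using the figures as reported above (all approximations drawn from public statements, not audited numbers):

    # The Anthropic revenue figures as described in the comment above.
    projected_run_rate = 14e9   # February projection of 2026+ annual revenue
    summed_yearly = 6e9         # the three yearly figures added together
    lifetime_in_filing = 5e9    # court document: "exceeding $5 billion"
    recent_month = 1e9          # the ~$1B month behind the projection

    print(projected_run_rate / summed_yearly)   # ~2.3x gap between the two claims
    print(recent_month / lifetime_in_filing)    # 0.2: one month = ~1/5 of lifetime revenue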
Unfortunately, though, I can't really find anyone else looking at this same information, so for now I have to wade through these newsletters to pick the gold from the shit.
FWIW I have been trying to interview Ed about this for ages but he has ignored all of our requests.
The other question I have is... who exactly is (1) using AI right now, (2) making substantial money on it or getting real value, and (3) capacity constrained? Who is actually going to productively soak up all this capacity? It seems to me that bringing all this stuff online can't really make things much cheaper than they are now, because the fixed costs aren't going anywhere, and if anything, trying to jam so many projects through all at once just raises those fixed costs even higher. It's not like they can triple data center capacity (increasing AI capacity by, what, 10x? 20x?), stick the buildings full of AI systems, and sell that 10x+ greater capacity at the prices they charge now. Higher capacity would crash the selling price, but the costs would be as high or higher than now.
I am at a complete loss as to how the numbers are supposed to work here. You can't build a company in 2026 on the economy and tech infrastructure of 2036, any more than it worked to build a company in 1999 on the economy and tech infrastructure of 2019, no matter how rosy the projections look when they conveniently ignore the fact that the company passes through "death" in a year and a half. Everything promised in 1999 happened, but trying to artificially accelerate it onto Wall Street's timeline burned money by the billions. I'm sure 2036 will have lots of AI in it, but you can't just spend money to pull it forward 10 years by sheer force of will. It has to happen at its own pace.
Almost all enterprise users, for one. At least from what I have seen, it is a massive productivity boost for coding and general research. If the costs were ~4x lower, we would be able to do much, much more with them. Building datacenters will reduce the cost because it increases supply.
> It's not like they can triple data center capacity (increasing AI capacity by, what, 10x? 20x?), stick the buildings full of AI systems, and sell that 10x+ greater capacity at the prices they charge now. Higher capacity would crash the selling price, but the costs would be as high or higher than now.
This is false. Part of the price is unit costs, which carry really high margins; I think the margins are around 50% to 60%. By increasing capacity, they are bound to make even more profit.
But the other part reflects the lack of capacity.
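A minimal sketch of what that claim implies, assuming a 55% gross margin (the midpoint of the range above) and, crucially, that prices hold as capacity grows, which is exactly what the parent comment disputes:

    # Illustrative only: how a high unit margin scales with capacity.
    # The $1.00 price per unit of inference is a made-up placeholder.
    price_per_unit = 1.00
    gross_margin = 0.55
    unit_cost = price_per_unit * (1 - gross_margin)   # $0.45

    for units in (1e6, 2e6, 3e6):   # capacity growing, price held constant
        gross_profit = units * (price_per_unit - unit_cost)
        print(f"{units:.0e} units -> ${gross_profit:,.0f} gross profit")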
That's great for us users but I'm talking from the point of view of the people trying to make money on the data centers.
"This is false. Part of the costs are unit costs which are really high margin."
Can you explain how everybody throwing their money at nVidia lowers the costs? When they are already apparently at max capacity?
Everybody trying to build a data center at once raises the cost of every data center. Everyone competing for power has already raised power prices, and we've barely begun bringing this stuff online. Everyone demanding multiples of what nVidia is producing means nVidia isn't going to reduce prices any time soon.
Your use of "even more profit" also implies that you think that the AI world is making lots of money? nVidia is making lots of money. To a first approximation, everybody else involved has lost billions. Maybe not Apple. But everyone else you can name is deep in the negative on AI.
Why wouldn't they make money if they are the ones the money is being thrown at?
> Can you explain how everybody throwing their money at nVidia lowers the costs? When they are already apparently at max capacity?
Increasing supply lowers the cost; I'm unsure which part of this is surprising.
> Your use of "even more profit" also implies that you think that the AI world is making lots of money? nVidia is making lots of money. To a first approximation, everybody else involved has lost billions. Maybe not Apple. But everyone else you can name is deep in the negative on AI.
The companies using AI are making money from it. OpenAI will make money in the future but is losing it now because of R&D and training.
Are companies releasing more software with fewer developers? If the answer is no, then productivity has not improved. It might SEEM like it improves because you're able to produce more code and spend less time programming, but that might not be the case in actuality.
From what I've seen, AI is very good and very popular, but it hasn't improved programming productivity in a meaningful way. The bottlenecks are unchanged, so writing more code faster doesn't help anything. A lot of companies let a lot of employees go due to AI, and their product velocity has noticeably gone down and their quality is noticeably worse.
A literal "virtuous cycle", if you will.
I believe that Ed Zitron plays a very important gadfly role in all of this.
However, if you look at his subreddit, it appears that he has created a 100% AI denier following. My gut makes me worry for them, but I wonder where the truth really lies.
For those of us involved with code, Sonnet 3.5 was a revelation, and Opus 4.5 scared the crap out of many, and converted some of us to believers in "the exponential."
Now, in other verifiable output fields like finance/spreadsheets in general, Claude is scaring even more people.
I really do respect Ed, but I feel like his schtick might make too many people complacent, thinking that this is all fake. Also, I could be wrong.
Why worry?
Also, I'm pretty sure I have seen a similar comment before.
> Why worry?
Because, instead of telling people that "it's all a bubble" (while he might be partially correct), he is still creating a confirmation-bubble following. He is creating a denialist community, whereas his followers might be best served by learning how to use the tools.
I am not sure about any of this.
Why worry? Because if he is wrong, then there is a chance that we will be killing the animals in our zoos to feed the people. This is something that really happened during the last Great Depression.
I worry about the plight of my fellow man as it affects me.
I see another impassioned, fervent cry daily about how it’s all going to collapse like a house of cards (as if smart money doesn’t know it and some podcaster is the first to realize data centers take a long time to build).
But unless I missed something, I didn't see him disclose any financial positions that would indicate he's betting on the collapse he is so clearly calling for.
I think that should be required to take any of these articles seriously - if your portfolio doesn’t reflect your stated opinions, your stated opinions aren’t what you really believe.
"Smart money" is an illusion that only holds up until they do something in an area you're familiar with. Many _many_ financiers know about finances and fuckall about anything else.
The claims and citations in the article stand on their own, without the additional burden of trying to place a bet on facts in a rigged game driven by mob mentality. How does one even bet against private equity data center deals?
If he believed any of this, there are myriad opportunities to put his money where his mouth is. But I have a feeling the author doesn’t believe it enough to do so. Which tells me all I need to know about his own opinion of his work.
He used the f-word 11 times to describe a catastrophe in the making. This is engagement bait.
Come on, man. How can you defend someone who wrote this:
> every executive forcing their workers to use AI is a ghoul and a dullard, one that doesn’t understand what actual work looks like, likely because they’re a lazy, self-involved prick.
That’s the scion exposing the deep state AI house of cards?
It's true this article isn't the most concise, but it might not deserve the flagging that people are giving it on here.
> And that number is a fucking disaster.
To me that number is reassuring. I was worried that all this "Sam Altman plans to build 12 GW per month" stuff was going to fry the planet.
If the whole lot so far runs on less than one Zaporizhzhia nuclear power plant, then that actually seems kind of reasonable. Also, I'm skeptical that the path to AGI is ever-larger data centers; there seems to be much that could be done in terms of better algorithms and design. I don't think human brains update every neural connection for each word when training, which is probably partly why they get by on 20 W rather than 20 kW.