I must admit the idea has a lot of appeal: some people are seeing good ROI, so the problem seems to lie less with the tool than with the tool user.
The company needs to have the right culture and ability to integrate leading technology, whatever it is.
I'd argue that's the majority of companies, though, which still spells trouble for the AI bubble.
I've been a part of enough failed ERP implementation projects to know that there are actually very few enterprises out there that collectively have their shit together and are good at implementing technology.
If AI can't solve that problem for them either, it'll just join the long list of boring enterprise tech that some successful companies use to great effect and others ignore or fail to adopt. That isn't exactly a multi-trillion-dollar industry to live up to the current hype.
Back when Agile was still just a way of working (not a mantra), adopting it exposed exactly these glaring troubles in the overall (human) pipeline. But it took quite some time, months of work, to really show.
This seems like the same thing, only much faster.
Actually, drawing some parallels might be interesting: Agile was/is also touted as "the silver bullet".
My first job out of university was with a startup that had been recently acquired by IBM. I have never seen true agile since. Similarly, maybe 1 in 100 actually do it really well and properly. I should see if I still have the slide deck I made to explain it to the rest of IBM; it would make for a great blog post, and tying in this thread? Chef's kiss.
And it's not a problem specific to software engineering. Every engineering discipline deals with it: when you manufacture physical things, for example, tolerances, safety factors, etc. are all tools for dealing with reality being messy.
Stock up on dry beans and rice. See if your parents have a spare room. Don’t buy anything expensive. This bubble is gonna hurt.
Though I'm more concerned about the effects of the current political climate than about the "AI" bubble popping. If that goes south, nothing will be normal for a long time.
There’s minimal risk to the decision makers. Meanwhile, every one of us peons is significantly more at risk of losing our jobs whether we could be effectively replaced with these AI tools or not because our own C-level execs decided to drink the snake oil that is the current bubble.
See Microsoft's recent "We don't understand how you all are not impressed by AI."
In the case of MS, you're right, Satya isn't going to fall on his own sword. They will just continue to bundle and raise prices, make it impossible to buy anything else (because you still need the other tools), and then pitch that to shareholders as success: "Look how many people buy Copilot" (even though it's forcefully bundled into every offering we sell).
This time around the ramifications might be larger, but it will still mostly be felt by those inside the bubble.
Personally, I would rather experience slight discomfort from the crash, followed by a normalization of hardware prices and the job market, than continue bearing the current insanity.
I don't know if I'd call myself a booster or a skeptic. Loosely speaking, I'm all in at the office, but what would I actually spend?
On the one hand, my, I dunno, thought-leader-y hat would say: why wouldn't I spend 10k/head on them? These tools are amazing; they could double someone's output.
On the other hand, they're also these infinite toys that can do anything, and you can spend a lot of money trying all the things. Does that actually provide value? Actual rubber-hits-the-road monetary value, on a per-head basis? If you're really focused and good, probably. But if you're not disciplined, you can spend a bunch of money on tokens, then spend a bunch of money on the new features/services in production, and spend a lot of your colleagues' time cleaning up the mess.
And this is all human stuff. This is all assuming the LLM works relatively perfectly at interpreting what you're trying to do and doing it.
So, like: does it actually provide a benefit of X dollars per engineer per year? Because it might not; it could in fact go the opposite way.
If it really augments my output, sure. Currently I just watch my tokens drop to zero within 3-4 days of using it and then have to wait a month for them to reset, because I won't pay for more parrot tokens. It speeds up some small things, but the effect on my overall speed isn't all that noticeable.
Much like stacks and stacks of badly written web frameworks made things like collapsing a comment on new Reddit cost 200 ms of JavaScript execution ( https://bvisness.me/high-level/burnitwithfire.png ), I can easily imagine people layering stuff together until the token burn is beyond insane.
I mean, just look at the Gastown repository. It's literally hundreds of thousands of lines of Go and .md files.
From the PwC survey:
> More than half (56%) say their company has seen neither higher revenues nor lower costs from AI, while only one in eight (12%) report both of these positive impacts.
So The Register article title is correct.
> It's a snowball effect that eventually builds bigger and bigger.
That's just wishful thinking based on zero evidence.
The hype train must go on, and I'm sure all employees are under strict NDAs, so we may never know.
So, what did you learn from that project?
For a few months I used Gemini Pro. There was a period when it was better than OpenAI's model, but they did something and now it's worse, even though it answers faster, so I cancelled my Google One subscription.
I tried Claude Code over a few weekends. It can definitely do tiny projects quickly, but I work in an industry where I need to understand every line of code and basically own my projects, so it's not useful at all. Doing anything remotely complex involves so many twists that I find the net benefit negative. Also, one of the normal side effects of doing something yourself is learning, and here I feel like my skills devolve.
I also occasionally use Cerebras for quick queries, it's ultrafast.
I also do a lot of ML, so I use Vast.ai, Simplepod, Runpod and others. Sometimes I rent GPUs for a weekend, sometimes for a couple of months; I'm very happy with the results.
How do you measure developer productivity? Code quality? Developer happiness? As far as I know, no one in the industry can put concrete numbers to these things. This makes it basically impossible to answer the question you pose.
There are of course many factors at play here, and a substantial percentage of CEOs report a positive ROI, but the fact that a majority don't shouldn't be dismissed on the basis of this being difficult to measure.
Or the fact that we are woefully unprepared for a peer conflict. We wasted how many trillions in the Middle East?
Personally I think AI is super useful, but at my job the amount of progress we’ve made has basically ground to a halt compared to before AI.
The reason is that the people who they chose to lead the new, most innovative “AI initiatives” were the least innovative most corporate drone-y people I’ve ever met. The kind of people who in 2025 would unironically say stuff like “we need to work on our corporate synergies”.
They never wanted innovation, they just wanted people to toe the line for as long as possible until they could jump the sinking ship.