What jobs don’t require using these tools by now?
Then there are those still using folders with timestamps.
Sadly, I'm still disagreeing while crypto kiddies are driving past me in Lambos. If it's the future of money, yes, we'll get there eventually, but like every technology shift, there's a lot of money to be made in the transition, not after. *
* I sold all crypto a few years ago and I'm a happier person :D
That is why I agree with the sentiment as well. I use AI a little. Not too much. And I'm as swamped with work as ever because my focus is on legacy stacks, where AI is really not strong.
Meanwhile, the main category of people who have consistently gotten rich off the "crypto revolution" were various scammers and pump-and-dumpers who have since moved on to meme stocks, AI content farming, and so on.
But I wouldn't use crypto as a benchmark because AI has more substance. We can debate if it's going to change the world, but you can build some new types of businesses and services if you have near-perfect natural language comprehension on the cheap.
I know a few people who got wealthy by being early to crypto. None of them had the correct reasoning at the time: They thought BTC was going to become a common way to pay for things or that “the flippening” was going to see worldwide currency replaced with BTC. They thought they’d be kings in a new economy but instead they’re just moderately wealthy with a large tax bill they’re determined to dodge.
I know far more people who lost money on crypto, though. Some were even briefly crypto-rich but failed to sell before the crash or did things like double down on the altcoin bubble.
The second group has gone quiet about their crypto, while the few people in the first group gloat and evangelize (because continued evangelization is necessary to keep their portfolios pumped). This creates an intense survivorship bias where it appears that all the crypto kiddies are wealthy, but a quiet mass of people who played with crypto are most definitely not.
Why not simply evaluate things instead of ignoring them until it's too late?
Sure, we don't have infinite time, but the fact that OP mentions these two things means the pattern has shown up often enough.
You can also accept certain things and be happy in life either way. There's no need to chase get-rich-quick schemes. Some are more privileged than others in being able to do this.
There’s this irony to the FOMO in crypto, which is people argue the “sensible” thing (it’s the future of money) to create FOMO for the insensible thing (it’s a lottery ticket). You’re right it’s too late to buy a lottery ticket, but the vision wasn’t a lottery, it was a medium of exchange!
AI assisted coding is the same way. I use it every day, but if I decided to stop and wait a year, I could still pick it up, probably more easily when the tools are better.
In fact, people who wait might do better than me because their mental model won’t be locked into a way of interacting that will be out of date in six months.
Wouldn’t it be ironic if all the early adopters were the losers because they liked the hacky nature of it? This happened to a lot of early computer adopters, low level programmers, etc.
And with AI... I am genuinely afraid my job will be automated. I'm trying to become a manager before we are relegated to minimum wage workers.
https://fortune.com/2025/03/04/amazon-ceo-andy-jassy-middle-...
Relevant XKCD: 598
It also has changed nothing in the way I do stuff. Checked it out, not for me, thanks but no thanks.
(Sick and tired of hearing how you made a UI in a couple of hours by directing the code when it still takes me the same couple of hours of coding.)
Oh wait…
Many people are still coding without AI and doing perfectly fine. When you design serious things, coding is not where most time is spent anyway. Maybe it'll become unavoidable at some point, by that time the experience will be refined and it'll be easier to learn.
Point is, it's never too late. If you don't need to be cutting edge on a new tech, it may not make sense to put the extra effort of early birds. If you put that effort, you better not do it for free.
> Might I be 7% more effective if I'd suffered through the early years? Maybe. But so what? I could just as easily have wasted my time learning something which never took off.
I do think it's a bad take though. Not all new trends are the same: the metaverse was an obvious flop and crypto hasn't found practical applications. AI isn't like those because it's already practically changed the way I get my job done.
It takes time to learn skills, and getting started earlier means more time to use them in your working life.
Once the tools and models stabilize more (as well as the pricing model), there's less risk in me learning something that is no longer relevant.
Except when I choose to wait on learning how to use AI tools effectively, I get told I am going to be "left behind".
If that were the case I’d hire pen guy.
Also, if your AI has a 20% error rate, you're not holding it right. You need to spend more time keeping it on rails - unit tests, integration tests, e2e tests, local dev + browser use, preview deployments, staging environments, phased rollouts, AI PR reviews, rolling releases. The error rate will be much closer to 0%.
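One cheap version of those rails, as a sketch: characterization tests pinned around whatever the model touches, so a bad generation fails loudly instead of shipping. `parse_price` here is a hypothetical function invented purely for illustration, not from the original comment.

```python
# Guardrail sketch: pin down expected behavior with tests so AI-generated
# changes that break the contract fail fast instead of slipping through.
# `parse_price` is a hypothetical example function.

def parse_price(text: str) -> int:
    """Parse a price string like '$12.34' into integer cents."""
    cleaned = text.strip().lstrip("$").replace(",", "")
    dollars, _, cents = cleaned.partition(".")
    return int(dollars) * 100 + int((cents or "0").ljust(2, "0")[:2])

# Characterization tests: if a regenerated `parse_price` violates any of
# these, CI rejects the change before a human ever reviews it.
def test_parse_price_basic():
    assert parse_price("$12.34") == 1234

def test_parse_price_thousands():
    assert parse_price("$1,200") == 120000

def test_parse_price_whole_dollars():
    assert parse_price("7") == 700
```

The same idea scales up through the layers the comment lists: unit tests catch local regressions, integration and e2e tests catch interaction bugs, and staged rollouts catch whatever is left.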
I think the point the author is making is not that it's all useless, but a rebuttal of the overly simplistic idea that the plot of Amount of AI vs. Productivity in All Situations is a hockey stick chart.
Being told to be excited about something when clearly all they're saying is "it works sometimes, other times not so much. I'll keep checking and when it's good enough for me I'll get on board" is aggravating.
Yet my pay stays the same, all my coworkers get fired, and Sam Altman gets all of their paychecks. Hrm.
That's a gross overestimate. 2x I would maybe believe.
Someone has to sign off your work and unless it's hard to write but easy to read, this is where the bottleneck currently lies.
This is the lazy guy's path, not the wise one.
That's the point of the blog post. If you can't even say right now whether it's for the better, then there's no reason to rush in.
As a freelancer I do a bit of everything, and I’ve seen places where LLM breezes through and gets me what I want quickly, and times where using an LLM was a complete waste of time.
Building a simple marketing website? Don’t waste your time doing it by hand - an LLM will probably be faster.
Designing a new SLAM algorithm? Probably LLMs will spin around in circles helplessly. That being said, that was my experience several years ago… maybe state of the art has changed in the computer vision space.
A very simple kind of query that in my experiences causes problems to many current LLMs is:
"Write {something obscure} in the Wolfram programming language."
I wasn't able to replicate this in my own testing, though. Do you know if it also fails for "Mathematica" code? There's much more text online about that.
My experience concerning using "Mathematica" instead of "Wolfram" in AI tasks is similar.
I was looking at trying to remember/figure out some obscure hardware communication protocol to figure out enumeration of a hardware bus on some servers. Feeding codex a few RFC URLs and other such information, plus telling it to search the internet resulted in extremely rapid progress vs. having to wade through 500 pages of technical jargon and specification documents.
I'm sure if I was extending the spec to a 3.0 version in hardware or something it would not be useful, but for someone who just needs to understand the basics to get some quick tooling stood up it was close to magic.
The question relevant for LLMs would be "how many high-quality results would I get if I googled something related to this", and for DICOM the answer is "many". As long as that is the case, LLMs will not have trouble answering questions about it either.
I've been impressed by how this isn't quite true. A lot of my coding life is spent in the popular languages, which the LLMs obviously excel at.
But a random robotics language dating to the '80s (Karel)? I unfortunately have to use it sometimes, and Claude ingested a PDF manual for the language that is hundreds of pages long, and now it's better at it than I am. It doesn't even have a compiler to test against, and still it rarely makes mistakes.
I think the trick with a lot of these LLMs is just figuring out the best techniques for using them. Fortunately a lot of people are working all the time to figure this out.
Even if your architectural idea is completely unique... a never before seen magnum opus, the building blocks are still legos.
This is actually where I would be most reluctant to use an LLM. Your website represents your product, and you probably don’t want to give it the scent of homogenized AI slop. People can tell.
If you decide on your own brand colors and wording, there’s very little left about the code that can’t be done instantly by an LLM (at least on a marketing website).
Without playing around with it, you wouldn't know when to use an LLM and when not.
This is an industry that requires continuous learning.
Why would I do that? Well, I wanted to understand more deeply how differences in my prompting might impact the outcomes of the model. I also wanted to get generally better at writing prompts. And of course, improving at controlling context and seeing how models can go off the rails. Just by being better at understanding these patterns, I feel more confident in general at when and how to use LLMs in my daily work.
I think, in general, understanding not only that earlier models are weaker, but also _how_ they are weaker, is useful in its own right. It gives you an extra tool to use.
I will say, the biggest "weaknesses" I've found are in the training data. If you keep your libraries up to date and use newer methods or functionality from those libraries, the AI will constantly fail to pick up on those new things. For example, Zod v4 came out recently and the older models absolutely fail to understand that it uses some different syntax and methods under the hood. Jest now supports the `using` syntax for its spyOn method, and models just can't figure it out. Even with system prompts and telling them directly, the existing training data is just too overpowering.
For example: Gemini became a lot better at a lot more tasks. How do I know? Because I also have very basic benchmarks, or let's say "things which haven't worked" are my benchmark.
Feels like a false dichotomy.
Have I become faster with LLMs? Yes, maybe. Is it 10x or 1000x or 10,000x? Definitely not. I think actually in the past I would have leaned more on senior developers, books, stack overflow etc. but now I can be much more independent and proactive.
LLM-based tools are a wide spectrum, and to argue that the whole spectrum is worth exploring because one sliver of it has definite utility is a bit wonky. Kind of like saying $SHITCOIN is worth investing in because $BITCOIN mooned as a speculative asset:
- I’m bullish on LLMs chat interfaces replacing StackOverflow and O’Reilly
- I could not be more bearish on Agents automating software engineering
Feels like we’re back at the Adobe Dreamweaver release and everyone is claiming that web development jobs are dead. The question isn’t if you’ve improved. It’s whether the path you took to your current improvement could have been shortcut with the benefit of hindsight. Given the number of dead ends we’ve traversed, the answer almost certainly is yes.
I truly believe so much of the anti-AI sentiment is the same as the Luddites.
They're often used as a meme now, but they were very real people, faced with a real and present risk to their livelihoods. They acted out of fear, but not just irrational fear.
AI is the same: it's unquestionably (to anyone evaluating it fairly) a huge boost to productivity ... and also, unquestionably, a threat to programmer jobs.
Maybe the OP is right about waiting, but to me whenever new tech is disrupting jobs, that seems like the best time to learn it. If you don't, it's not just FOMO as the author suggests ... it's failing to keep up on the skills that keep you employed.
> AI is the same: it's unquestionably (to anyone evaluating it fairly) a huge boost to productivity.
And yet, the only research that tries to evaluate this in a controlled, scientific way does not actually show this. Critics then say those studies aren’t valid because of X, Y, or Z but don’t provide anything stronger than anecdotes in rebuttal. It’s a ridiculous double standard, and it poisons any reasonable discussion to assert something is a fact and that anyone who disagrees is a hysterical Luddite, based on no actual evidence.
To paraphrase another analogy that I enjoyed, it’s a bit like when 3d printing became a thing and hype con artists claimed that no one would buy anything anymore, you could just 3d print it.
If a stat like that is not accurately measured, it's useless.
This is all very tiring and difficult. You can be significantly better than other people at this skill.
I judge "failing to keep up" by my ability to "catch up". Right now, if I search for paid courses on AI-assisted coding, I get a royal bunch for anything between $3 and about $25. These are distilled and converging observations by people who have had more time playing around with these toys than me. Most are less than 10 hours (usually 3 to 5). I also find countless free ones on YouTube popping up every week that can catch me up to a decent bouquet of current practices in an hour or two. They all also more or less need to be updated for relevancy after a few months (e.g. I've recently deleted my numerous bookmarks on MCP).
Don't get me wrong, LLM-assisted coding is disruptive, but when practice becomes obsolete after a few months it's not really what's keeping you employed. If after you've spent much time and effort to live near that edge, the gap that truly separates you from me in any meaningful way can be covered in a few hours to catch up, you're not really leaving me behind.
It may have reduced the time to an implementation, but based on my experiences I sincerely doubt the veracity of applying the adjective "working".
It's clearly a textbook example of survivorship bias.
In the 90s the same argument was directed at this new thing called the internet, and those who placed a bet on it being a fad ended up being forgotten by history.
It's rather obvious that this AI thing is a transformative event in world history, perhaps more critical than the advent of the internet. Take a look at traffic to established sites such as Stack Overflow to get a glimpse of the radical impact. Even on social media we have started to see the dead internet theory play out in real time.
And coding is the lowest of low hanging fruits.
It's worth noting that SO was declining well before ChatGPT launched. It seems more likely that the decline of SO was more driven by Google ranking changes to prioritise websites that served Google ads. Certainly I remember having to go down a few results to get SO results for a while, even when the top results were just copypasta from SO.
I don't think that's it. SO was the go-to page for troubleshooting, whose traffic was not exactly originating from web search. Also, the LLM-correlated drop in traffic is also reported by search engines. Stack Overflow just so happens to be a specialized service with a very specialized audience whose demand is perfectly dominated by LLM chatbots.
Allow me to introduce you to the dot-com boom, where everyone who bet on the internet went broke.
I can be much more specific about “everyone” if you’d like.
Almost all people are "forgotten" by history.
In any case, people who were not even born yet in the 1990s are using the internet today, very successfully, so clearly you can wait.
"It's rather obvious that this AI thing is a transformative event in world history" perhaps but it's not at all obvious how it's going to shake out or which bets are sensible.
I think you are missing the point, and also the very site you're posting on.
Look at the top 50 list of most valuable companies in the world. Over half of the total market value reported today is attributed to companies which were either dotcom startups or whose growth was driven by the dotcom growth period. Dismissing the advent of the internet as anything short of revolutionary is disingenuous, no matter how many zombo.com companies failed.
LLMs have the exact same transformative impact on humanity.
But this is begging the question.
Yes, we can see that the internet was radically transformative.
But you are arguing that this somehow proves that LLMs are too, when there's wildly insufficient evidence—either on where LLMs are going in themselves, or in the comparison—to credibly make that claim.
I can't really agree. I've never seen anything from an LLM that I would consider even helpful, never mind transformative.
How are you supposed to use them?
I am conservative regarding AI driven coding but I still see tremendous value.
It makes me want to ask you: do you ever see helpful things from your colleagues at all?
No, not at all. I may be using it wrong.
I put in "write me a library that decodes network packets in <format I'm working with>" and it had no idea where to start.
What part of it is it supposed to do? I don't want to do any more typing than I have to.
Why can't you just pass any of those to an AI?
That is a heavily symbolic exercise. I will "read" the spec, but I will not pronounce it in literal audible English in my head (I'm a better reader than that.)
I write Haskell tho so maybe I'm biased. I do not have an inner narrative when programming ever.
Actually writing code is the fun and easy bit.
Where LLMs are behind humans is depth of insight. Doing anything non-trivial requires insight.
The key to effectively using LLMs is to provide the insight yourself, then let the LLM do the grunt work. Kind of like paint by numbers. In your case, I would recommend some combination of defining the API of the library you want yourself manually, thinking through how you would implement it and writing down the broad strokes of the process for the LLM, and collecting reference materials like a format spec, any docs, the code that's creating these packets, and so on.
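A minimal sketch of what that looks like in practice, assuming a made-up packet layout (2-byte big-endian message type, 2-byte payload length, then payload): you pin down the API shape and a known-good fixture yourself, then hand the model the grunt work of filling in the remaining fields, edge cases, and tests. None of this comes from the original comment; it is purely illustrative.

```python
# Sketch: spec out the decoder API yourself, leave the grunt work to the LLM.
# The packet layout here is invented for illustration: header is >HH, i.e.
# a 2-byte big-endian message type followed by a 2-byte payload length.
import struct
from dataclasses import dataclass

@dataclass
class Packet:
    msg_type: int
    payload: bytes

def decode(frame: bytes) -> Packet:
    """Decode a single frame. Raises ValueError on truncated input."""
    msg_type, length = struct.unpack_from(">HH", frame, 0)
    payload = frame[4:4 + length]
    if len(payload) != length:
        raise ValueError("truncated frame")
    return Packet(msg_type, payload)

# A known-good fixture gives both you and the model a concrete target:
frame = struct.pack(">HH", 0x0101, 3) + b"abc"
pkt = decode(frame)
```

With the API, header layout, and a fixture fixed in advance, the LLM's job shrinks from "design a library" (where it flounders) to "fill in the remaining message types per this spec" (where it tends to do well).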
I don't agree. It can't write code at all, it can only copy things it's already seen. But, if that is true, why can't it solve my problem?
> The key to effectively using LLMs is to provide the insight yourself, then let the LLM do the grunt work
Okay, so how do I do that? Remember, I want to do ZERO TYPING. I do not want to type a single character that is not code. I already know what I want the code to do, I just want it typed in.
I just don't think AI can ever solve a problem I have.
Sure, maybe crypto changed some lives, but an entire industry? I think ALL of software dev is going under a transformation and I think we're past the point of "wait it out" IMO.
Or I'm wrong, but right now I'm being paid to develop a new skill professionally. Maybe the skill ends up not being useful - OK, back to writing code the old way then.
You're trying to make the point using Bitcoin, but in the early 2010s I had just over 14,000 of them, so I can quite clearly see a point in getting in early.
This underscores a main counter point: dipping your toes and casting a wide net often has a low cost, since back then you could mine (and even purchase) Bitcoin relatively inexpensively. If it hadn't worked out, then it wouldn't have hurt much at all.
If everyone had been talking about it like the casino that it actually is, then sure - some people made some good bets, and a lot of people made bad ones trying to get in early. Imagine being the person who sold all your bitcoin for whatever other stupid memecoin, to "get in early"?
It's not a real counter-argument, it's just "I had a lot of dumb luck on this one specific thing, aren't you silly for not guessing as well as me".
The answer was quite the opposite. I wanted to see if the technology lived up to the hype. The answer, unsurprisingly was no. If only Zuck had listened to me :-)
IMO it reads a little desperate and very much like the hype bros, but from the opposite side. Take a look at the articles if you don't believe me.
https://shkspr.mobi/blog/tag/ai/
- I'm OK being left behind, thanks!
- Unstructured Data and the Joy of having Something Else think for you
- This time is different
- How close are we to a vision for 2010?
- AI is a NAND Maximiser
- Reputation Scores for GitHub Accounts
- Agentic AI is brilliant because I loath my family
- Stop crawling my HTML you dickheads - use the API!
- Removing "/Subtype /Watermark" images from a PDF using Linux
- LLMs are still surprisingly bad at some simple tasks
- Books will soon be obsolete in school
- Winners don't use ChatGPT
- Grinding down open source maintainers with AI
- Why do people have such dramatically different experiences using AI?
- Large Language Models and Pareidolia
- How to Dismantle Knowledge of an Atomic Bomb
- GitHub's Copilot lies about its own documentation. So why would I trust it with my code?
- LLMs are good for coding because your documentation is shit
I would coin it Chill Guys, and I claim that on average they are more often correct than the hype bros. We all know the old saying that nobody was ever fired for selecting an established solution or technology. Being chill and staying safe is more beneficial in real business. Just wait until the hype has normalized, and start dabbling with the new when everyone else starts doing it too. You may not be the frontrunner of a new wave, but you can still ride it well. You are only really left behind when everyone has changed and your income starts drying up; until then, it's just business as usual.
And always remember: when ChatGPT started its hype, many were predicting Google's death and the end of web searches. Now, three(?) years later, Google is still around, search is still going strong, and Gemini is seriously competing with ChatGPT. Hypes come fast and die fast.
Ironically one might even get projects to fix the mess left behind, as the magpies focus their attention into something else.
In the case of AI, the fallacy is thinking that, even while riding the wave, everyone is allowed to stay around now that the team can deliver more with fewer people.
Maybe rushing out to the AI frontline won't bring in the interests that one is hoping for.
EDIT: To make the point even clearer: with SaaS and iPaaS products, serverless, and managed clouds, many projects now require a rather small team, versus having to develop everything from scratch on-prem. AI-based development reduces the team size even further.
Yeah, everyone is in panic mode - "they're killing all the horses", but one really needs to consider the similar historical events.
When ATMs were rolled out in the 70s, everyone assumed tellers were on their way out. What actually happened was counterintuitive: the number of bank tellers increased for the next few decades. ATMs lowered the cost of operating a branch, so banks opened more branches, which required more tellers. The teller's job also shifted from cash handling toward relationship-building. The predicted elimination took 40+ years to materialize, and even then it was gradual.
"Paperless office" is another example. Around the 1970s, futurists confidently predicted that computers would eliminate paper. Companies restructured workflows around that assumption. What really happened? Paper consumption actually increased dramatically - laser printers and desktop publishing created more paper demand, not less. The prediction wasn't wrong, just half a century early.
US horse population peaked around 21 million in 1915 and crashed to roughly 3 million by 1950. The tragedy wasn't that people prematurely killed off horses - it's that the infrastructure around horses (farriers, feed suppliers, carriage makers, stable hands) was devastated faster than those workers could adapt, but that also took decades.
Imagine if in 1910, someone sold all their horses expecting automobiles to arrive in their rural county within two years, and then cars didn't reach reliable rural infrastructure until 1940. That's what it feels like to me when companies lay off thousands of programmers because of AI.
Yes, AI may replace programmers, but what will probably happen is that the meaning of "programmer" will change. Yet that won't happen within a year or two, even with unforeseen advancements in AI research.
So, a decade of hanging by a thread, getting by and doubling down on CS, hoping that the job market sees an uptick? Or trying to switch careers?
I went to get a flat tire fixed yesterday and the whole time I was envious of the cheerful guy working on my car. A flat tire is a flat tire, no matter whether a recession is going on or whether LLMs are causing chaos in white collar work. If I had no debt and a little bit saved up I might just content myself with a humble moat like that.
Anecdata, but the few people I know who were looking to switch gigs all had multiple offers within a few weeks. One thing they all had in common was taking a very targeted approach with their search and leveraging their networks. Not spamming thousands of resumes into the ether.
Just finished a search - agree. The resume process is fundamentally broken, but a strong network makes it irrelevant. Lean on connections - there's a ton of opportunity out there.
guess it's death and destitution for introverts
edit: please explain the downvotes, i'm curious why you think i'm wrong
if what OP says is true, that today only networking works, then it easily follows that if for some reason you do not have a network, you don't get hired
This is really just a reversion to how things used to work, relying on human connections. People seemed to manage to get jobs 30 years ago just fine
If people put down the AI, and actually learn how to write a `for` loop, they would be more hire-able than 50% of candidates.
> "Guess it's death [...] for introverts"
There is a meritocracy somewhere in our capitalist system. Not everyone participates, but it exists.
of course if the process doesn't involve networking then we don't have a problem, we agree on that
That is not at all what I said. Please do not misrepresent.
I said they took a targeted approach *and* exercised their networks. Those are two separate things.
Right, so they applied to a couple of jobs and it worked for them?
I'm sorry, do you understand how uncommon and rare that is? sure, if their domain was REALLY niche and the jobs weren't publicly advertised, then i could see how that would work. but the experience is VASTLY different outside such niche cases
> It isn't hard.
you're not an introvert then
sounds like hell to me
I'm still working in tech, and likely will forever in a much reduced capacity. But pottery is my life now.
For an old dog like myself, it feels like an unjust rug pull.
I know many jobs are about giving partial access to secrets or insider knowledge etc but I simply can't see myself accepting that this is my value proposition.
No, let the pie grow. Let more people be able to do more things. Use the new capabilities to do even more. See how you can provide genuine value in the new environment. I know it isn't easy. There are many unknowns. But at least aspirationally I see that as the only positive way forward.
The same thing has happened to many jobs. 100 years ago being a photographer was a difficult skill. They must have felt a rug pull when compact cameras became mainstream and they were no longer called to take all family pictures. Surely the codex writers felt a rug pull when printing became widespread. Typesetters when people could use word processors on their PC with font settings. Prop designers and practical effects people when movies switched to vfx. Etc etc.
That's an incredibly uncharitable reading of the parent comment. At no point in history prior to maybe this year could you argue that working in software was gatekeeping, toll extracting, or rent seeking. Being a highly skilled craftsperson creating software for those who can't or don't want to is a very psychologically positive self identification. Lamenting that the industry is moving away from highly skilled craftspeople is also perfectly valid, even if you believe that it is somehow good for society, which is yet to become clear.
Yes, producing software was value. (It of course still is as of today; we are talking about what may be coming.) My plea is to continue searching for ways to contribute value. Don't resign yourself to the feeling that the only way to hold on is to stop others from knowing about or being able to use the skill-leveling tech. That makes one bitter and negative. Embrace it; aspire to be happy about it.
It's like getting scooped in science. In research, I always try to reframe it and be happy that science has progressed. Let me try to learn from it and pivot my research to some area where I can contribute something. Sulking about having been scooped does not lead to positive change and devalues one's own self-image.
We're about to pull the rug underneath all knowledge workers. This will disrupt wage earners lives. This will disrupt the economy.
You might feel great about when things become cheaper but remember that when things are cheap it's only because costs are low and when costs are low the revenues are low and when revenues are low salaries are low too. Keep in mind that one party's cost is other party's revenue.
The economy is ultimately one large circle where the money needs to go around. You might think of yourself a winner as long as someone else's salary drops to zero and you still get to keep your income but eventually it will be you whose income will also be disrupted.
Just something to keep in mind.
And we're going to pull the rug not just on individual knowledge workers but on businesses too. Any software company with a software product will quickly find itself in a situation where its software is worth zero.
Also this comment about gatekeeping is absolutely stupid. It's like saying trained doctors and medical schools are gatekeeping people from doctoring. It would be so much better if anyone could just doctor away, maybe with some tool assistance. So much fantastically better and cheaper? Right! Just lay off those expensive doctors and hire doctor-prompters for a fraction of the price.
It will put an end to the middle class entirely, but that’s the intent.
The reality is a lot of people who were formerly middle or upper middle class, and even some lower class populations will face steep, irreversible “status adjustment”.
I’m not talking about “we used to be able to take vacations and now we can’t”. I’m talking about “we used to be highly paid professionals now we’re viciously competing for low paid day labor (gig work) to hopefully be able to afford the cheap cuts this week”.
So I'm extremely bitter about this potential direction
To tie back to the actual article: if you believe a rug pull is imminent, then you've got to get off the rug. Idk, you have to make a decision, because we're certainly at a fork in the road. There's no guarantee that waiting will result in a better outcome, nor one saying it will be worse. There are always going to be winners and losers, and a lot of it is really just luck in timing. I guess, in reality, the careers we've built come down to a flip of a coin: stay on the rug, or get off the rug.
/I'm thinking of buying a welding truck and getting into that, then hiring a welder, and rinse and repeat until I have a welding business. There's plenty of pipe fence in my neck of the woods and I see "welder wanted" signs all over the place, so there's opportunity too.
So? Demand the source code. Run your own AI to review the quality of the code base. The contracting company doesn't want to do it? Fine, find one that will.
Interestingly, the model doesn't "know" that it's ignoring you. From its perspective, it has retrieved a "meaningful" pattern—virtual parameter names that probably fit common conventions it saw during training. Your actual request simply... wasn't documented.
Our software industry has specialized, for decades, in "rug pulling" / changing / "disrupting" other industries on a massive scale.
I find it pretty ironic when engineers make these statements in that context.
Do you think it's fair that when society moves underneath you, when the capitalist system shifts its tectonic plates, it's the individual who has to bear the cost of that?
And let's be clear: only software devs are just sucking it up. Do you think lawyers and doctors would allow themselves to be laid off en masse and be replaced with trainees who just prompt the computer?
Also, what will happen when high wage earners start losing their discretionary income? The whole service sector, for starters, will be shaken.
Just imagine some big tech company laying off 10k engineers making $0.3M per year each. That's $3B that disappears from incomes, and thus from the economy, and just stays in the pockets of the capital holders.
So then you have no choice but to seek alternative revenue streams (ads, data mining) and in fact this becomes the thing, since the original thing no longer produces a revenue.
Digital products such as "photoshop" have had value because people need a tool like that and there's only a limited number of competitors, i.e. scarcity. The scarcity exists because of the cost: the cost of creating "photoshop" limits how many "photoshops" exist. When you bring down the cost, you get more "photoshops", and as the volume increases, the value decreases. Imagine if you could just tell Claude "write me photoshop", go take a dump, and come back 30 minutes later to a running Photoshop. You wouldn't pay 200USD for a license then, would you? You'd pay 0USD.
If you now create a tool that can (or promises it can) obliterate the costs, it means essentially anyone can produce "photoshop". And when anyone can do it, it will be done over and over, at which point they're worth zero and you can't give them away.
The same thing has happened to media publishing (print media -> web), computer games, etc.
The problem is that when your product is worth zero, you can no longer make a business by creating your product, so in order to survive you must look into alternative revenue streams such as ads, data mining, etc. None of which are a benefit to the product itself.
Boy I can't wait for the equivalent of low effort high volume clickbait to take over software. Yay!
For any programming task at hand, one must have at least some reasonable grasp of formalism (boolean logic, predicate logic), then an understanding of software development concepts, your APIs, frameworks, language constructs, etc., and finally the domain knowledge. Most of this goes away when changing from coding to prompting.
I was just doing some computer graphics work myself with Signed Distance Fields, and Claude literally regurgitated code that I could just adopt (since it works) without understanding any of the math involved.
I'd say that prompting is at least two orders of magnitude easier than coding.
There’s really not much stopping changing tires from being automated away. Further standardization of tires or wage increases would probably do the trick.
There’s still plenty of software to be created. You’ll probably have to learn some ML tricks or whatever, but there’s nothing going away, just changing as software has always done.
Sounds like you've never changed a tire. Or at least not outside of a very controlled environment.
How are these put on in the first place on an assembly line?
"Oh, you think I've never changed a tire? Well here is my abstract high level understanding of the steps to changing a tire! And have you considered the quintessential controlled environment for putting tires onto cars?"
Car washes are automated even though they haven't answered the edge cases of how to wash your car when your car is rolled on its side or a terrorist is actively blowing up the equipment. They simply only operate when your car is right side up (and other conditions, like in neutral, wipers off, and a driver who is willing to not exit the vehicle) and when there aren't active bombings on the building. And other "edge" cases.
Just because there is a possibility for something to not work doesn't make it useless. Automated tire replacement could start with very rigid cases where it is applicable, and expand the scope slowly to allow more cases, like a bent wheel or poor weather.
I have even replaced car tires before and yet still have this opinion.
These are predictable jobs with very few variables that there is still no sign of automation replacing any time soon. They often don't suck as bad as people think. One of the most enjoyable jobs I had was on an assembly line, because my mind was mostly free to wander. It was almost like meditation.
There's a reason most people want a white collar job and send their kids to college instead of to such manual jobs.
Part of the reason for my prior comment is the clear fact that a not-insignificant percentage of white collar jobs are being massively devalued at the moment, which means many people who thought they'd be able to send their kids to college with income from such jobs won't.
Considering that the field of robotics is so far behind LLMs in terms of clear value outside of niche industrial applications, I think manual labor is about due for a resurgence. There may be some major rebalancing happening. The big question for laborers will be - as it has always been - what can I do that sucks the least but also allows me to pay for a decent life? Answers will vary.
But also, a lot of the manual labor is quite expensive and only affordable as long as there are white collar workers who can pay for fancy bathroom remodelings and landscaping and so on. I don't know how a big deluge of reskilled pipefitters and HVAC technicians will be able to find work. Will everyone just pay each other to do a bunch of handy work for each other?
However, besides a few trades that use unions/licensure/apprenticeship as an artificial supply limit, most trades are only limited by a willingness to do the work. A few decades ago, trade work was much less expensive, because supply was higher and many did their own DIY, which limited what prices the market would tolerate.
May not be as crazy of a thought as it seems on the surface. There are many different types of not-easily-roboticized manual work, doable by people of varying skillsets and physical abilities, which will continue to hold value due to our basic physiological needs.
The lower bound, or 'floor', of this value is not going to sink lower than group consensus among wide swaths of the population allows.
Programmers (and other white collar jobs) were able to luxuriously coast along the ZIRP era because capital (replenished twice via quantitative easing) was cheap and plentiful, and because the elites at the top had to pump huge amounts of money to create a shared fantasy of the "technological future" that validates the neoliberal era. Now that the reality of the actual "physical economy" (the economy of making tangible things) has clawed back at us because of that forbidden three-letter word (war), we all realize that doubling and tripling oil prices were actually dictating our lives rather than some "Skynet AI" crap, and thus our fantasy simulacra of "virtual" play-things have come to an end. Oh, and we all found out that most of SaaS was actually bullshit anyway. In fact, if it could be completely replaced by AI, then it was already pretty bullshit in the first place.
So, for smart STEM people uninterested in programming and only looking for a stable career, I think they would be better off just doing engineering work that's a bit more tangible, like robotics, manufacturing, shipbuilding, construction, etc. (Or anything related to war, but only if you're able to stomach what you're doing.) If you don't like to sit all day for a salary, then niche blue collar work can also be a good option, since general-purpose robotics (Physical AI?) is still too far away because of many, many issues that are too long to explain here. I still think that if you like programming you should stick to it in the long run. There will be a very cold winter because of the combination of LLMs, the AI bubble pop, and general economic depression, but for those who survive this era there will be an opportunity because of the shortage of skilled programmers (since no one bothered to hire juniors after the pop, no one will grow to become seniors!). Computing will still be with us forever, just not in the way investors thought it was going to "engulf the world".
But something tells me you won't do that.
I think it's important to know and practice your passion, even if you have to work on something different to pay the bills. You can only be good at something if you really like it, and you never know what opportunity you'll stumble onto if you're ready for it.
Bad idea. Automotive repair is barely a moat, because you don't need that much training to work those jobs. There's a lot of people who want to do it. And cars are definitely susceptible to recessions - if fewer people are buying cars, if there's a shift to transit, if your locality builds more pedestrian-friendly infrastructure, if businesses that use work vehicles are forced to close, then your demand drops and everyone already in the field is forced to compete with one another.
For moats, look for things that are complex (not everyone can do it), licensed and always needed.
You need to study for 3 years to be a car mechanic. And even then you'll need baby sitting for a while in the auto-shop because no one trusts the new guy fresh out of school, with good reason.
>There's a lot of people who want to do it.
No, there isn't. What do you mean by a lot of people? Automotive repair is a blue collar job with intense physical strain; you're exposed to chemicals you shouldn't be, there will be hearing loss involved no matter how much protection you have, and it doesn't even pay all that well, considering the risks involved and the amount of training you need. And no, it's not because the market is saturated with car mechanics; it's because auto repair shops are under a lot of pressure to be cheap. Job listings are full of car mechanic openings; you'll never be unemployed.
All the "if's" presented are solved by relocation and not even that much of it, except for this:
>if fewer people are buying cars
Then the skills are easily transferable to other vehicles. But fewer people are buying cars already; they use Uber, which still involves a car. A car that needs 10x more repair time than the car you drive daily.
So yeah, auto repair is a good moat. It's complex, not everyone can do it, it's not licensed in most cases (unless you work for a brand or a niche) but there's reputation involved, and it's always needed. It just doesn't pay all that well, especially not compared to what I see on HN's monthly whoshiring.
MS Access and so many more "you won't need a programmer again" dev tools over the decades blazed the trail.
But well, I feel like you too.
When Maps apps came around, people totally lost the brain muscle for navigating. Using LLMs is no different: people over-reliant on these tools are simply ngmi. They are going to be totally reliant on their favorite billionaire being willing to sell them competency via their thinking machines.
I would caution everyone to consider whether the billionaires who are screaming that you're going to be left behind, laid off, and redundant if you don't (pay them to) use their brain-nerfing machine actually have your best interest at heart.
You're not going to be left behind.
Secondly, I find that correct usage of LLMs can accelerate learning. My brother used an LLM to generate flash cards for a driver's license test. I use LLMs to digest a ton of text and debug issues that would have been impossible to find (I would have given up). Have it generate, explain, review, or compare code or general writing.
It is like having access to a wise old man in every field. They may have inferior reasoning capability, and their memory may falter, but they have seen everything in their corpus and are great at pointing you to external references. And you can delegate busywork to them.
You cannot run useful models on consumer hardware; sorry, this is wrong and will be the case for at least 10 years, until GPUs with 48GB VRAM depreciate. This is a limitation of LLM architecture. You cannot post-train a <1T param model to a place where it competes with frontier model capability. If you think your 70b param models (which still require 5k in GPUs) are useful, you are being dishonest with yourself.
It costs about $60-80k to run a 1T param model at your house, like Kimi 2.5, which is the only size of model that's going to get anywhere close to a foundation model's capability. Nobody is going to spend close to $100k to run a mediocre open source model as opposed to spending $200.00 a month. It's a ridiculous notion.
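The memory arithmetic behind this claim can be sketched roughly. This is a back-of-envelope estimate only: the overhead factor for KV cache and activations is an assumption, and real deployments vary with quantization scheme and context length.

```python
def vram_needed_gb(params_billion: float, bits_per_weight: float,
                   overhead: float = 1.2) -> float:
    """Rough memory footprint for holding model weights in GB,
    with an assumed ~20% overhead for KV cache and activations."""
    bytes_per_weight = bits_per_weight / 8
    return params_billion * 1e9 * bytes_per_weight * overhead / 1e9

# A 1T-param model even at aggressive 4-bit quantization needs on the
# order of hundreds of GB, far beyond any single consumer GPU:
print(round(vram_needed_gb(1000, 4)))  # ~600 GB
# A 70B model at 4-bit lands right around the 48GB-card territory
# mentioned above:
print(round(vram_needed_gb(70, 4)))    # ~42 GB
```

Under these assumptions, the gap between "fits on a prosumer card" and "needs a $60-80k rig" is roughly an order of magnitude of parameters.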
There are ones that are distilled with better reasoning models or abliterated for whatever you need, and the multimodal features work... fine.
Just started running local LLMs this week, and it is pretty much overkill for what anyone in my family needs. All it really lacks is some tools for it to use, which I am putting together now.
To be fair, the best model I have used is claude sonnet. I don't really know what I am missing with opus.
Broadly speaking, I think this is a wise assessment. There are opportunities for productivity gains right now, but I don't think it's a knockout for anyone using the tech, and I think that onboarding might be challenging for some people in the tech's current state.
It is safe to assume that the tech will continue to improve in both ways: productivity gains will increase, onboarding will get easier. I think it will also become easier to choose a particular suite of products to use too. Waiting is not a bad idea.
OTOH, tfa specifically said:
> I feel the same way about the current crop of AI tools. I've tried a bunch of them. Some are good. Most are a bit shit. Few are useful to me as they are now. I'm utterly content to wait until their hype has been realised.
So it's not like he's being deliberately ignorant, rather simply deliberately slow-walking his journey.
Kind of weird that the tools also incorporate addictive gambling games' UX design. They're literally allowing you to multiply your output: 3x, 4x, 5x (run it 5 times for a better shot at a working prompt). You're being played by billionaires who are selling you a slot machine as a thinking machine.
Yes, it's hard to see how, at this moment in time, "Anybody can write code with an LLM" is so different from "Anybody can make money in the stock market."
The underlying mechanisms are completely different, of course, and the putative goal of the LLM purveyors is to make it where anybody really can write code with an LLM.
I'm typically a nay-sayer and a perfectionist, but many not-so-great things become and stay popular, and this may fall into that category.
> Kind of weird that the tools also incorporate addictive gambling games' UX design.
It's unclear whether it started out this way, but since it's obviously going this way, it is certainly prudent to ask if some of this is by design. It would presumably be more worrisome if there were only a single vendor, but even with multiple vendors, it might be lucrative for them to design things so that "true insider knowledge" of how to make good prompts is a sought-after skill.
Why? Because all the folks involved have created a technology in search of a problem to solve. That never, ever works. Steve Jobs of all people left this piece of wisdom behind. It's amazing how few actually apply it.
The internet was never this; its origins go back to the need to be able to transmit data (DARPA). And this is what we still do now...
And to be fair, Steve Jobs was a master of taking things that had been invented elsewhere, and making them work well enough to foster a demand.
But your point stands. Who made the most money, Xerox PARC, or Apple?
Next thing I'm waiting on is building a new server for a powerful locally hosted LLM in 5 years. No need to go through the headaches and cost of doing it now with models that may not be powerful enough.
If you test specific features of those solutions over time, you see very inconsistent results, lots of lies, and seemingly stable solutions that one-shot well but suddenly experience behaviour changes due to tweaks on the backend. Tuesday's awesome agent stack that finally works is loading totally differently on Thursday, and the debugging is "oh, sorry, it's better now" even when it isn't. Compression, lies, and external hosting are a bad combo.
Sometimes I imagine a world where computers executed programs the same way each time. You could write some code once and run it a whole calendar month later with a predictable outcome. What a dream, we can hope I guess.
For a single dev team, vibe coding is great. Write specs, write plans, write code. I know what the project wants and needs because I'm the target market.
At work, I haven't written more than a few lines of code since December. But I work with other people vibe coding this same project. Lots of changing requirements and rapid iteration. Lots of mistakes were made by everyone involved. Lots of tech debt. Sure, we built something in 2 mos that would have otherwise taken us 6 mos, but now I'm fixing the mess that we caused.
I think the critical difference is the attitude towards our situation. My boss said to fix the AI harness so we can vibe code more confidently and freely. But other bosses might cut their losses and ban vibe coding. Who's right? I dunno. In both cases I'd just do what my boss wants me to do. But it's not that I don't want to be left behind. I don't want to lose my job. There's a difference.
You didn't actually build it in 2 months.
It's strange that the goalpost seems to have moved from "AI is net negative to productivity" to "only 2x improvement isn't worth it"
Prompt engineering as a specific skill got blown out of proportion on LinkedIn and podcasts. The core idea that you need to write decent prompts if you want decent output is true, but the idea that it was an expert-level skill that only some people could master was always a lie. Most of it is common sense about having to put your content into the prompt and not expecting the LLM to read your mind.
Harnesses aren’t really a skill you learn. A harness is how you get the LLM to interact with something. It’s also not as hard as the LinkedIn posts imply.
Mixture of Experts isn’t a skill you learn at all. It’s a model architecture, not something you do. At most it’s worth understanding if you’re picking models to run on your own hardware but for everything else you don’t even need to think about this phrase.
I think all of this influencer and podcast hype is giving the wrong impression about how hard and complicated LLMs are. The people doing the best with them aren’t studying all of these “skills”, they’re just using the tools and learning what they’re capable of.
Keeping in mind the LinkedIn posters/audience (marketers/recruiters), it probably was quite hard for most of them.
Consider that.
The LLM person having a blast is compelled to push everyone to see what they see. If they have a leadership role at their company, then the getting-left-behind drum does get banged, in the form of "AI-native company transformation" initiatives.
Now reviewing the 1k lines it generated and making sure it's secure, that's going to take me longer than writing it by hand.
For example, in one of the projects I'm working on, I'm using the VSA pattern. I have the list of 50 to 75 features I need to implement and what "categories" they slot into, I have all of the frameworks and libraries picked out, and I have built out "feature templates" with all of the boilerplate setup (I'm reusing these over multiple projects going forward). For each of the features, all I need to do is run
'ftr new {FEATURE_TYPE} {FEATURE_NAME} {OUTPUT_FOLDER}'
and then plug in the domain-specific business logic.
I'll most likely use Claude/Codex/Whatever to write out some of my tests, but the majority of the 'boilerplate' is already done and I'm just sorting out the pieces that matter / can't be automated.
Am I missing something huge with these tools?
Don't get me wrong, for doing reverse engineering they're great helpers and I've made a tonne of progress on projects that had been languishing.
Then I can look at the output and say things like "what if the data is lowercase?" or anything else I suspect they may have missed. A few rounds of these and I have a pretty good feel for the quality of the resulting checks, while taking a few minutes of my attention/tens of minutes of wallclock time to do.
I have a more detailed example here: https://www.stavros.io/posts/how-i-write-software-with-llms/
I'd share all my plans but I once found that the LLM used my actual phone number as test data, so I don't share those any more, just in case.
Then you still need to learn how to use the tools to speed up reviewing the code.
My CEO/CTO :)
It just sounds like a giant scheme to burn through tokens and give money to the AI corps, and tech directors are falling for it immediately.
It has often been the case for technologies though, like “now we’re doing everything in $language and $technology”. If you see LLM coding as a technology in that vein, it’s not a completely new phenomenon, although it does affect developers differently.
One person hand coding and one person having Claude still results in the same output that is compatible with each other.
This is more like mandating that you use vim. I’ve never seen something like that before in 20+ years.
Absolutely not, a lot was done just because it was pushed as the current fashion and advertised to be solving problems that either weren’t applicable to the concrete use case or that it didn’t actually solve.
AI isn’t like this because the final output is the same as hand coding.
93% of Developers Use AI Coding Tools. Productivity Hasn't Moved. - https://philippdubach.com/posts/93-of-developers-use-ai-codi... - March 4th, 2026
Do you have an empirical study to support that your employer should buy you a laptop and possibly a monitor or two to help your productivity?
If there's no study, why should we believe it?
It's like "A study found that parachutes were no more effective than empty backpacks at protecting jumpers from aircraft."
https://www.npr.org/sections/health-shots/2018/12/22/6790830...
When engineers demand evidence that AI is productive, but not that having laptops and monitors are productive, it screams confirmation bias. "I'm right, you're wrong" as a default prior.
I would emphasize that I don't think there's anything particularly wrong with the converse either. If an executive is just absolutely convinced that dual monitors are a scam and nobody needs more than their laptop screen, they can run their company that way, and I'm sure there are many successful companies with that philosophy.
Are you under the impression that we don't bother to empirically prove things that seem obvious, like the safety benefits of parachutes? You don't think parachute manufacturers test their designs and quantify their performance?
This is repeatedly used as an example in the medical community about the limits of randomized controlled trials. This isn't some impression - your impression that such evidence exists is wrong.
There might be some parachute company tests about effective of velocity, etc., but there are no human trials.
Why? Because that would be unethical.
It's a good thing "randomized controlled trials" aren't the only kind of empirical evidence, then.
We know the limits of how fast a human can safely land. Parachute manufactures have to prove that their designs meet the minimum performance specifications to achieve a safe speed. This proof is not invalidated by the fact that it doesn't include throwing some poor bastard with a placebo parachute out of an airplane to demonstrate that he dies on impact.
Also, the answer to your original question is yes. There are numerous studies showing that multiple monitors improve productivity.
I don't think that's making the argument you think it is.
There is no point at which this argument will not be made. Therefore, it is a useless argument.
False. The article is from the 4th of March 2026, less than a month ago.
Even moving from assembly language to compiled languages was not as much of a step change.
I'm currently tracking exactly two numeric metrics: total MAUs (to track the aforementioned), and total DAUs (to gauge adoption and rightsize seat-licensed contracts.)
If the benefit is there people will use it or get left behind, there's no sense having a mandate that people resentfully try the new tooling.
Imagine you had a developer who writes Java using vim. It sounds insane but they are just as productive as everyone else. Then you mandate they have to try IntelliJ every quarter, just to see if maybe they like it now. You're just going to piss them off and reduce their productivity by mandating their workflow.
FWIW in the face of these kind of mandates I have been using tokens but ignoring the output. So it's costing my employer money and they have a warped metric of whether the tool is actually useful.
If AI makes an employee 10X more productive they get a slight pay raise maybe, but the company makes substantially more money or gets substantially more output. So there is a large difference in incentives.
Great question. That is absolutely the goal. My take is that building with LLMs - at least with the current popular harnesses like Claude Code - is a skill on its own, and people need time to develop that skill and also to figure out where these tools might fit into their workflows.
> Are you willing to pay down the likely debt of some individual contributors never clicking with this or being outright resentful towards the technology or the mandates?
I'll be honest as I have been elsewhere in the thread: A few years from now, I don't know what the state of the technology or its adoption will be, or what expectations of software engineers at large will be.
But for the foreseeable future, yes, absolutely, I'm willing to give engineers the time and space to develop familiarity and comfort with the tools, as long as they're engaging in good faith.
edit: oops, didn't mean to dodge the last part of your question (re: resentment): I genuinely don't know the answer to how I'll handle that, but I'm also sure it'll happen. Hopefully I'll still be in a position to speak publicly about how one can deal with those challenges.
edit 2: also, thank you for the thoughtful questions and dialogue.
What you're actually doing here, from my POV, is incentivizing your employer to use more invasive metrics when they tried to stay hands-off and mandate the absolute bare minimum of "uh, give it a shot and see if you think it's useful right now."
The analytics that Claude Enterprise exposes are far more intrusive than I would want to be subjected to as an engineer, so I rolled out a compromise. I don't even track who the active users are, currently.
But maybe you're right, and there are enough people sabotaging the metrics out of spite, that there's a reason they provide the other data.
I hope that the engineers in my org are more mature than that, and would be willing to just say "I'm not currently using it", but thanks for giving me something to think about.
Where I work there has certainly been that kind of discussion: "we need to use AI for this, because no offense, but you are simply not fast enough". And this from people who do not understand software development and have never worked with it. They have only read the online stuff about 20X speeds and FOMO. (And my workplace is generally quite laid back and reasonable. I am sure many other places are much more aggressively steered.)
That’s not the bare minimum, though. The bare minimum is: “if you are meeting or exceeding your job expectations, great work, keep using the tools that are working for you.”
To a productive employee, merely saying “just try out AI, it might help” feels like the boss saying “just try out astrology or visit a psychic for a reading. You might find it interesting.”
If you have accurate metrics to gauge developer productivity then use them.
But you don’t because if you did you’d be a billionaire.
What you have is metrics that can measure developer busyness. If you use those metrics all you’ll do is run your good devs off and keep the ones who can’t find new jobs.
So you’ll have to do what anyone who manages software teams has always done and trust your line managers to manage your devs.
When it comes to people wasting tokens, most people aren’t going to do it with the intent to fuck your metrics. But if you tell people you are measuring something, they will find a way to increase that metric whether it results in anything productive or not.
Go ahead and spend more time collecting more granular metrics spying on your employees. Apparently there aren't more valuable things for you to do than micromanaging individual developers.
"If the colleges were better, if they really had it, you would need to get the police at the gates to keep order in the inrushing multitude. See in college how we thwart the natural love of learning by leaving the natural method of teaching what each wishes to learn, and insisting that you shall learn what you have no taste or capacity for. The college, which should be a place of delightful labor, is made odious and unhealthy, and the young men are tempted to frivolous amusements to rally their jaded spirits. I would have the studies elective. Scholarship is to be created not by compulsion, but by awakening a pure interest in knowledge. The wise instructor accomplishes this by opening to his pupils precisely the attractions the study has for himself. The marking is a system for schools, not for the college; for boys, not for men; and it is an ungracious work to put on a professor."
-- Ralph Waldo Emerson
Re: some of them being upset about it- probably. Some people are also upset about being required to use Jira. I personally dislike using Okta.
I engaged in the thread in good faith, and am transparent about what I'm doing and why. I also clarified that part of the job in my org is experimenting with these tools.
The reason we force people to use Jira is because it only works if everyone uses it.
AI doesn’t work like that. If it does enhance productivity 50% then use will spread and the expectations of your line managers will naturally go up and the holdouts won’t be able to keep up.
Or only the exceptional ones will. And in that case why do you care how they do it?
In my experience, AI out of the box is at first a useless gimmick - until someone starts seriously playing with it and defines a skill file for integrating it with some internal tool. And another person starts playing with it and figures out that AI is pretty good at using another internal tool, but only if the tool runs in --silent=1 mode by default, so as not to confuse the AI with too much logging output. And a third person figures out that it's actively dangerous to let AI use some other internal tool - but hey, there's a safer alternative, which happens to perform better too. And pretty soon you end up with an ecosystem of business-specific scripts and .md files and skills and MCPs that's actually helpful 85%+ of the time. But the only way to get there is to get devs and power users tinkering with it.
But even assuming this is the case, you don’t create enthusiastic power users with threats (implicit or explicit) and metric tracking. The only thing that does is force people to do the minimum to keep their job.
Not necessarily. A carpenter's job is to make things, not to use specific tools and keep up with the latest tools used for repairs. Tools can be suggested, but telling a carpenter what tool to use definitely falls under micromanagement.
>Some people are also upset about being required to use Jira
Jira's job is to report metrics to management. That's implicit to the job. Telling people how to perform their tickets is micromanagement. The whole point of a non-junior employee is trusting that they can estimate and accomplish their tasks.
Who does this?
A friend is a team lead in an org that's mandating vibecoding via "Devin", a lesser-known player that an "architect" chose after a shallow review. The company also has endemic process issues and simply can't do deployments reliably; it's behind the times in methodology in every other respect. Higher-ups are placing their trust in a B-list agentic tool instead of fixing the problems.
Anyway, I wouldn't be caught dead working at either of those two shops even before the AI rollout, but this is what's going on in the IT underworld.
[EDIT] Oh, and much of your post rings true for my org. They operate at a fraction of the speed they could because of organizational dysfunction and a failure to use what's already available to them in processes and tech, but they're rushing toward LLMs, LOL. Yeah, guys, the slowness has nothing to do with how fast code is written, and I'm suuuuure you'll do a great job of integrating those tools effectively when you're failing at the basics....
LLM-generated code hits all the right notes: it's done fast, in great volumes, and it even features what the naysayers were asking for. Each PR has 20 pages of documentation and adds some bulk to the tests folder, where it can sit looking pretty. How wonderful! Hell, you can even do that "code review" some nerd was always complaining about: just ask the bot to review it and hit that merge button.
Then you ask the bot to generate the commands again for the deploy (what CI pipeline?) and bam! New features customers will love. And maybe data corruption.
A firm that is led by people who can envision, very clearly, revenue-generating and cost-reduction projects - wins. Writing code by hand is absolutely irrelevant. Who fucking cares. The former is what matters.
Code-generation acceleration only matters when those prerequisites are met. How did Apple go from the verge of bankruptcy to where it is today?
All I'm seeing is that most people are not smart at all - no wonder they are so impressed by LLMs! They can't think straight. I only see this getting worse over time. Perhaps that's the actual goal.
It was truly quite rare to have such well-honed manual processes though, the "average" place had a lot of elements that were far from perfect but still benefited after the computerization dust had settled. Then at the opposite end of the spectrum were companies where everything was an absolute shitshow, maybe since the beginning.
That's kind of where Conway's Law comes from, if you benchmark against a manual shitshow that has built up over the years, and replace it with a computerized version, you get a shitshow on steroids. The only other choice would have been to spend the appropriate number of years manually undoing the shitshow before making any really bold moves.
Now AI can really take things to a whole 'nother level, not just on steroids but possibly violating Conway's Law . . . squared.
I suppose it's better than counting lines of code, though.
Cloud had a very similar vibe when it was really running advertising to CIO/CTOs hard. Everything had to be jammed into the cloud, even if it made absolutely no sense for it to be run there.
This seems to come pretty frequently from visionless tech execs. They need to justify their existence to their boss, and thus try to show how innovative and/or cost cutting they can be.
If your CEO doesn't look like a taxi dispatcher, he's just flapping his wings around waiting for a food pellet.
100% accurate - some of us are old enough to have lived through a few of the mini-revolutions in between the mega-revolutions of Internet/Web in the 1990s and now AI/LLM in the 2020s.
We are in the "stupid phase" of adoption still. C-level people have to follow the herd and they are being evaluated on keeping up with everyone else. Idiotic mandates are a way to cause things to happen short-term even though everyone knows long-term it will have to be re-done.
Consultants gonna make a looooooooot of money this coming decade.
Now you can make a perfectly tailored resume, apply to 50 jobs in a day, and it's not unexpected to not get any response from those in 2 weeks. You don't know if it's your resume, the company, or the economy. And no one wants to admit the latter two are problems.
Not to mention the utter disrespect these days. There's no decorum in many of these "professional" settings, when normally you want your interview process to show off your best face.
I'm working on building something to address this. That's all I'll say lol.
I bet we could replace nearly all the CEOs in the country with chatgpt controlling a ceo@thatcompany.com email and nobody would notice.
But think of how much profits will improve by not paying $tens of millions to employ a CEO!
This started literally two weeks ago and a couple of days ago I talked to one of the admin people who wanted an update on the progress I'd made with sanding off some of the rough edges of the very rough implementation that the managing partner had put in place (he bought a Mac Mini, put OpenClaw on it, then gave it admin access to a whole pile of stuff!) I said I needed a couple more days. "Okay," she said, "but I need this quickly, because we're firing people next week."
They have literally gone from no agentic AI, to discovering OpenClaw, to firing people, in a two-week time span.
When economists say that the predicted job losses as a result of AI have not yet shown up in the data, I'm genuinely befuddled. Either we don't have long to wait to start seeing them, or there's something wrong with the data, because you can't tell me what I just described above is an isolated phenomenon.
I also have to say: I've always enjoyed working with this client, but this experience has been a huge turnoff on a number of different levels.
They had to hire a bunch of them back less than two months later. The speed-ups were approximately nil and making the editors edit AI slop all day long had them all close to quitting.
They didn't even wait to see if there were any actual benefits, they just blindly fired a bunch of people based on marketing lies. I can only assume they're the same sorts who fall for Nigerian Prince scams.
I’d have guessed the most annoying part would be that you’re assisting them in a harebrained scheme to terminate some people’s employment.
Funny enough, I got laid off last month, yes I’m a tech guy, now they apparently regret it because they are now scrambling to find a replacement to do the tech tasks!
TBH, I’m happy I got laid off because I’m finally building something I wanted to use.
It's actually kinda useful in some cases, but the UI is terrible and it needs to integrate much better with existing tools that are superior to it for specific purposes, before I'll be happy using it. I'd say the productivity gains are a wash, for me, so far. Plus it's entirely too memory-hungry, I'd just come to accept that a text editor takes a couple GB now (SIGH), and here it comes taking way more than that.
But is this something that is best done top to bottom, with a big report, counting tokens? Hell no. This is something that is better found and tackled at the team level. But execs in many places like easy, visible metrics, whether they actually help or not. And that's how you find people playing JIRA games and such. My worst example was a VP who decided that looking at the burndown charts from each team under them and using their shape as a metric was a good idea.
It's all a natural sign of a total lack of trust, and of thinking you can solve all of this from the top.
I’ve seen people use Notepad, and I’ve seen people who are so good at vim that they look like they’re editing code directly with their minds.
Your particular example is extreme, and my guess is the coworker is just not great at debugging. I use Claude all the time for finding bugs, but it fails fairly frequently. I think there’s probably an advantage to having some people who don’t use it that often, so you have someone to turn to when it fails.
I’m definitely not exercising my debugging skills as much as I used to and I’m fairly confident they’ve atrophied.
And ideally a sample large enough to capture any wasted time from dead ends in other tasks where the tool may actually fail to solve the problem.
I’ve definitely lost a couple hours here and there from when it felt like I was right on the verge of CC fixing something but never actually got there and finally had to just do it myself anyway.
Most execs didn't get where they are by being truly helpful and adding value to the company. They played the game long enough to know that politics trumps accomplishment. The rest is the ability to weave a good story (be it slightly or completely exaggerated).
It's not even about trust. It's about incentives in a structure that is dog-eat-dog. Rugged individualism in a corporate structure is a self-defeating prophecy. But it's inevitable when executives extract value from the company instead of raising the tide for all ships. And shareholders reward it.
But for those top layers, I’ve never seen so much FOMO in all my life. We’re a very slow-moving company, but they act like we’ve got two weeks to go “AI first” or we’re dead in the water. I’ve never seen such a successful hype cycle. I’m pretty sure it’s bots that are accelerating it so far beyond a normal hype cycle.
Right so you are going to be left behind whilst the ground keeps shifting under you, given the models are non-deterministic and continuously changing?
There was a big rush of prompt engineers. Where are they now? Nobody even refers to 'prompt engineering' anymore.
The best thing to do is wait for a steady state. What's going on is insane... a slow implosion of the code base.
This is exactly what's happening. The top 5 or 6 companies in the S&P 500 are running a very sophisticated marketing/pressure campaign to convince every C-suite downstream that they need to force AI on their entire organization or die. It's working great. CEOs don't get fired for following the herd.
S&P 500 Concentration Approaching 50% - https://news.ycombinator.com/item?id=47384002 - March 2026
> No, of course there isn't enough capital for all of this. Having said that, there is enough capital to do this for at least a little while longer. -- Gil Luria (Managing Director and Analyst at D.A. Davidson)
OpenAI Needs a Trillion Dollars in the Next Four Years - https://news.ycombinator.com/item?id=45394071 - September 2025 (8 comments)
Patrick Boyle has a video on this, in case you care for the details.
If you broaden the comparison (only a little bit) it looks suspiciously like employees being forced to train their own replacement (be that other employees, or factory automation), a regular occurrence.
Yes, they tend to be incredibly gullible about certain things, over-simplistic and over-confident, but also very "agile" when it comes to sweeping their failures under the rug and moving on to keep their own necks in one piece. At this point, even the median CEO knows AI has been way overhyped and that they over-invested to a point of absolute financial insanity.
Their first line of defense against the pressure to deliver is to mandate that their minions use it as much as possible.
We spent a fortune on this over-rated Michelin star reservation, and now you kids are going to absolutely enjoy it, like it or not goddammit!
In this case, every executive is terrified of being "left out" of the AI race. As we saw with the mass layoffs across companies, most CEO decision-making is just adherence to herd behavior. So it is literally better for execs to have shoveled a ton of money into 'strategic' AI initiatives and had them fail than to risk the remote chance of some other exec or company succeeding with an 'AI-enabled transformation'.
What makes it even more fun is that nobody really has a good understanding of how to measure the ROI of AI. Hence we have people burning a lot of money due to FOMO and no easy way of measuring the outcome, which is usually how the foundations for good Ponzi schemes are laid.
This is unlikely to end well. However, as usual, it's us, the common plebs, who will suffer regardless of outcome.
OTOH, it’s an attempt to address a real problem. There are people who are in fact falling behind (I’m talking literally editing code in Notepad), and we can either let them get PIPed eventually, or try to bring them along. There is a real “activation energy” required for learning new tools, and some people need an excuse/permission. I'm not saying that token count is a GOOD signal, but I haven’t heard many better ideas.
Take the layoffs, for example: nowadays they're attributed to AI, or so companies say. But just a year or two ago there were plenty of layoffs too, and people said "it's because of the high demand during COVID, and now it's over", or Ukraine, or inflation. And before that there were many layoffs with another easy explanation: "Oh, COVID and supply chains!" And earlier still, maybe something else.
Surely there are also economic booms, but when did the whole world suddenly start taking companies' public statements at face value (and just a few companies at that, with no real income, only VC money), while nobody shows us real data about what's actually happening? E.g. the companies saying they fired 10K people due to AI: how much of their budget did they actually redirect to AI? How many products are actually being built? Is productivity the same? Do customers think support is suddenly amazing, or has it seriously dropped in quality? Or is there no change at all? Is it a company like KFC, your local hardware chain store, a financial institution, a truck manufacturer, or another AI company with funding, using products from another funded AI company, which uses yet another AI company's products, all the way down to the power suppliers?
To me it seems that it's definitely impacting things, and it's a cool technology for being more productive (it helps me a lot daily, for example, but it's not like my life really changed), but the other things I haven't seen yet.
Another point: each actual AI-generated app is either something akin to a toilet game or not really working (like the C compiler). So where are the amazing, complicated enterprise apps fully built via agents? In banks, in government, in apps that respect GDPR and are actually secure, but proudly built only or mostly with agents? The only ones (not even secure ones) are other AI apps for doing AI stuff, yet the whole claimed value is making the "real" economy more productive, and that hasn't materialized anywhere yet. People still struggle with Word, or AWS infra, or debugging why some specific user can't log in with their custom auth provider in some esoteric region with its own laws, audits, and GDPR variant.
So one side says it's basically a tool from God and that they've never created more stuff, while the other group (people analyzing blood work, delivering food, writing reports, etc.) uses it a bit or not at all, and 95% of the problems they had are still there, with some new ones added. I'm also afraid most of them just write their emails better, or with more volume, while no more real work gets done.
So yeah, maybe my confusion simply lies in the fact that I have a real job and nobody can keep up with all the slop generated online anymore. I'm open to feedback, or to learning otherwise.
Exactly this: "Jensen Huang says he would be 'deeply alarmed' if his $500,000 engineer did not consume at least $250,000 of tokens" : https://www.businessinsider.com/jensen-huang-500k-engineers-...
Sometimes it is better to get into things early because it will grow more complex as time goes on, so it will be easier to pick up early in its development. Consider the Web. In the early days, it was just HTML. That was easy to learn. From there on, it was simply a matter of picking up new skills as the environment changed. I'm not sure how I would deal with picking up web development if I started today.
That said, your point about the leverage of learning HTML and the web in the early days compared to now rings true. Pre-compiled isomorphic TypeScript apps are completely unrecognizable compared to the early days of index.html.
Are the bootcampers better developers? Probably not. But they still were employable and paid relatively the same.
Many developers who picked up the web in the early years struggle with (front-end) web development today. It doesn't matter whether they fetched jQuery or MooTools from some CDN, as was done in the mid-00s. Once the tooling became too complicated and ever-changing, they couldn't keep up as front-end dilettantes. It required committing as a professional.
If you started today, you'd simply learn the hard way, as it's always been done: get a few books or register for a course. Carve some time every day for theory and practice. All the while prioritizing what matters the most to get stuff done quickly right now, with little fluff. You will not learn Grunt, Bower, and a large array of historic tech. You'll go straight for what's relevant today. That applies to abstractions, frameworks, and tooling, but also to the fundamentals. You'll probably learn ES6+ and TS, not JS WAT. A lot of the early stuff seems like an utter waste of time in retrospect.
This is true for all tech. If you knew nothing about LLMs by the end of this year, you could find a course that teaches you all the latest relevant tricks in 5 to 10 hours for 10 bucks.
The best professionals did not fall for the insanity of the modern front-end dilettante and continued hacking shit without that insanity.
> You will not learn Grunt, Bower, and a large array of historic tech. You'll go straight for what's relevant today.
which will be outdated "tomorrow" just like grunt/bower... are looked at today
> A lot of the early stuff seems like an utter waste of time in retrospect.
This couldn't be further from the truth. If you learned JavaScript early, like really learned it, that mastery gets you far today. The best front-end devs I know are basically JavaScript developers; everything else is "tech du jour" that comes and goes, and the less you invest in it, the better off you'll be in the long run.
> If you knew nothing about LLMs by the end of this year, you could find a course that teaches you all the latest relevant tricks in 5 to 10 hours for 10 bucks.
Hard disagree with this unless you are doing simple CRUD-like stuff
Snark aside, there's a bit of a false dichotomy (I think) at work here. Whenever or wherever your jumping-in point into $something is, it will always pay dividends to learn the fundamentals of that $something well, and unless you interact with older iterations of that $something, you'll never have to bother learning the equivalent of Grunt, Gulp, Stylus, Nunjucks, and so on for that $something.
With that being said it's also good to put aside time once a year to check out a good recommended (and usually paid) course from an established professional aimed at busy professionals.
As for LLMs I feel it's slowly becoming a thing big enough where people will have to consider where to focus their energy starting with 2027. Kinda like some people branched from web development into backend, frontend and UI/UX a good while back. Do you want to get good at using Claude Code or do you want to integrate gen AI features at work for coworkers to use or customers/users? It's still early days just like when NodeJS started gaining a lot of traction and people were making fun of leftpad.
"Front-end professional" and "no tooling" have been mutually exclusive propositions since the early 2010s. You either learned to use tools or you were out of the loop.
> which will be outdated "tomorrow" just like grunt/bower... are looked at today
Not really. Historically, the main problem with front-end development has not been change itself, but the pace of it. That's how it ties in with the current discussion about the (now) ever-changing terrain of LLM-assisted coding. Front-end development is still changing today, but it's coalescing and congealing more than it's revolving. The chasms between transitions are narrowing. If you observe how long Webpack lasted and how familiarity with it carried over to Vite, it's somewhat safe to expect that the latter will last even longer and that its replacement will be a near copy. Someone putting in time to learn front-end skills today might reap the benefits of that investment for longer.
> if you learned Javascript early, like really learned it, that mastery gets you far today.
I did. I got a copy of the Rhino book 4th ed. and read it cover to cover. I would not advise to learn JS today with historical references. JS was not designed like most other languages. It was hastily put together to get things done and it had a lot of "interesting", but ultimately undesirable, artifacts. It only slowly turned into a more sensible standard after-the-fact. Yes, there are some parts that are still in its core identity, but a lot in the implementation has changed. Efforts like "Javascript: The Good Parts", further standardization, and TS helped to slowly turn it into what we know today. You don't need to travel back in time for that mastery. Get a modern copy of the Rhino book and you'll be as good as the best of them.
Being a good professional developer means getting the primitives and the data model not horribly pointed in the wrong direction, so it's extremely helpful to be aware of those primitives. And the argument that "nobody is better off knowing assembly as a primitive" doesn't hold, because, as said, the web is literally still HTML files. It's right there in the source.
> The web is special in this sense, it's intentionally long-lived warts and all. So the fundamentals pay outsized dividends.
Fundamentals pay dividends, but what makes you think that what you learn as an early adopter are fundamentals? Fundamentals are knowledge that is deemed intemporal, not "just discovered".
The historical web and its simplicity are as available to anyone today as it was back then. People can still learn HTML today and make table-based layouts. HTML is still HTML, whether you learned it then or today. But if back then you intended to become a professional front-end developer, you would still have to contend with the tremendous difficulties that some seem to have forgotten out of nostalgia. You'd soon have to also learn CSS in its early and buggy drafts, then (mostly non-standard) JavaScript (Netscape and IE6) and the multiple browser bugs that required all kinds of hacks and shims. Then you'd have to keep up with the cycles of changing front-end tools and practices, as efforts to put some sense into the madness were moved there. Much in all that knowledge went nowhere since it was not always part of a progression, but rather a set of competing cycles.
Fundamentals are indisputably relevant, but they're knowledge that emerges as victorious after all the fluff of uncertainty has been left behind. Front-end development is only now settling into that phase. With LLMs we're still figuring out where we're going.
You're right, fundamentals are distilled, so to think they are free just by getting in early is likely backwards. And earning one's professional chops doesn't stop or start based on when you enter.
Web dev definitely is nostalgic. I miss the early days, but I also conveniently erased IE6, binding data to HTML, and the need for Backbone and jQuery to do anything. Hmmm, yeah, it doesn't matter when you start; it's all a grind if you dig deep enough.
Also known as PTSD-induced amnesia, haha. We all tried to forget.
Picking up the web early didn't help with the latter. I spent most of my early time memorizing tips and tricks that only applied to old browsers. I didn't pick up the fundamentals till I went back to school for CS and took a networking class.
How HTML, CSS, and JavaScript come together was extremely relevant to developers 20 years ago and still is today.
I do support and agree with the parent comment (see the discussion), but I credit getting into web development when it was raw and open with paying dividends for me. Today's ecosystem is opaque in comparison. You don't think there's more friction today?
And yes, understanding them is still relevant. But when I started, I was spending more time memorizing the quirks of IE6 than learning how JavaScript, CSS, and HTML come together.
I think if you start directly in React, sure, you don't learn the layer below it. But there's nothing inherent about starting today that forces you to start directly with React. You could start by building a static webpage. And if you did, it would be easier and more fundamental than doing the same thing 20 years ago, because you can ignore most of the non-standard browser quirks.
Remember all the hoopla over how people needed to be "prompt engineers" a couple of years back? A lot of that alchemy is now basically obsolete.
Think about the hoops you had to jump through with early GenAI diffusion models: tons of positive prompt suffixes (“4K, OCTANE RENDER, HYPERREALISTIC TURBO HD FINAL CHALLENGERS SPECIAL EDITION”) bordering on magical incantations, samplers (Euler vs. DPM), latent upscalers, CFG scales, denoising strengths for img2img, masking workflows, etc.
And now? Most people can just describe the desired image in natural language, and any decent SOTA model can handle the vast majority of use cases (gpt-image-1.5, Seedream 4, Nano-banana).
Even when you’re running things locally, it’s still significantly easier than it was a few years ago, with options like Flux and Qwen that handle natural language, along with a nice, intuitive frontend such as InvokeAI instead of the heavily node-based ComfyUI (which I still love, but understand isn't for everybody).
The goal posts will move.
*I should qualify that "using" CC in the strict sense has no learning curve, but really getting the most out of it may take some time as you see its limitations. But it's not learning tech in the traditional sense.
Projects as simple as "set up a tmux/vim binding so I can write prompts in one pane and run claude in the other". Fails.
I've been coding for over 20 years.
If there is no learning curve, why doesn't it work for me? You can't say I'm not using it right, because if that was true, then all I need to do is climb the learning curve to fix that, the curve that you say doesn't exist.
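For reference, the tmux half of that project really is tiny when done by hand. A minimal sketch, assuming the `claude` CLI is on your PATH and run from inside an existing tmux session (the key choices are arbitrary; the bind-key lines, minus the leading `tmux`, could also go in ~/.tmux.conf):

```shell
# prefix + C: split the window and start Claude Code in a new right-hand pane,
# leaving the editor in the left pane.
tmux bind-key C split-window -h 'claude'

# prefix + P: ask for a line of text at the tmux prompt, then type it into
# the right-hand pane (pane .1) followed by Enter.
tmux bind-key P command-prompt -p "prompt:" "send-keys -t .1 '%%' Enter"
```

That's the whole "write prompts in one pane, run claude in the other" setup; the vim side is just whatever you already run in the left pane.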
I'm not sure why it isn't working for you. Maybe your expectation is a perfect one-shot or else it has zero value, and nothing in between?
But my advice is to switch gears and see the "plan file" as the deliverable that you're polishing, rather than the implementation. It's planning and research and specification that tend to be the hard part, not yoloing solutions live to see if they'll work -- we do the latter all the time to avoid 10 minutes of planning.
So, try brainstorming the issue with Claude Code, talk it through so it's on the same page as you, ensure it's done research (web search, docs) to weigh the best solutions, and then enter plan mode so it generates a markdown plan file.
From there you can read, review, or tweak the plan file. Or have it implement the plan. Or implement it yourself. But the idea is that an LLM is useful at this intermediate planning stage without tacking on additional responsibilities.
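To make the shape of that deliverable concrete, here's a sketch of what such a plan file might contain. Every name, path, and step below is invented for illustration; the point is that the plan, not any particular diff, is the artifact you polish:

```shell
# Entirely illustrative: a hand-polished plan file written before letting
# the agent (or yourself) touch the implementation.
cat > PLAN.md <<'EOF'
# Plan: add rate limiting to /api/upload

## Research
- Middleware chain lives in server/middleware.ts
- Token bucket chosen over fixed window (tolerates bursts)

## Steps
1. Add a RateLimiter class (token bucket, keyed per API key)
2. Wire it into the middleware chain before auth
3. Unit tests: burst behavior, refill rate, per-key isolation

## Out of scope
- Distributed limiting (single instance only for now)
EOF
```

Once a file like this survives your review, the implementation step is close to mechanical, whoever (or whatever) ends up doing it.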
I think by "no learning curve" they are referring to how you can get value from it without doing the research you'd need to use a conventional tool. But there is a learning curve to getting better results.
I learned my plan file workflow just from Claude Code having "Plan Mode" that spits out a plan file, and it was obvious to me from there, but there are people who don't know it exists nor what the value of it is, yet it's the centerpiece of my workflow. I also think it's the right way to use AI: the plan/prompt is the thing you're building and polishing, not skipping past it to an underspecified implementation. Because once you're done with the plan, then the impl is trivial and repeatable from that plan, even if you wanted to do the impl yourself.
I'm way past the point of arguing anything here, just trying to help.
This is exactly the workflow that works very well for me in Cursor (although I don't use their Plan Mode - I do my version of it). If you know the codebase well this can increase your speed/productivity quite a bit. Not trying to convince naysayers of this, their minds are already made up. Just wanted to chime in that this workflow does actually work very well (been using it for over 6 months).
I've been reading a book about the history of math and at some points in the beginning the author pointed out how some fields undergo a radical change within due to some discovery (e.g. quantum theory in physics) and the practitioners in that field inevitably go through this transformation where the generations before and after can't really relate to each other anymore. I'm paraphrasing quite a bit though so I'll just recommend people check out the book if they're interested: The History of Mathematics by Jacqueline Stedall
And the aforementioned VS Code video, if I remember correctly: https://youtu.be/dutyOc_cAEU?si=ulK3MaYN7_CPO76k
It was depressing watching all of this unfold over the last few years, but now I'm taking on more projects and delivering more features/value than ever before. That was the reason I got into software anyways, to make good software that people like to use.
> the generations before and after can't really relate to each other anymore
Yeah, good point. In some ways it's already crazy to me that we used to write code by hand. Especially all the chore work, like migrating/refactoring, that's trivial for even a dumb LLM to do. It kinda feels like a liability now when I'm writing code, kinda like how it feels when the syntax highlighting or type-checker breaks in the editor and isn't giving you live feedback, so you're surprised when it compiles and runs on the first try.
I remember having a hard time imagining what it was like for my dad to stub out his software program on paper until his scheduled appointment with the university punch card machine. And then sure being happy that I could just click a Run button in my editor to run my program.
That's what's being asked of me in my last two jobs. Vibe code it, if it's bad just throw it away and regenerate it because it's "cheap". The only thing that matters is that you can quickly generate visible changes and ship it to market.
Out of frustration I asked upper management (in my current job), if you want me to use AI like that then I'll do it. But when it inevitably fails, who is responsible? If there's no risk to me, I will AI generate everything starting today, but if I have to take on the risk I won't be able to do this.
Their response was that AI generates the code, and I'm responsible for reviewing it and making sure it's risk-free. I can see that they're already looking for contractors (with no skin in the game) who are more than willing to run the AI agents and ship vibe code, so I'm at a loss as to what to do.
Because LLMs are not actually good at programming, despite the hype.
I think a decent place to start is: given a small web app, give it a bug report and ask it what causes the bug.
There isn't? Then why is it that whenever devs have tried it and not achieved useful results, they're told that they just haven't learned how to use it right?
If people really counted all the time they spend coddling the AI, trying again, then trying again and again to get a useful output, then having to clean up that output, they would see that the supposed efficiency gains are near zero, if not negative. The only people it really helps are people who were not good at coding to begin with, and they will be the ones producing the absolute worst slop, because they don't know the difference between good and bad code. AI is constantly trying to introduce bugs into my codebase, and I see it happening in real time with AI code completion. So no, you aren't "holding it wrong"; the other people are no different from the crypto bros who were pushing blockchain into everything and hoping it would stick.
Exactly. I counted and reported my results in a previous thread [0].
So then Claude starts dissecting the instructions. I start writing some code.
After a while Claude is done, and I've written about two or three dozen lines of code. Claude is way off, so I have to think about why and then write more instructions for it to follow. Then I continue coding.
After a while Claude is done, and I've written about three dozen more lines of code. Claude is closer this time, but still not right. Round 3 of thinking about how Claude got it wrong and what to tell it to do now. Then I continue coding.
After a while Claude is done (yet again), and I've written a lot more code and tested it and it's working as needed. The output Claude came up with is just a little bit off, so I have it rework the output a little bit and tell it to run again.
I downloaded the resulting code Claude wrote and compared it to my solution, and I will take my solution every single time. Claude wrote a bloated monstrosity.
This is my experience with "AI", and I'm honestly not loving it.
It does sometimes save me time converting code from one language to another (when it works), or implementing simple things based on existing code (when it works), and a few other tasks (when it works), but overall I end up asking myself over and over "Is this really how developers want the future to be?"
I'm skeptical that these LLM-based coding tools will ever get good enough to not make me feel ill about wasting my time typing instructions to them to produce code that is bloated and mostly not reusable.
And writing those instructions when I race it... it's more cognitive effort for me than coding!
If you were the type of person who makes tiny toy apps, or you worked on lots of small already been done stuff, you'd love doing this. It would speed you up so much.
But if you worked on a big application with millions of users that had evolved into its own snowflake through time and use, you'd get very little from it.
I think I probably could benefit from looking at existing open source solutions and modifying them a lot of the time, and I kinda started out doing that at first. But eventually you realize that even though starting with something can save you time, it can also cost you a ton of time so it's frequently a wash or a net negative.
If you ever try teaching someone something from the absolute ground up, you will quickly realize that a huge number of things you now believe are "standard assumptions" or "obvious" or "intuitive" are actually the result of a lot of learning you forgot you did.
I use LLMs pretty regularly, so I'm familiar with the kinds of tasks they work well on and where they fall flat. I'm sure I could get at least some utility from Claude Code if I had an unlimited budget, but the voracious appetite for tokens even on a trivially small project -- combined with a worse answer than a curated-context chatbot prompt -- makes its value proposition very dubious. For now, at least.
* I considered trying Opus, but the fundamental issue of it eating through tokens meant, for me, that even if it worked much better, the cost would dramatically outweigh the benefit.
The early adopters started years ago and they've seen improvements over time that they started attributing to their own skill. They tell you that if you didn't spend years prompting the AI, it will be difficult to catch up.
However, the exact opposite is happening. As the models get better, the need for the perfect prompt starts waning. Prompt engineering is a skill that is becoming obsolete faster than handwriting code.
I personally started using Codex in March and honestly, the hardest part was finding and setting up the sandbox. (I use limactl with QEMU and KVM.) Meanwhile the agentic coding part just works.
I waited until it seemed good enough to use without having to spend most of my time keeping up with the latest magical incantations.
Now I have multiple Claude instances running and producing almost all of my commits at work.
Yes, with a lot of time spent planning and validating.
Mistakes are less costly in the beginning and the knowledge gained from them is more valuable.
Over-sharing on social media. Secret / IP leaks with LLMs. That kind of thing.
I agree:
FOMO is an all-in mindset. Author admits to dabbling out of curiosity and realizing the time is not right for him personally. I think that's a strong call.
And even if your product is genuinely great, distribution is becoming the real bottleneck. Discovery via prompting or search is limited, and paid acquisition is increasingly expensive.
One alternative is to loop between build and kill, letting usage emerge organically rather than trying to force distribution.
Most of my AI usage comes from doing things I don't enjoy doing, like making a series of small tweaks to a function or block of code. Honestly, I just levelled the playing field with vim users and it's nothing to write home about.
Could have fooled me, the way some people manage to confuse themselves with it
I don’t think folks are taking seriously the possible worlds at the P(0.25) tail of likelihood.
You do not get to pick up this stuff “on a timescale of my choosing”, in the worlds where the capability exponential keeps going for another 5-10 years.
I’m sure the author simply doesn’t buy that premise, but IMO it’s poor epistemics to refuse to even engage with the very obvious open question of why this time might be different.
We have no reason to believe that they won't keep an eye on this.
Little to nothing about AI tools so far suggests that one can't just as easily pick up the skills later. Tools that will get "exponentially better" will almost certainly be unrecognizable to someone desperately engaging with them now, for no other reason than the sake of "having 1-2 years of experience".
Someone might reasonably choose to bet on the upside. That doesn't imply that everyone else ought to fearfully hedge.
Feels to me like there are at least one or two more paradigm shifts coming in how AI gets used which will make current tools obsolete.
As one example, I think we will eventually get GUI dashboards to manage AI agents which will be easier to use than current CLI tools.
> There are 16,000 new lives being born every hour. They're all starting with a fairly blank slate.
Not long ago we were ridiculing Gen Z for not knowing why the save icon looks like a floppy disk.
Do you want to feel like that in the next 5-10 years?
My last job was as a cable technician - making house calls to fix wifi, satellite TV, and phone issues, mostly for elderly residents. The majority of them were computer and phone illiterate. They were slow adopters of fast-moving technology, and many of them did not know how to operate their devices after we (UI/UX/hardware/software engineer 'we') changed them.
I wonder if this also has contributed to the elderly loneliness problem - sure, it's probably mostly related to physical companionship, acceptance of aging, etc., but the world that they knew (in general, and the technological world they grew up in) is no longer recognizable.
My mother has a phone, but only uses it to call. She has never needed a computer even though I spent my teenage years glued to one. But I have like 1 percent of a skill in cooking.
If you started in early webdev, you learned lots of tricks that don't benefit a modern webdev, e.g. SOAP, long polling, the JSONP workaround, and so on.
Many of the LLM frameworks will be seen similarly. MCP is already kinda heading in the obsolete direction IMO, as skills took over.
But there’s some stuff that I don’t bother to explore in depth because my time is finite and I don’t really need it. And any LLM tooling is probably easier to learn than a random JS framework. Vim’s documentation is probably longer than Cursor’s.
I think it's a luxury to be able to ignore a trend like AI. Crypto was fine to ignore because it didn't really replace anyone, but AI is a different beast.
I think they want to sell that perception, but the biggest thing the tech execs want in their SWEs is fear.
Fear that makes us stay in our jobs, even when raises and bonuses stagnate, because the market is scary with all the "AI layoffs" (which have largely been regular downsizing with the AI label slapped on).
Fear that makes us use the LLMs and then put in extra hours when we don't see the 10x productivity gains that they expect.
Fear that makes us erode our own skills and become dependent on these gatekeepers to maintain even a base level of productivity.
So much of the advertising and discussion around AI is based around fear. It's inevitable. It will take your job. Render you useless. It will render humanity useless. You better get in now, or you'll both lose your job and could end up in a virtual hell (https://en.wikipedia.org/wiki/Roko%27s_basilisk).
FWIW I do client work on the side. Full time client work has always been more draining than just having a regular job IME. Maybe I just can't find the right clients, but that's not something I have to worry about when I work for a company.
Client work is hard, but you have to decide if the freedom to work the way you want is worth it.
But I know not everyone is in my shoes which is where my comment is coming from.
There are loads of BS tools out there of course but I don’t use that many tools.
Once you have a good social position, or at least one you're happy with, you stop doing this, and you grow ever more irritated at others doing it ... because it's your social position that they're coming after. And they're younger, more motivated and hungrier. More than that, a decent chunk of these people want a better social position, even if that means taking yours.
The only reason the numbers aren’t even more tilted is because people stopped hiring juniors 2 years ago. But they’re out there, and if there’s a new technology around that makes them vastly more productive than the seniors today, there’s nothing stopping companies from hiring them.
You do have to drag stubborn people, kicking and screaming, into the future or they will continue using old tech. The article is framed in the past tense: "someone tried", "the crypto grift was". As if it's not currently swallowing the world. I guess he is so maximally sensible that he self-assesses faster than MS and realizes bitcoin just isn't for him every time.
He has a strange, hyper-specific definition of utility and productivity; (wrote my MSc, had fun) doesn't count.
for example (dodging the whole full-self-driving controversy), Tesla cars have had advanced safety features like traffic-aware cruise control and Autosteer for over a decade.
so, buying into safety early...
for other technologies, there's sort of a rugpull effect. The people who got in early enjoyed something with little drama vs the late adopters. Ask people who bought into Sonos early vs late; there are probably more examples of this.
so getting technology the founders envisioned, vs later enshittified versions.
For me though, I'm dabbling in AI because it fascinates me. Bitcoin was like, I don't know, Herbalife? —never interesting to me at all.
But IMO the most fruitful thing for an engineering org to do RIGHT NOW is learn the tools well enough to see where they can be best applied.
Claude Code and its ilk can turn "maybe one day" internal projects into live features after a single hour of work. You really, honestly, and truly are missing out if you're not looking for valuable things like that!
You're right, it's possible. But you might be both overestimating the ease of onboarding and underestimating the variety of tasks and constraints devs are responsible for.
I've seen Claude knock out trivial stuff with a sufficiently good spec. But I've also seen it utterly choke on a bad spec or a hard task. I think these outcomes are pretty broadly established. So is the expectation that the tech will get better. Waiting isn't unwise.
Bikers in the Tour de France used to not wear helmets; helmets were seen as uncouth (“why jump on the bandwagon?”). Helmets today are way better than they were then. But if the utility provided is greater than the cost, of course it makes sense to act sooner.
I’m not explicitly arguing for investing in AI or other newfangled tech; I’m arguing that the premise of waiting may be “sound” but also “leaves money on the table”, or in some cases, lives.
The author talks about vaccines as a counter example but doesn’t really address the cost/benefit in any detail.
Who writes software and doesn't have a list of "I'll fix this one day" issues as long as their arm?
This is honestly one of the things I enjoy most at the moment. There are whole classes of issues where I know the fix is probably pretty simple but I wouldn't have had time to sort it previously. Now I can just point Claude at it and have a PR 5 mins later. It's really nice when you can tell users "just deployed a fix for your thing" rather than "I've made a ticket for your request" while the issue sits on the never-ending backlog pile, where it might get fixed in 5 years' time if you're lucky.
On the other hand, I can see these tools getting good enough that scope creep doesn't even matter.
ATM I usually get stuck around the review/verification stage. As in, my code works, I have tested that it works, but it is failing CI or someone left a PR comment. And for each comment I'll have to make sure it makes sense, make the change, test again, and get CI passing again.
The risk of getting in early on crypto is you lose a little money. The risk of not is missing out on money. You can't simply replay that later, the way that you could invest the time to catch up on how git works.
This is an excellent characterization of the kind of marketing tactic I see all over social media right now and that I find absolutely disgusting.
The keyword here is fear. Despite the faux-positive veneer, the messaging around certain technologies (especially GenAI) is clearly designed to induce anxiety and fear, rather than inspire genuine optimism or pique curiosity. This is significant, because fear is one of the most powerful tools to shut down rational thinking.
The subliminal (although not very subtle) message there is something very primitive. "If you don't join our group, you will soon starve to death." This is radically different from how most transformative technologies were promoted in the past.
With AI people are able to say 'this is nonsense' without people getting the pitchforks out.
As for myself, I don't have the bandwidth to learn how to do clever things with AI. I know you just have to write a prompt and it all happens by magic, but I have been burned quite badly.
First off, my elderly father got tricked out of all of his money and my mother's savings, which were intended for my niece when she comes of age. It was an AI chatbot that did the deed. So no inheritance for me. Cheers, AI, didn't need it anyway!
Then there was the time I wanted to tidy up the fonts list on my Ubuntu computer. I just wanted to remove Urdu, Hebrew, and however many other fonts that have no use for me. So I asked Google and just copied and pasted the Gemini suggestion. Gemini specified command-line options that skipped any review of the changes, even though the text said 'use this so you can review the changes'. I thought the '-y' looked off, but I just wanted to do some drawing and was not really thinking. So I typed in the AI suggestion. It then began to remove all the fonts, and the window manager, and the apps. It might as well have suggested 'sudo rm -fr /'.
This was my wakeup call. I am sure an AI evangelist could blame me for being stupid, which I freely admit to. However, as a clueless idiot, I have been copying and pasting from Stack Overflow for aeons and have never been tricked into destroying all my work.
My compromise is to allow some fun with cat pictures, featuring my uncle's cat, with Google Banana. This allows me to have a toe in the water.
Recently I went on a course with lots of people, few of them great intellects. I was amazed at how popular AI was with people that have no background in coding. They have collectively outsourced their critical thinking to AI.
I did not feel the FOMO. However, I am old enough to remember when Word came out. I was at university at the time and some of my coursemates were using it. I had genuine FOMO then. What was this Word tool? I was intimidated that I had this to learn on top of my studies. In time I did fire up Word, to find that there was nothing of note to learn, apart from 'styles', which few use to this day, preferring to highlight text and make it bold or biglier. I haven't used a word processor in decades; however, it was a useful tool for a long time.
Looking back, I could have skipped learning how to use a word processor, to stick to vi, latex and ghostscript until email became the way. But, for its time, it was the tool. AI is a bit like that, for some disciplines, you can choose to do it the hard way, using your own brain, or use the new tools. However, I have been badly burned, so I am waiting it out.
This isn't Twitter though.
Clearly there's an advantage for being an early adopter, but the advantage is often overblown, and the cost to get it is often underestimated.
- If you'd invested in Bitcoin in 2016, you'd have made a 200x return
- If you'd specialized in neural networks before the transformer paper, you'd be one of the most sought-after specialists right now
- If you'd started making mobile games when the iPhone was released, you could have built the first Candy Crush
Of course, you could just as well have
- become an ActionScript specialist as it was clearly the future of interactive web design
- specialized in Blackberry app development as one of the first mobile computing platforms
- made major investments in NFTs (any time, really...)
Bottom line - if you want to have a chance at outsized returns, but are also willing to accept the risks of dead ends, be early. If you want a smooth, mid-level return, wait it out...
I still think it's stupid, but I'd be a whole lot richer if I went along with it at the time!
Didn't pull the trigger. I just tell myself I'd have sold them when they doubled in price, or they'd have been hacked in one of the Mt. Gox attacks and I'd have lost them anyway.
Today it would be about 120m. Oh well.
There was a local food delivery service at the time that accepted bitcoin. Can you imagine looking back on life and realizing you spent the equivalent of $1M on a burrito?
Admirable self-reflection.
I bought 10ish BTC at some point for almost nothing, sold them for a low 4-digit amount thinking they were stupid anyway. I still think they were stupid but it turns out they could have paid off my house easily. Oh well.
Otherwise you would most likely have sold during one of the huge crashes, attempted trading and lost it all, invested in the new shitcoin or NFT or whatever, or just got hacked along the way.
But even $100 would have been nice given you could still pop them out for free on a standard PC back then with mining software.
My goal in life is not to maximize financial return, it's to maximize my impact on things I care about. I try to stay comfortable enough financially to have the luxury to make the decisions that allow me to keep doing things I care about when the opportunities come along.
Deciding whether something new is the right path for me usually takes a little time to assess where it's headed and what the impacts may be.
In the vast majority of cases, financial returns help maximize your impact on the things you care about. Arguably in most cases it's more effective for you to provide the financing and direction but not be directly involved. That's why the EA guys are off being quants.
The only real exceptions are things that specifically require you personally, like investing time with your family, or developing yourself in some way.
Or in prison for fraud.
I've not found this to be true at all, for a variety of reasons. One of my moral principles is that extreme wealth accumulation by any individual is ultimately harmful to society, even for those who start with altruistic values. Money is power, and power corrupts.
Also, the further from my immediate circle I focus my impact on, the less certainty I have that my impact is achieving what I want it to. I've worked on global projects, and looking back at them those are the projects I'm least certain moved the needle in the direction I wanted them to. Not because they didn't achieve their goals, but because I'm not sure the goals at the outset actually had the long term impact I wanted them to. In fact, it's often due to precisely what we're talking about in this thread: sometimes new things come along and change everything.
The butterfly effect is just as real with altruism as it is with anything else.
If there were a way to be a true Robin Hood and only extract wealth from the wealthy and redistribute that to poor, I'd call that a noble cause, although finance is not my field (nor is crime, for that matter) so it's not for me.
My chosen wealth multiplier is working at a community-owned cooperative, building the wealth for others directly.
I also, candidly, haven't ever seen anyone successfully do that.
The EA guys aren't the final word on ethics or a fulfilling life.
Ursula K. Le Guin wrote that one might, rather than seeking to always better one's life, instead seek to share the burden others are holding.
Making a bunch of money to turn around and spend on mosquito nets might seem to be making the world better, but on the other hand it also normalizes and enshrines the systems of oppression and injustice that created a world where someone can make $300,000 a year typing "that didn't work, try again" into Claude while someone else watches another family member die of malaria because they couldn't afford meds.
Yes there are flaws in the system, but smugly opting out of it and declaring yourself morally superior isn't helpful. Instead you need to actually do the work of understanding the system, its virtues and flaws before you can propose changes that would actually improve things.
The system of imperialism that enables some to starve while others eat is inherently bad and is propped up and legitimized when you act within its framework.
Adding plumbing to your house isn't saying "it's normal that people are dying of thirst." Structuring your impact around donations is, meanwhile, saying "though this system results in people starving while others throw away half their food, we can only solve these problems by working really hard within the rules this system defines, and then lending aid within the rules this system defines." After all, there's only one way to make money enough to be "impactful..."
This is a slightly tangential example, and I don't want to be mistaken as saying they're equivalent: buying and freeing slaves is not a good form of activism when trying to overthrow slavery. It's doing the exact opposite: upholding the institution of slavery with every purchase. Legitimizing it and even, in fact, funding it. You tell yourself you're at least slightly reducing harm, but in reality you're motivating slave catchers to go find more people to enslave - and meanwhile, btw, you're doing nothing to address the fact that slave catchers in your own country are just grabbing the slaves you freed.
The only truly ethical choice for activism against slavery is to break chains and use violence against anyone that prevents you from breaking chains.
Again, not exactly equivalent, just an example of how "helping" can actually prop up the thing you think you're trying to take down.
So, the things that matter the most for most people?
Studies pretty consistently show that happiness caps off at relatively modest wealth.
People don't become quants because they are EAs, they become EAs to justify to themselves why they became quants.
Your first paragraph is just a standard response to utilitarianism, although a poor one because it doesn't consider EV.
Nonetheless I'm not quite sure why merely mentioning EA draws out all these irrelevant replies about it. It was incidental, not an endorsement of EA.
1. 2016 was years after Bitcoin was developed, so you could still make 200x returns without being an early adopter.
2. Is this even true? I'd bet scraping experts or people who can fine-tune LLMs have an easier time finding a job than classical ML academics.
3. Candy Crush was released when the iPhone was on its 5th iteration.
If anything, you just added to OPs points. Being an early adopter gives limited advantage.
I think the question really is about how well you hit your timings. You could have held bitcoin but sold it when it hit $5k or less. You could have a technical advantage in a given field but somehow waste it (dead startup, serious illness) and lose your timing. Nobody knows what the right timings are, but I think the OP is pushing for a more consistent risk investment strategy, setting up the timings to raise the floor significantly at the cost of losing some of the best possible ceilings.
What you aim for if you want to invest early is rather a probability distribution of
- get rich with a small (but nevertheless realistic) probability p
- get something between little, nothing, and losing a little bit with probability 1-p
This is a very different offering than the profit probability distribution that index funds give you.
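That difference can be made concrete with a toy expected-value sketch. Every number below is invented purely for illustration (the win probability, multipliers, and index return are assumptions, not market data):

```python
# Toy comparison of a long-shot "early investment" vs. an index fund.
# All probabilities and multipliers here are made up for illustration.

p = 0.02            # small but realistic-feeling chance the early bet pays off
payoff_win = 100.0  # 100x return per $1 staked if it works out
payoff_lose = 0.5   # otherwise you keep, on average, half your stake

# Expected value per $1 for the early bet: p*win + (1-p)*lose
ev_early_bet = p * payoff_win + (1 - p) * payoff_lose

# Stand-in for a broad index fund: a steady ~7% over the same period
ev_index = 1.07

print(f"early bet EV per $1: {ev_early_bet:.2f}")  # 2.49
print(f"index fund EV per $1: {ev_index:.2f}")     # 1.07

# The EVs can even favor the long shot, but the *shapes* differ completely:
# the early bet is almost always a loss with a rare huge win, while the
# index outcome is tightly clustered around its mean.
```

The point of the comparison is the last comment: two investments with similar (or even favorable) expected values can still be very different offerings once you look at the whole distribution.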
Sure. I’m saying someone pursuing that portfolio will probably end up underperforming an index. Most new early-stage VCs do.
> get something between little, nothing, and losing a little
Broadly speaking, when your investment outcomes don’t differentiate between anything and zero, you’re mostly going to get zero.
This holds if you consider "underperforming" to be a comparison of expected values.
On the other hand, if you consider "probability of getting a really huge payoff" to be the measure by which the investments are compared, the index fund is the one that loses in the comparison.
I wouldn't "invest" in lottery tickets because for these, p is far too small (exception: if I found a loophole in the lottery's system, which has happened for some lotteries). For casinos, there is additionally the very important aspect that the casino will scam you: if you start winning money, for example by having found some clever strategy that gives you an advantage, security will escort you out of the building and ban you from entering the casino again.
So, to give an explanation of the differences:
- Because "the typical run" for such an investment will be a losing one, you should never invest your whole net worth (or a significant fraction thereof) into such an investment. The advice that I personally often give is to use index funds or stock investments to generate the money for investments that are much more risky, but have huge possible payouts.
- You should only make such an "early investment" if you have a significant information advantage over the average person. Such an advantage is plausible, for example, if you are deeply interested in technology topics.
- Lottery tickets have an insanely small p (as defined in my comment). You only do "early investments" into topics where the p is still small, but not absurdly bad. The difference is that for lottery tickets the p is basically well-known. On the other hand, for "early investments", people can only estimate the p. Because of your information advantage from the previous point, you can estimate the p much better than other people, which gains you a strong advantage in picking the right "early investments" to choose.
But be aware that this is a strategy for risk-affine people. If you aren't, you better stay, for example, with index funds.
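One textbook way to quantify the "never invest your whole net worth" advice above is the Kelly fraction for a b-to-1 bet with win probability p. This is a standard sizing rule offered as a sketch, not something the comment itself cites, and the example numbers are invented:

```python
# Kelly fraction: the bankroll share that maximizes long-run log growth
# for a bet paying b-to-1 with win probability p. Standard formula:
#   f* = p - (1 - p) / b
def kelly_fraction(p: float, b: float) -> float:
    """Fraction of bankroll to stake; clamped to 0 when the edge is negative."""
    return max(0.0, p - (1.0 - p) / b)

# Illustrative long shot: 2% win chance, 100-to-1 payout.
f = kelly_fraction(0.02, 100.0)
print(f"stake about {f:.1%} of bankroll")  # roughly 1% -- never the whole net worth

# A negative-edge bet (like a lottery ticket) gets a stake of zero:
print(kelly_fraction(0.0001, 100.0))
```

Even for a bet with a strongly positive expected value, the rule caps the stake at a small fraction of the bankroll, which matches the advice to fund risky bets out of a stable base rather than with your whole net worth.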
If you’re paying a fair price for the risk, sure. Most of the examples you gave seemed to be in deep speculative territory to the point that they don’t very much resemble anything economic.
That’s gambling. You’re truncating the curve below the top. It’s a terrific strategy for middlemen. Its expected value is lower than index investing.
Or you could take what's in the box!!
> - If you'd started making mobile games when the iPhone was released, you could have built the first Candy Crush
I disagree:
Concerning the first point: how neural networks work today is very different from how they worked in former days. So the knowledge of neural networks from the past only very partially transfers to modern neural networks, and clearly does not make you a very sought-after specialist right now.
Concerning your second point: the success of mobile games is very marketing-centric. While it is plausible that being early in mobile games development when the iPhone was released might have opened doors in the game industry for you, I seriously doubt whether having this skill would have made you rich.
If you had worked on anything other than a transformer based architecture post 2016, such as Mamba or RWKV, you would have wasted your time.
Mamba 3 is the third iteration and somehow I doubt that it will catch on.
I was ahead of the game with my intimate expertise in ActionScript and Silverlight! I made 3D engines in browsers well before WebGL was a spec.
It was quite profitable for a few years, then poof. Dead end lol
I think these ideas are similar to long-term relationships. Identify when it's clear it's worth your time, like the author, and commit appropriately, and then when it's time to move on move on.
AngularJS, Backbone, Knockout, YUI, were all a wave of pretty groundbreaking frontend technology. It was absolutely worth experimenting with and committing to once they had some uptake, but probably not before then unless you wanted to work on the teams building them. Time went on, they had years of longevity that overlapped with the next wave of Vue, React, and the rest, and those became worth investing in long-term. Along the way, fundamentals in underlying web technologies were crucial, programming, logic, networking, markup, design.
ActionScript was totally worth investing in, until it ran its course, and then other things came along and you would have adapted your game programming and engine programming skills to a different platform.
Luckily I was also doing frontend work alongside, so when the time came to transition to html+css+javascript, it wasn’t much of a move at all, it was just putting down AS and focusing fully on JS
Except you would've probably sold it at any of the 1.5x, 2x, 4x, or 10x points. That's what people keep missing about this whole "early bitcoin" idea. You couldn't tell it would 2x at 1.5x, you couldn't tell it would 4x at 2x, and so on.
By the time the iPhone was released, it was already too late for small companies. When you developed apps and games for MS PocketPC and Blackberry, you charged $20 per app, and any average-quality product would make money.
In 2007, there were only 2 kinds of success stories:
1. Companies that were able to throw a lot of money around (Your example of Candy Crush).
2. Some rare flukes of someone getting rich with some app.
So what I was trying to say: The golden days were really before the release of the iPhone.
But time sharing was never as global and accessible. It was a group in a company or in a university sharing a specific machine.
The currency discussion is different too. Time sharing was just a local fee. Now people are talking about what the tokens themselves can do for them.
Bitcoin is a good example: if you bought it 15 years ago and held it, you're probably quite wealthy by now. Even if you sold it 5 years ago, you would have made a ton of money. But if you quit your job and started a cryptocurrency company circa 2020, because you thought crypto would eat the entire economic system, you probably wasted a lot of time and opportunities. Too much invested, too much risked.
AI is another one. If you were using AI to create content in the months/years before it really blew up, you had a competitive advantage, and it might have really grown your business/website/etc. But if you're now starting an AI company that helps people generate content about something, you're a bit late. The cat is out of the bag, and people know what AI-speak is. The early-adopter advantage isn't there anymore.
If you sold the farm to get in early in the Metaverse, you're totally hosed now because that was a dead end. The idea of digital real estate was as terrible then as it is now.
> Or even more specifically, maximize your foothold in it while minimizing your downside.
It's easy to say "well of course I would have invested in Google in 1999" but there was nothing in 1999 to say that Google was going to be as big as it was. Why not Lycos or Dogpile or AskJeeves?
How many people dedicated their careers to Flash, only to have it die at the hands of Steve Jobs and HTML5? It's not just about bailing out: lots of folks had to start over because taking advantage of the opportunity means actually investing real time and money. "As a tulip bulb producer, I would have simply stopped producing tulip bulbs when it started to seem questionable." https://en.wikipedia.org/wiki/Tulip_mania
I think the logical thing to do is to invest a minor amount of time/money across a broad spectrum of new promising tech. If you had been aware of and bought $500 of Bitcoin in 2010, you'd be a billionaire today. The early people involved with NFTs also did very well.
The Flash example is specifically the opposite of my point. Flash was a lucrative skill for a period of time, but at a certain point it became very clear that it didn't have a future.
If only Mt Gox didn't vaporize all of my bitcoin in the early 2010's :(
This question is easy to answer in hindsight but it's not trivial to answer in the moment. I like your mindset though
What would you have done when the Bitcoin fork happened 50/50? Would you have gone into ICOs? Which ones? Etc…
There’s simply too many “new things”, so by trying to get exposure to them you’ll be massively in the red.
Let’s say you get into 1000 “new things”, and you strike it lucky and hit BTC. You’d have had to buy BTC in early 2013, held it over the whole period, and sold at the historical maximum just to break even.
If instead of buying 1000 “new things” you’d put your money into the S&P, you’d be at +250% by the same time.
The problem is this leaves you undifferentiated from every hype chaser in Silicon Valley. Our world is littered with folks who went to coding school, traded Bitcoin, did something in the metaverse and blogged about AI. That jack-of-all-trades knowledge can be useful, but only if you’re making unlikely connections, and having the same cutting-edge familiarity as every tech journalist doesn’t make those.
Better: develop deep knowledge and expertise in something. Anything. Not only does this give you some ability to recognize what expertise looks like from afar, it also lets you dip into new topics and have a chance at seeing something everyone else hasn’t already. That, in turn, gives you the ability to be a meaningful first mover.
You are giving too much credit to tech journalists. How many of them truly understand Bitcoin or AI?
Counterpoint, I sold all my Bitcoin in 2011 when Mt Gox got hacked and the price plummeted 80%. Would have done it again after their 2014 hack too if I had any left.
> Bitcoin is a good example: if you bought it 15 years ago and held it, you're probably quite wealthy by now
But you just said to bail the moment its future starts to be questionable. If you follow that, you would never have held it for 15 years.
Just go out and prove how useless it is. If, during your testing, you find that it has no good use case, toss it.
Waiting for others to validate a tech for you is a mistake IMO.
Like investing in index funds, a big part of it is psychology of the individual as Jeeves would say.
Find a way to scratch the FOMO itch without taking on too much risk.
I didn't pick them up until last November and I don't think I missed out on much. Earlier models needed tricks and scaffolding that are no longer needed. All those prompting techniques are pretty obsolete. In these 3-4 months I got up to speed very well, I don't think 2 years of additional experience with dumber AI would have given me much.
For now, I see value in figuring out how to work with the current AI. But next year even this experience may be useless. It's like, by the time you figure out the workarounds, the new model doesn't need those workarounds.
Just as in image generation maybe a year ago you needed five loras and controlnet and negative prompts etc to not have weird hands, today you just no longer get weird hands with the best models.
Long term the only skill we will need is to communicate our wants and requirements succinctly and to provide enough informational context. But over time we have to ask why this role will remain robust. Where do these requirements come from, do they simply form in our heads? Or are they deduced from other information, such that the AI can also deduce it from there?
This is more on the scale of the invention of the press, the telegraph, or the internet itself.
"I'm ok being left behind, I will join this Internet thing when it really becomes useful"...
Ok... you do you. Hope you don't get there too late.
Remember how NFTs were supposed to be the future of art ownership, and all it amounted to was awful pictures of bored apes and ahegao llamas? The NFT bros proudly displayed their shitty art - not because it was good, but because it signaled their allegiance.
Now go on to any pro-AI blog. Look at the images. They've stopped trying to edit out the AI errors - they proudly display images with garbled text and bad anatomy. Just like before, it signals allegiance to AI consumption.
Even the last sentence of your post is the same sentiment as "have fun staying poor" was for the crypto bros.
It’s the same people every time. Stupid, gullible idiots.
Remember when HN was obsessed with the room temp superconductivity fraud a few years back? Remember the zealous indignation at anyone suggesting skepticism? The empty attempts to downplay their rabid stupidity afterwards?
Too late for what? Could no one start a viable internet business in 2005, or were they all taken in 1998? Is it impossible to learn machine learning today, if you weren't jumping into Tensorflow in 2015? Do you think it's impossible to learn OpenClaw today, if you weren't playing with it six months ago, and do you think there might not be a successor that "wins" and is easier to learn and use six months from now, or will I have "gotten there too late" to possibly leverage or learn agents?
I just don't understand what it is you think anyone will be too late for, unless this is just self-justification and snide ego-boosting.
But in AI a single person created OpenClaw.
It's called low-hanging fruits.
Do you think no one can create anything alone ever again? Or can they only do it by adopting the bleeding edge?
> It's called low-hanging fruits.
1 in a million ideas are 1 in a million, and they don't require being a bleeding edge adopter of anything. Do you think no one can create a better version of a first-try service? Is the agentic world now closed because someone built a mediocre version of it?
For a start-up based board, this point-of-view just feels so sad and myopic.
This really hinges on what you mean by "didn't use git".
If you were using bzr or svn, that's one thing.
If you were saving multiple copies of files ("foo.old.didntwork" and the like), then I'd submit that you're making the point for the AI supporters. I consulted with a couple developers at the local university as recently as a couple years ago who were still doing the copy files method and were struggling, when git was right there ready to help.
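For anyone still on the copy-files workflow, the switch is mostly a one-time habit change; a minimal sketch (the project and file names are made up):

```shell
mkdir demo && cd demo && git init -q
git config user.name demo && git config user.email demo@example.com

echo 'print("v1")' > solver.py
git add solver.py
git commit -q -m "working version"    # instead of: cp solver.py solver.py.old.didntwork

echo 'broken experiment' > solver.py  # hack freely...
git restore solver.py                 # ...and throw away a failed attempt instantly

git log --oneline                     # every committed state stays browsable
```

The point isn't branching or remotes; just `add`, `commit`, and `restore` already replace the whole `foo.old.didntwork` dance.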
I'm still stuck with TFS and SVN in my day jobs but use Git on and off on side projects. I really wish all my clients would just switch to Git.
I don't understand how this, at all, makes "the point" for anyone.
Or have bet on Mercurial. Which is also close to dead. Or darcs, which has been big in certain environments and now practically extinct.
I made these kinds of mistakes early in my career: I stuck it out with PHP for far too long, ignoring all the changes in frontend design trends, React, etc. I was using jQuery far too late in my career and it really hurt me during interviews. What I was doing was seen as dated, and it made ageism far worse for me.
Showing a portfolio website that was using tables instead of divs.
I had to rapidly skill up and it takes longer than you think when you stick too long with what works for you.
If AI truly is a nothing-burger, then guess what? Nothing lost, and perhaps you learned some adjacent tech that will help you later. My advice is to NEVER stop learning in this field.
Learning is your true superpower. Without that skill, you are a cog that will be easily replaced. AI has revealed to me who among my colleagues is curious, and a continuous learner. Those virtues have proven over the course of my 25+ year career in technology to be what keeps you relevant and marketable.
It is easy NOW to look back and see the optimal path for a web developer, but was that obvious from the start? How many killer technologies lie unused today?
The only scenario where I think it pays off to be on top of the hype is if you are chasing the money sloshing around the latest hype. You know, the hustle culture thing. If that's not your thing, waiting until things are established (if they ever get there) is harmless.
And yeah, AI as it is now is at best moderately useful. I use it on a daily basis, but could do without it with little harm.
As much as I dislike the idea of not writing/checking code I am responsible for, it was a surprise to me to see a few "anti/limited AI in coding" articles that don't pass an LLM detector. (I know those detectors are not perfect, but there's not much else one can do.)
As an employee (perhaps even a highly stock-option compensated one) the equation is very different. Perhaps you're aligned if you're an employee of a startup/AI obsessed company. But for the vast majority they're not.
Did handmade Swiss watch movements lose all demand when Asia started mass manufacturing watches? No. There is always going to be more demand for quality over slop. It's the same reason that handmade clothes are worth 100x more than clothes at a department store.
This is all by design too: these billionaires selling thinking machines are trying to make us all dependent on their fountain of tokens. Don't fall for it. Just like maps apps made everyone reliant on Google/Apple for the ability to navigate their own city, these billionaires want to do the same thing with your ability to think, build, plan and even learn/read.
Don't fall for this scam. Unlike other hype cycles like NFTs and crypto, this one will damage more than just your bank account: it will fry your brain if you become over-reliant on it.
Take a second and consider why these LLM tool companies design their products like slot machines. They put multipliers in their UIs (run this x3, x4, x5 times) so that you inevitably treat the thing like a slot machine. And it is like a slot machine: you have no way to control the results, it's quite random; LLMs just have a better payout percentage, at the cost of making your brain dopamine- and structurally dependent on their output. They convince people there is some occulted art in the formation of a prompt, like a gambler who thinks pressing the buttons in a certain order will get better results, or any number of other gambling superstitions.
If you're writing software, please take a moment to breathe and ask yourself if it's really that useful to have piles of code where you have little idea how things work, even if they do. Billionaires will sell you on the idea that this doesn't matter because the LLM, which you conveniently have to pay them to use, will always be able to fix that bug.
Don't fall for the ruling class's trick: they want you reliant on this thing so they can tell you that your input isn't as valuable, and therefore your salary and skills are not as valuable. We have to stop this now.
https://mystudentfailedtheirmid.substack.com/p/theres-a-shoc...
it’s readily apparent who has bought into the llm hype and who hasn’t
But the curious early adopters were the ones best positioned to be leading the charge on "cloud migration" when the business finally pulled the trigger.
Similarly with mobile dev. As a Java dev at the time that Android came along, I didn't keep abreast of it - I can always get into it later. Suddenly the job ads were "Android Dev. Must have 3 years experience".
Sometimes, even just from self-interest, it's easier to get in on the ground floor when the surface area of things to learn is smaller than it is to wait too long before checking something out.
> But the curious early adopters were the ones best positioned to be leading the charge on "cloud migration" when the business finally pulled the trigger.
From a technological perspective, these sysadmins were right: in nearly all cases (exception: you have a low average load, but it is essential that the servers can handle huge spikes in the load), buying cloud services is much more expensive overall than using your own servers.
The reason cloud computing took off is that many managers believed much more in the marketing claims of the cloud providers than in the technological expertise of their own sysadmins.
So just read up on it and say you do. They don't really need 3 years experience, so you don't really need to have it.
Anyways, checking happens often enough that the risk of being considered a liar and a fraud for claiming experience you don't have is high.
Of course don't fraud by like pretending you're a statistician when you have absolutely no mathematical background, but also don't take at face value the "Must have {x} years of experience in {y} tech" requirement when you know you have the necessary work experience to have a good grasp on it in a few weekend prototypes, and you also know that the job doesn't actually require deep expertise of that particular tech.
I did the same for my first React.js job, and I didn't feel bad because 1) I was honest about it and did not sell myself as a React expert, and 2) I had 10 years of front-end development behind me and understood web dev well enough not to be baffled by hooks or by the difference between a shallow copy and a deep copy of a data structure, so passing the technical test was good enough.
lol no. There's nothing actually different about managing VMs in EC2 versus managing physical servers in a datacenter. It's all the same skills, and anyone who is competent in one can pick up the other with zero adjustment.
Obviously there are tons of tools and systems building up around LLMs, and I don't intend to minimize that, but at the end of the day, an LLM is more analogous to a tool such as an IDE than a programming language. And I've never seen a job posting that dictated one must have X number of years in Y IDE; if they exist, they're rare, and it's hardly a massive hill to climb.
Sure, there's a continuum with regards to the difficulty of picking up a tool, e.g. learning a new editor is probably easier than learning, say, git. But learning git still has nothing on learning a whole tech stack.
I was very against LLM-assisted programming, but over time my position has softened, and Claude Code has become a regular part of my workflow. I've begun expanding out into the ancillary tools that interact with LLMs, and it's...not at all difficult to pick up. It's nothing like, say, learning iOS development. It's more like learning how to configure Neovim.
In fact, isn't this precisely one of the primary value propositions of LLMs -- that non-technical people can pick up these tools with ease and start doing technical work that they don't understand? If non-technical folks can pick up Claude Code, why would it be even _kind_ of difficult for a developer to?
So, I'm with the post author here: what is there to get left behind _from_?
Not quite on topic, but as an engineering manager responsible for IDE development, I had trouble explaining to recruiters and candidates that I wanted engineers who had developed IDEs, not just used them. That message couldn't get through, so I saw many resumes claiming, say, 5 years of Eclipse experience, only to later determine the candidate knew nothing of the internals of an IDE.
Presumably, people now claim 3 years of machine learning experience but via ChatGPT prompting.
I'll just say, and I understand this is not the point of the article at all, but for all its faults: if you got in on Flash as early as HTML 2.0 and were staring at the upcoming dead end of Flash in, say, 2009, you had also been exposed by then to plenty of JavaScript, E4X, and what were essentially entirely client-side SPAs, giving you a sort of bizarro preview of React a couple of years ahead of time. Honestly, not a bad offramp even if Flash itself didn't make it.
In contrast to the current top comment [1], I don't think this is a wise assessment. I'm already seeing companies in my network stall hiring, and in fact start firing. I think if you're not trying to take advantage of this technology today then there may not be a place for you tomorrow.
I find it hard to empathise with people who can't get value out of AI. It feels like they must be in a completely different bubble to me. I trust their experience, but in my own experience, it has made things possible in a matter of hours that I would never have even bothered to try.
Besides the individual contributor angle, where AI can make you code at Nx the rate of before (where N is say... between 0.5 and 10), I think the ownership class are really starting to see it differently from ICs. I initially thought: "wow, this tool makes me twice as productive, that's great". But that extra value doesn't accrue to individuals, it accrues to business owners. And the business owners I'm observing are thinking: "wow, this tool is a new paradigm making many people twice as productive. How far can we push this?"
The business owners I know who have been successful historically are seeing a 2x improvement and are completely unsatisfied. It's shattered their perspective on what is possible, and they're rebuilding their understanding of business from first principles with the new information. I think this is what the people who emerge as winners tomorrow are doing today. The game has changed.
Speaking as an IC who is both more productive than last year, but simultaneously more worried.
Hopefully not too many people are "enhanced" to the tune of 0.5x!
I think it depends on why you do programming. I like programming for its own sake. I enjoy understanding a complex system, figuring out how to make change to it, how to express that change within the language and existing code structure, how to effectively test it, etc. I actively like doing these things. It's fun and that keeps me motivated.
With AI I just type in an English sentence, wait a few minutes, and it does the thing, and then I stare out the window and think about all the things I could be doing with my life that I enjoy more than what just happened. I find my productivity is way down this year since the AI push at work, because I'm just not motivated to work. This isn't the job I signed up for. It's boring now.
The money's nice, I guess. But the joy is gone. Maybe I should go find more joy in another career, even if it pays less.
Unfortunately that doesn't change my outlook on where all this is headed.
I'm also daydreaming about other careers instead of doing something useful.
To be blunt about it, there's a decent chance I'll be quitting this job later this year, largely because of the AI push. I just hate these tools and I do not want to work this way. Losing an employee is a pretty big cost to the company. I guess the AI stuff is probably worth it to them, but there's a downside to it, too.
I hope everything works out well for you.
> But that extra value doesn't accrue to individuals, it accrues to business owners.
What is value?
Is a 2X faster lumberjack 2X as valuable? Sure
Is a 2X faster programmer 2X as valuable? At what, fixing bugs? Adding features? That's not how the "ownership class" would define value.
Productivity is a measure of efficiency, not growth. Slashing labor costs while maintaining the status quo is still a big productivity gain.
Maybe I didn't express myself properly, but I think we agree, at least on this point?
Besides this effect, of enabling smaller teams to produce the same results, I think there is a larger effect coming where fundamentally different structures produce the same or better results as last year. I just don't think we've completely figured out what that looks like yet.
But I think it's just a matter of when not if.
My current guess at my slow fortune 500 is ~1-2 years before we see real employment impact.
Startups are happening now at least with my anecdotal conversations. Right now the discussion is more just slower growth than actually doing layoffs. That coin will flip at some point.
Why are they not satisfied with the 2x improvement? Could you give an example of the "rebuilding" that you mean?
I was highly skeptical of this happening not that long ago, but I have to say that it seems increasingly likely. LLMs are still quite mediocre at esoteric stuff, but most software development work isn't esoteric. There's the viable argument that software development largely isn't about writing code, but the ability to write code is what justifies software developer salaries, because there's a large barrier to entry there that most just can't overcome. The 80/20 law seems to apply to everything, certainly here - 80% of your salary is justified from 20% of what you spend your time doing.
It's quite impossible to imagine what this will do to the overall market, because while this sounds highly negative for software developers, we're also talking about a future where going independent will be way easier than ever before, because one of the main barriers for fully independent development is gaps in your skillset. Those gaps may not be especially difficult, but they're just outside your domain. And LLMs do a terrific job of passably filling them in.
It'd be interesting if the entire domain of internet and software tech plummets in overall value due to excessive and trivialized competition. That'd probably be a highly disruptive but ultimately positive direction for society.
In general, we as a society have not adjusted to technology. We've gone through too much change to have any stable baselines. So we're going to float in insanity for a while until things finally settle down. Probably 2 wars, a famine, and several periods of resource scarcity away still, but we'll get there one day...
Not going to lie, I’d rather be poor. Not destitute - I’ve been poor but not destitute and I’d rather not go desperate - but poor? As in (because “poor” is very imprecise and can imply anything between utter poverty to “not owning three homes”) like having a low paying job but still enough to pay rent?
I’d rather be that than do AI assisted software development. Genuinely the only thing stopping me now is that there’s actually way more skill and qualifications in most low-paying jobs than a typical software developer imagines, and acquiring those takes time and money itself. But by now I know multiple people who made the jump even before the latest madness, and they’re all happier. Some still code, but don’t even publish. Some are like “I haven’t used a proper computer in _months_ this is great.” All work hard jobs at odd hours. None regret.
Employment?
Wonderful life lesson on hype cycles. I am curious if hype literacy will join media literacy in academia.
This line, as one example:
> For every HTML 2.0 you might have tried, you were just as likely to have got stuck in the dead-end of Flash.
Like a lot of tech Flash had its moment in the sun and then faded away, but that “moment” lasted a decade, and plenty of people got their start because of or built successful businesses around it. Did they have to pivot as Flash waned? Sure, but change is part of life.
I’m sorry but I find the take expressed in this piece to be absolutely miserable and uninspiring.
But, hey, congratulations on the 20:20 hindsight, I suppose.
I mean... yeah? It's obviously true. However people use LLM coding today not because they're "afraid of being left behind" or "investing into a new tech" or whatever abstract reasoning. It's because they're already reaping the benefit right away. It takes just a few hours to go through like 80% of the learning curve.
This is a great framing.
To keep and/or increase my current compensation, I have to be competitive in the software development market.
(Whether I need AI to remain competitive is another matter.)
The 16,000 new babies will be competing in different markets.
Oh, and of those 16,000 babies, many are born in far less fortunate circumstances, they're already far behind their cohort. :/
Author’s point is that competitiveness can come in many forms. Having the same AI proficiency as everyone else isn’t differentiating. (And it isn’t table stakes.)
I think the framing just doesn't help at all.
It is said that major providers more than break even on what they're charging.
But at the same time that's not the point of capitalism, is it? The point is to charge close to the value you're providing.
My lunch money is approximately $10 and I often blow through as much in Claude tokens generously provided by the company which hired me. But I'm not getting $10 value from those tokens, but much more.
The cost of entry to this market is extremely high. Should Anthropic win and become an almost monopoly, it is bound to keep increasing prices to the point, where the value it's providing matches the cost.
That's the endgame of every AI company out there. It's worth using these tools now, while there's still competition and moats weren't established.
If they decide to increase pricing in lockstep just like FAANG applied RTO policies, we as consumers are going to get the short end of the stick.
The question is how large the value delta will be from open models. And I’m not sure if the cost of entry is really “extremely” high comparatively if as you project the market will be so profitable. Surely investors will see a chance to get a larger piece of that profit. While model training is costly, there is a ceiling imposed by the training material being limited (at least for text). LLMs also don’t have a network-effect moat like social media has, or a web search moat like Google has, or a chip technology moat like TSMC has. It’s unclear if a significant moat will emerge.
I am actually surprised by people willingly trying to be more productive, like... machines. And then crying when machines are proven to be better at being machines than meatbags.
I'm glad I jumped early on: Linux, Python, virtualization, cloud, nodejs, Solana.
I wish I'd gotten into Rust and LLMs earlier.
A practitioner with more experience may be a few percentage points more productive, but the median - grab a subscription, get the tool, prompt - will be mostly good enough.
LLMs, at the moment, are all about giving up your own brain and becoming fully dependent on a subscription-based online service.
It is a skill, but not a special AI specific skill.
> It is 100% OK to wait and see if something is actually useful.
> I took part in a vaccine trial
> Getting Jabbed With EXPERIMENTAL SCIENCE!
This is such a weird article. The author presents so many anecdotal experiences that contradict the author's own conclusion.
At any moment, you are failing at thousands of things that you may not even know about, and that is the gist of what I took away from it. The thing is that you have to be OK when you intentionally choose to not invest in something as regret is ultimately a poison.
The other thing is this: you are not obligated to bring people with you and you have a choice of free association.
But on the other hand... I also only learned git when I needed it at a new job... So we can pump the brakes a bit.
When I eventually got around to using Rust, I was hooked, and now I don't use C++ anymore if I can choose Rust instead. The hype was not completely unjustified, but it was also misplaced, and to this day I disagree with most of those hype projects.
It was no issue to silently pick up Rust, write some code that solves problems, and enjoy it as a very very good language. I don't feel a need to personally contact C or C++ project maintainers and curse at them for not using Rust.
I do the same with AI. I'm not going around screaming at people who dare to write code by hand, going "Claude will replace you", or "I could vibe code this for 10 bucks". I silently write my code, I use AI where I find it brings value, and that's it.
Recognize these tools for what they are: Just tools. They have use-cases, tradeoffs, and a massive community of incompetent idiots who like it ONLY because they don't know better, not because they understand the actual value. And then there's the normal, every day engineers, who use tools because, and ONLY because, they solve a problem.
My advice: Don't be an idiot. It's not the solution for all problems. It can be good without being the solution to a problem. It can be useful without replacing skill. It can add value without replacing you. You don't have to pick a side.
No, they are not.
As with any other skill, if you can't do something, it can be frustrating to peers. I don't want colleagues wasting time doing things that are automatable.
I'm not suggesting anyone should be cranking out 10k LOC in a week with these tools, but if you haven't yet done things like sent one in an agentic loop to produce a minimal reprex of a bug, or pin down a performance regression by testing code on different branches, then you could potentially be hampering the productivity of the team. These are examples of things where I now have a higher expectation of precision because it's so much easier to do more thorough analysis automatically.
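For the performance-regression case, `git bisect run` is the underlying trick worth knowing, with or without an agent driving it. A sketch; the tag name and the `check_perf.sh` script are illustrative, not from any real project:

```shell
git bisect start
git bisect bad HEAD            # current tip is slow
git bisect good v1.4.0         # this tag was still fast

# check_perf.sh (illustrative) must exit non-zero when a commit is "bad", e.g.:
#   ./bench --quiet | awk '{ exit ($1 > 250) }'   # >250ms means bad
git bisect run ./check_perf.sh # git binary-searches and prints the first bad commit

git bisect reset               # return to where you started
```

Because the search is binary, even a history of thousands of commits only needs a dozen or so benchmark runs.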
There's always caveats, but I think the point stands that people generally like working with other people who are working as productively as possible.
It also shows a passion for learning and improvement, something hiring managers are often looking for signals of.
But of course it's a trade off. This rewards people who don't have family or other obligations, who have time to learn all the new fads so they can be early on the winners.
Writing the actual code that's efficient is iffy at times and you better know the language well or you'll get yourself in trouble. I've watched AI make my code more complex and harder to read. I've seen it put an import in a loop. It's removed the walrus operator because it doesn't seem to understand it. It's used older libraries or built-ins that are no longer supported. It's still fun and does save me some time with certain things but I don't want to vibe code much because it removes the joy out of what you're doing.
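A hypothetical illustration of the two rewrites described above, in the direction they should go: the import belongs at module level, not inside the loop, and the walrus operator binds and tests in one expression rather than two statements (the regex and data are made up):

```python
import re  # imported once at module level - not re-imported inside the loop

lines = ["x=1", "skip", "y=2"]

pairs = []
for line in lines:
    # walrus: bind the match object and test it in a single expression,
    # instead of a separate re.match() call followed by an if on the result
    if (m := re.match(r"(\w+)=(\d+)", line)):
        pairs.append((m.group(1), int(m.group(2))))

print(pairs)  # [('x', 1), ('y', 2)]
```

The expanded two-statement form behaves identically, which is presumably why a model "simplifies" to it; it's just noisier, and moving the import into the loop adds a redundant (if cached) import on every iteration.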
Chasing every new tech will lead to burnout and disillusionment at some point.
AI probably isn't going away in the same way NFTs largely did, and I use it to some degree. However, I don't see a lot of value of being on the bleeding edge of AI, as the shape it takes for those skills that will be used for the next 10 years are still forming. Trying to keep up now means constantly adapting how I work, where more time is spent keeping up on the changes in AI than actually doing something useful with it.
After the bubble pops, I think we'll start to see a much more clear picture of what the landscape of AI will look like long-term. Who are the winners, who are the losers, and what tools rise to the top after the hype is gone. I'll go deeper at that time.
Right now, the only thing I'm allowed to use at work is Copilot, so I just use that and don't bother messing around with much more in my free time.
> Few are useful to me as they are now.
Except current AI tools are extremely useful, and I think you're missing something if you don't see that. This is one of the main differences between LLMs and cryptocurrency; cryptocurrencies were the "next big thing", always promising more utility down the road. Whereas LLMs are already extremely useful; I'm using them to prototype software faster, Terence Tao is using them to formalize proofs faster, my mom's using them to do administrative work faster.
I know, I know. I'm prompting it wrong. I'm using the wrong model. I need to pull the slot-machine arm just one more time.
I know I'm not as clever as Terence Tao - so I'll wait until the machines are useful to someone like me.
That does not obviously follow. I do worry about the ever-increasing proportion of humanity who are no longer 'economically viable', and this includes people who are not yet born.
When AI becomes easy to pick up and guide, guess what: there will be no need for a programmer to pick it up. AI will be using itself, a Claude manager driving Claude programmers.
So leverage AI while you still can provide value doing so.
It's literally a "use it or lose it" situation.
But AI is a beast.
It's A LOT to learn: RAG, LLMs, architecture, tooling, ecosystem, frameworks, approaches, terms, etc., and this will not go away.
It was already clear with GPT-3, and it's clear today, that this is the next thing, and in comparison to other 'next things' it's arriving in the perfect environment: the internet allows for fast communication, and manufacturing has never been as fast, flexible, and globally scaled as it is today.
Which means whatever the internet killed and changed will happen / is happening a lot faster with AI.
And tbh, if someone gets fired in the AI future, it will always be the person who knows less about AI and less about how to leverage it than the other person.
For me personally, I just enjoy the whole new frontier of approaches, technologies, and progress.
But I would recommend EVERYONE to spend time with this technology regularly. Play around. You don't need to rely on it, but otherwise you won't gain any gut feeling for how models compare, and it will be A LOT to catch up on when it crosses the line for whatever you do.
AI/LLMs don't get worse at things they're already good at (or very seldom do).
So I see it more as an evolving thing, not a wicked thing.
The newest Claude release made context handling better; to make use of that information I already need to know Claude, what a context is, and what the problem with context length is.
I've been following AI for a few years now and definitely don't have the feeling it's wicked.
Of course those that believe that AI will convert into AGI and destroy society as we know it won't be convinced.
I remember when React was the hotness and I was still using jQuery. I didn't learn it immediately; maybe a couple of years later is when I finally started to use React. I believe this delayed my chance of getting a job, especially around that time when hiring was good, e.g. 2016 or so.
With vibe-coding it just sucks the joy out of it. I can't feel happy if I can just say "make this" and it comes out. I enjoy the process... which yeah, you can say it's "dumb/a waste of time" to bother with typing out code with your hands. For me it isn't just about "here's the running code": I like architecting it, deciding how it goes together, which yeah, you can do with prompts.
Idk I'm fortunate right now using tools like Cursor/Windsurf/Copilot is not mandatory. I think in the long run though I will get out of working in software professionally for a company.
I do use AI though, every time I search something and read Google's AI summary, though you could argue it would be faster to just use a built-in tool that types for you vs. copy-pasting.
Which again... what is there to be proud of if you can just ask this magic box to produce something and claim it as your own. "I made this".
Even design can be done with AI too (mechanical/3D design) then you put it into a 3D printer, where is the passion/personality...
Anyway yeah, my own thoughts, I'm a luddite or whatever
I do agree that the notion of difficulty needs to be recalibrated though, seemingly impressive RE tasks can now be done trivially with LLMs.
Is using an LLM the same as writing JavaScript over Assembler? idk
I guess it's the same argument of doing math yourself vs. using a calculator, gets the job done
But yeah it goes back to my perspective of why be a carpenter/furniture maker when a 3D printer can just spit one out
Why milk the cow when you can just buy the milk
From Thomas Kuhn's Structure of Scientific Revolutions:
"a new scientific truth does not triumph by convincing its opponents... but rather because its opponents eventually die, and a new generation grows up that is familiar with it"
These companies are paying for the privilege of having their IP stolen.
I am dying inside when I make a comment and receive a response that has clearly been generated by prompting an LLM with my comment, possibly filtered into the voice of the responder if not copied and pasted directly. Particularly when it's wrong. And it often is wrong, because the human using them doesn't know how to ask the right questions.
Fortunately, most of the fundamental technological infrastructure is well in place at this point (networking, operating systems, ...). Low-skilled engineers vibe coding features for some fundamentally pointless SaaS is OK with me.
That said, my only regret with Bitcoin was deleting my early wallets when I realized the coins were only worth $.25 ... if I'd had any inkling what they'd be worth someday, I'd probably have just bought $1000 worth back then and zipped it up until closer to today. I'm truly curious how many bitcoins were similarly deleted from existence.
(I'm not the earliest adopter of crypto and AI by any means. I only rode up crypto a couple of times for 2X and 3X kinda gains on my investment, and I only started using Claude last year.)
All I know is, I've always enjoyed building things. And I enjoy building things with AI-assisted tools too, so I'll continue doing it.