I read on Reddit about a podcast where Karpathy described going from writing 80% of his own code to 0%, and being in a constant state of “AI psychosis” because the possibilities feel infinite.

I’ve personally found that my workflow has become very “opportunistic”—I feel like I can do anything with AI, so I try everything. That might be good…or bad. I’d be curious to see what HN has to say, or whether anyone else has experienced something similar.

Here’s the Reddit post for context: https://www.reddit.com/r/ClaudeAI/comments/1s08r1c/karpathy_says_he_hasnt_written_a_line_of_code/

Anyone else feeling this way? If not psychosis (which may be an exaggeration), then at least more stressed, frazzled, whatever.

  • sameergh 20 hours ago |
    Yes, a bit. The hard part now is not coding; it's deciding what is actually worth building.
    • jawerty 18 hours ago |
      Agreed. However, I'm worried the productivity hack AI gives us might affect the "what" negatively.
  • salawat 19 hours ago |
    Here's the thing. You're kind of dopamine hacking yourself. Using current LLMs is akin to playing a slot machine, one specifically tuned to prey on knowledge workers.

    The fact is, while it can talk a good game, and has been RLHF'd to high heaven to validate you all the bloody time to keep you engaged and burning tokens, your brain is simply tuned to reward any semblance of progress, and getting a little bit more out of the LLM is in the same damn family of hit you get off coding. The dangerous bit, though, is its inherently probabilistic nature: the same inputs can give a different result on a different crank of the machine.

    Just remember to get out from in front of the screen, and try to experience the worldly implementations of the systems you think you're building. Without that real-world experience, no one's going to trust a bloody thing you do. You are a world model. It's a language model. It may know how to talk the lingo; you know, or can reckon out, how to do the thing.

    Try running yourself a local model on a sufficiently beefy laptop. The lack of instant feedback tends to help soften the feedback loop, and gives you a less "ecstasy" coded position from which to actually objectively evaluate the efficacy of the thing at converting raw electricity -> thing. You'll find the added friction from the additional constraints (no outsourcing to a datacenter funded by someone else's money), suddenly changes the character of the thing.

    • Our_Benefactors 14 hours ago |
      > Try running yourself a local model on a sufficiently beefy laptop

      I don't understand why you think the solution to using a well tuned and intelligent model is to use one that is a dumbass

  • Grimblewald 17 hours ago |
    Not really. Basic stuff, sure, AI nails. But much of what's interesting or useful, AI is bad at. Try getting it to replicate the performance of Microsoft Research's Image Composite Editor. The research is in the public domain (Brown et al.'s paper on panoramic stitching via RANSAC/SIFT, gain compensation, etc.), and yet AI struggles with it. A task that takes ICE less than a minute can take the AI version 30+ minutes and produce worse results. After loads of hand-holding you can get it close to ICE's performance, but never quite there. Every new model that comes out, that's one of my personal tests among a battery of others. Ironically, LLMs seem to be getting worse at many of these tasks, not better. Need some web app with a database and a sleek-looking UI? AI has your back (kind of; still sloppy or dangerously unsafe half the time). Need some simple get-data-plot-data thing? AI can do that.

    However, it tends to fail at the actually interesting, useful things, even when they are small, reasonably sized projects.

    Worse still, AI models seem to be optimizing for how many tokens they can make you burn before you give up, rather than minimizing the turns required to reach a finished product. I say that because each new model that comes out needs more turns of coaxing and prodding to get to a functional state.
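    (For readers unfamiliar with the Brown et al. pipeline the comment references: its core trick is RANSAC, which rejects bad feature matches by repeatedly fitting a model to random minimal samples and keeping whichever model the most points agree with. A minimal pure-Python sketch of the idea, fitting a line to synthetic points with outliers — hypothetical data, not the stitching homography itself:)

```python
import random

def ransac_line(points, iters=200, thresh=0.1, seed=0):
    """Robust line fit: sample 2 points, fit y = m*x + b,
    count inliers, keep the model with the most inliers."""
    rng = random.Random(seed)
    best_model, best_inliers = None, 0
    for _ in range(iters):
        (x1, y1), (x2, y2) = rng.sample(points, 2)
        if x1 == x2:  # vertical pair; can't express as y = m*x + b
            continue
        m = (y2 - y1) / (x2 - x1)
        b = y1 - m * x1
        # Inliers: points within `thresh` vertical distance of the candidate line.
        n = sum(1 for x, y in points if abs(y - (m * x + b)) < thresh)
        if n > best_inliers:
            best_model, best_inliers = (m, b), n
    return best_model, best_inliers

# Synthetic data: 20 points on y = 2x + 1, plus 3 gross outliers.
pts = [(i / 19, 2 * (i / 19) + 1) for i in range(20)]
pts += [(0.5, 10.0), (0.2, -7.0), (0.8, 15.0)]
(m, b), n = ransac_line(pts)
print(m, b, n)  # slope ~2, intercept ~1, 20 inliers; the outliers are rejected
```

    (The real stitcher does the same thing with a homography fitted to SIFT matches instead of a line fitted to points.)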

  • al_borland 17 hours ago |
    If your assistant is causing you to work more hours and sleep only 4 hours per night, is it really a good assistant? I think I'd be firing an assistant that did that to my life.

    I'm a big believer in not doing something just because I can. Could AI build me a personal suite of apps to manage my life in the exact way I want... maybe? Should I spend my time doing that, even if AI is writing 100% of the code? Probably not. Will it be better enough to justify the investment? No. When it breaks or has bugs, who has to deal with it? Me. What about the infrastructure? Another thing to do.

    You can say AI is writing all the code, but if someone has to be there to babysit and guide it the whole way, it's still work. Less engaging and rewarding work. I mostly find vibe coding to be boring and frustrating, unless it can one-shot it, which it can only do for small stuff.

    I use AI, but I use it in the same way I would use a search engine or a hammer. It's a tool to help do what I was already doing. Sure, it grows my capacity to some degree, but pushing that too far ends up being problematic, as I lose my ability to properly oversee it.

    • jawerty 16 hours ago |
      Yeah, there could also be an issue with learning how to hand-hold an AI vs. learning how to actually engineer good solutions. Maybe one feeds into the other, since we're not getting off the AI train...
    • netsharc 16 hours ago |
      If my brain got a 100x overclock (both on speed and endurance), I'd be excited to use it all the time too.

      The issue is obviously AI isn't that, it's a simulation of that that often fails...

    • fragmede 15 hours ago |
      > When it breaks or has bugs, who has to deal with me? Me. What about the infrastructure? Another thing to do.

      Wait, why are you doing it? If you're already that far, have the AI agent do that for you as well.

      • lazide 15 hours ago |
        Because of course the thing that persistently fails to make it work will somehow fix it?
      • al_borland 14 hours ago |
        Who is having the AI do it? It would be me, correct? I have to tell the AI to fix the bugs and make sure they are fixed. I have to tell the AI to build the infrastructure and hope it works. Then I have to pay for it and hope it didn't do something stupid that will cost me a small fortune.

        The point is, none of this stuff just happens. I would have to be involved in all of it, and the more it does, the more I need to do from a guidance and oversight perspective.

        Is this what I want my life to be? It sounds absolutely awful.

  • journal 17 hours ago |
    I too get stressed out when I'm in over my head.
  • rsrsrs86 16 hours ago |
    Yeah, the guy is going nuts. His whole job-analysis thing is ridiculous. To me, it shows the guy has no critical sense, because that kind of paper would ruin the career of any economist.
    • iainctduncan 15 hours ago |
      To me he's just another example of a very smart programmer being really bad at seeing big pictures, imagining from other perspectives, and generally having people/systems/economics/philosophy wisdom.

      Unfortunately it seems endemic in our field. So many great coders have this laughably naive belief that, because they are good at something that makes them feel like a genius when they solve problems, they are in fact geniuses at solving all problems.

      Even more unfortunately, AI seems to ramp that up to 10x along with the code generation.

      I'm willing to bet the public perception of programmers in general is going to be a lot worse five years from now...

      • vunderba 29 minutes ago |
        There's a tangentially related concept to this where Nobel Prize winners have held, shall we say, rather questionable beliefs in areas outside their domain of expertise.

        https://en.wikipedia.org/wiki/Nobel_disease

  • fathermarz 16 hours ago |
    I have learned to temper it, but it is very challenging. The part I actually struggle with is creating so much code that I can't keep up with the high-level "what does this do": the user and data flow. I go through the plan, refine it, and spend all of my cognitive tokens on that part, so that by the time I revisit the feature, I have personally lost the context of the "why".

    This has forced me to not do anything extra or while I’m in there. Just focus on 1-3 critical features at a time.

    I must remember to go back and clear out dead code, tighten up the repo, and make sure all the new services follow the standards of the rest of the codebase.

  • sergiotapia 11 hours ago |
    I had a brief period of that, about a month, and quickly realized the limitations that come with coding via AI agents specifically. I still use AI agents now, but step by step, with me steering. Otherwise it turns into slop, no matter how sophisticated the guardrails and skill files.
  • taylorius 7 hours ago |
    100% yes. As you say, psychosis is an exaggeration, but "frazzled" captures it. And there's definitely a feeling that "this is such an opportunity, but the window will close — got to make the most of it somehow". Very stressful in a diffuse kind of way.
  • muzani 5 hours ago |
    I stick to the basic subscriptions. Whenever I get into psychosis, it yanks me back. I'm forced to read a book, go to the gym, talk to people, or actually have a think about what I'm trying to do. In emergencies, I may buy add on credits or subscribe to a secondary service, but it keeps me leashed.
  • andrewvector 3 hours ago |
    Absolutely! It feels to me almost like the toxic-productivity or longevity craze, where because you can do or measure something, you do... and over time your threshold and hyperfocus on so many things and threads cause anxiety, burnout, etc. I can't pin down exactly why, but I find myself having the same feelings about software-devving my own project now that I did when I was into tracking my HRV, sleep, etc. After a while those things can become a bit compulsive!