The Agentic AI Handbook: Production-Ready Patterns
125 points by SouravInsights 3 hours ago | 35 comments
  • verdverm 3 hours ago |
    looks to be a good resource with lots of links

    thanks for the share!

  • N_Lens 2 hours ago |
    I sometimes feel like the cognitive cost of agentic coding is so much higher than that of a skilled human. There is so much more bootstrapping and hand-holding involved in making sure agents don't go off the rails (they will), or that they adhere to their goals (they won't). And in my experience, fixing issues downstream takes more effort than solving the issue at the root.

    The pipe dream of agents handling GitHub Issue -> Pull Request -> Resolve Issue becomes a nightmare of fixing downstream regressions or other chaos unleashed by agents given too much privilege. I think people optimistic about agents are either naive or hype merchants grifting/shilling.

    I can understand the grinning panic of the hype merchants because we've collectively shovelled so much capital into AI with very little to show for it so far. Not to say that AI is useless, far from it, but there's far more over-optimism than realistic assessment of the actual accuracy and capabilities.

    • aaronrobinson 2 hours ago |
      It can definitely feel like that right now but I think a big part of that is us learning to harness it. That’s why resources like this are so valuable. There’s always going to be pain at the start.
    • nulone 2 hours ago |
      Cognitive overhead is real. Spent the first few weeks fixing agent mess more than actually shipping. One thing that helped: force the agent to explain confidence before anything irreversible. Deleting a file? Tell me why you're sure. Pushing code? Show me the reasoning. Just a speedbump but it catches a lot. Still don't buy the full issue→PR dream though. Too many failure modes.
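      The "explain confidence before anything irreversible" speedbump can be sketched as a thin gate around tool calls. This is a minimal illustration with made-up names (`IRREVERSIBLE`, `gated_call`), not any real agent framework's API:

```python
# Hypothetical speedbump: irreversible tool calls must carry a rationale
# and get explicit human sign-off before they run.

IRREVERSIBLE = {"delete_file", "git_push", "drop_table"}

def gated_call(tool_name, args, rationale=None, confirm=input):
    """Run a tool call, but stop and ask when the action can't be undone."""
    if tool_name in IRREVERSIBLE:
        if not rationale:
            raise ValueError(f"{tool_name} needs a rationale before it runs")
        answer = confirm(f"{tool_name}({args}) because: {rationale!r} -- ok? [y/N] ")
        if answer.strip().lower() != "y":
            return ("skipped", tool_name)
    # Reversible calls (and approved irreversible ones) proceed normally;
    # actual execution is elided in this sketch.
    return ("executed", tool_name)
```

      Reversible reads pass straight through; only the destructive calls pay the confirmation cost, which is what keeps the speedbump cheap.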
    • cik 37 minutes ago |
      I'm a huge naysayer.. I'm also very much coming around. My workflow may be very horrid to some.

      1. I create multiple git workspaces.
      2. I connect Antigravity in multi mode, one to each.
      3. I have prompts that I craft on a per-workspace basis.
      4. I ask questions and give it bite-sized tasks.

      Now, to be clear, I'm not seeing the wheel reinvented. But I'm now churning out the tedious work at scale, so that I can focus on the heart of the problem. Frankly, I think this speaks to how much boilerplate is frequently required.
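      Step 1 of the workflow above maps directly onto git's worktree feature. A rough sketch of scripting it (the `agent/` branch naming and directory layout are illustrative choices, not part of any tool):

```python
import subprocess

def make_worktrees(repo_dir, tasks):
    """Create one git worktree per task so parallel agents don't step on
    each other's checkouts. Each worktree gets its own branch."""
    paths = []
    for task in tasks:
        path = f"{repo_dir}-{task}"
        # `git worktree add -b <branch> <path>` creates a new branch and
        # checks it out into a separate working directory.
        subprocess.run(
            ["git", "-C", repo_dir, "worktree", "add", "-b", f"agent/{task}", path],
            check=True,
            capture_output=True,
        )
        paths.append(path)
    return paths
```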

  • MrOrelliOReilly 2 hours ago |
    This is a great consolidation of various techniques and patterns for agentic coding. It’s valuable just to standardize our vocabulary in this new world of AI led or assisted programming. I’ve seen a lot of developers all converging toward similar patterns. Having clear terms and definitions for various strategies can help a lot in articulating the best way to solve a given problem. Not so different from approaching a problem and saying “hey, I think we’d benefit from TDD here.”
    • Kerrick an hour ago |
      I recognized the need for this recently and started by documenting one [1]... then I dropped the ball because I, too, spent my winter holiday engrossed in agentic development. (Instead of documenting patterns.) I'm glad somebody kept writing!

      [1]: https://kerrick.blog/articles/2025/use-ai-to-stand-in-for-a-...

  • Bukhmanizer 2 hours ago |
    I’d rather just read the prompt that this article was generated from.
    • straydusk an hour ago |
      I finally found the perfect way to describe what I feel when I read stuff like this.
      • ares623 an hour ago |
        Tempted to copy the content and launder it through another LLM and post a comment linking to my own version
      • iwrrtp69 an hour ago |
        I Would Rather Read The Prompt (IWRRTP)
        • sebastiennight 18 minutes ago |
          I hereby second the motion to get this acronym widely adopted
      • aj_g an hour ago |
        I remember some proto-memes about translating some text between English and Chinese 100 times, with hilarious results... the modern parallel would be to ask an LLM to read the article and generate the prompt that constructed it. Then generate an article based on that prompt. Repeat x100.
    • aitchnyu 13 minutes ago |
      Donate me the tokens, don't donate me slop PRs - open source maintainer
  • wiseowise 2 hours ago |
    So it begins, Design Patterns and Agile/Scrum snake oil of modern times.
    • bandrami 2 hours ago |
      No no. We promise this solution has a totally different name.
    • 63stack an hour ago |
      No dude, you just don't get it, if you shout at the ai that YOU HAVE SUPERPOWERS GO READ YOUR SUPERPOWERS AT ..., then give it skills to write new skills, and then sprinkle anti grader reward hacking grader design.md with a bit of proactive agent state externalization (UPDATED), and then emotionally abuse it in the prompt, it's going to replace programmers and cure cancer yesterday.
      • wiseowise an hour ago |
        Curing cancer is H2 2030, once my options have vested. :cool-eyeglasses-emoji:
  • laborcontract an hour ago |
    If you're remotely interested in this type of stuff, then scan the papers on arXiv[0] and you'll start to see patterns emerge. This article is awful from a readability standpoint and from a "does this author give me the impression they know what they're talking about" standpoint.

    But scrap that, it's better just thinking about agent patterns from scratch. It's a green field and, unless you consider yourself profoundly uncreative, the process of thinking through agent coordination is going to yield much greater benefit than eating ideas about patterns through a tube.

    0: https://arxiv.org/search/?query=agent+architecture&searchtyp...

  • Bishonen88 an hour ago |
    AI written article about AI usage, building things with AI that others will use to build their own AI with. The future is now indeed.
    • jbstack 12 minutes ago |
      I feel like HN should have a policy of discouraging comments which accuse articles and other comments of being written by AI. We all know this happens, we all know it's a possibility, and often such comments may even be correct. But seeing this type of comment dozens of times a day on all sorts of different content is tedious. It almost feels like nobody can write anything anymore without someone immediately jumping up and saying "You used AI to write that!".
      • simianparrot 9 minutes ago |
        No. Public shaming for sharing AI written slop is what we need more of.
  • alkonaut an hour ago |
    All of this might as well be Greek to me. I use ChatGPT and copy-paste code snippets. That was bleeding edge a year or two ago, and now it feels like banging rocks together when reading these types of articles. I never had any luck integrating agents, MCP, using tools, etc.

    Like, if I'm not ready to jump on some AI-spiced-up special IDE, am I then going to just be left banging rocks together? It feels like some of these AI agent companies just decided "OK, we can't adopt this into the old IDEs, so we'll build a new special IDE"? Or did I just use the wrong tools? (I use Rider and VS, and I have only tried Copilot so far, but the "agent mode" of Copilot in those IDEs feels basically useless.)

    • hahahahhaah 43 minutes ago |
      I feel like: just use Claude Code. That is it. Use it and you get the feel for it. Everyone is overcomplicating this.

      It is like learning to code itself. You need flight hours.

      • cobolexpert 6 minutes ago |
        This is something that continues to surprise me. LLMs are extremely flexible and already come prepackaged with a lot of "knowledge"; you don't need to dump hundreds of lines of text to explain to them what good software development practices are. I suspect these frameworks/patterns just fill up the context with unnecessary junk.
    • dude250711 37 minutes ago |
      The idea is to produce such articles, not read them. Do not even read them as the agent is spitting them out - simply feed straight into another agent to verify.
    • embedding-shape 37 minutes ago |
      > I never had any luck integrating agents

      What exactly do you mean with "integrating agents" and what did you try?

      The simplest (and what I do) is not "integrating them" anywhere, but just replace the "copy-paste code + write prompt + copy output to code" with "write prompt > agent reads code > agent changes code > I review and accept/reject". Not really "integration" as much as just a workflow change.
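      That workflow is just one review round in a loop. A minimal sketch, where `ask_model` and the other callables are placeholders for whatever LLM API and editor integration you actually use:

```python
# Sketch of the "write prompt > agent reads code > agent changes code >
# I review and accept/reject" workflow. All collaborators are injected,
# so the loop itself stays tool-agnostic.

def review_loop(prompt, read_code, apply_patch, ask_model, approve):
    """One round: the agent reads the code, proposes a change, and the
    change only lands if the human approves it."""
    context = read_code()                        # agent reads code
    patch = ask_model(f"{prompt}\n\nCurrent code:\n{context}")  # agent proposes change
    if approve(patch):                           # human reviews
        apply_patch(patch)                       # accepted: change lands
        return "accepted"
    return "rejected"                            # rejected: nothing applied
```

      The point of the sketch is that nothing is "integrated" anywhere; the human stays the gate between the model's output and the codebase.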

    • CurleighBraces 36 minutes ago |
      Yeah if you've not used codex/agent tooling yet it's a paradigm shift in the way of working, and once you get it it's very very difficult to go back to the copy-pasta technique.

      There's obviously a whole heap of hype to cut through here, but there is real value to be had.

      For example yesterday I had a bug where my embedded device was hard crashing when I called reset. We narrowed it down to the tool we used to flash the code.

      I downloaded the repository, jumped into codex, explained the symptoms and it found and fixed the bug in less than ten minutes.

      There is absolutely no way I'd have been able to achieve that speed of resolution myself.

    • tmountain 15 minutes ago |
      I used to do it the way you were doing it. A friend went to a hackathon where everyone was using Cursor and insisted that I try it. It lets you set project-level "rules" that are basically prompts for how you want things done. It has access to your entire repo. You tell the agent what you want to do, it does it, and it allows you to review the result. It's that simple, although you can take it much further if you want or need to. For me, this is a massive leap forward on its own. I'm still getting up to speed with reproducible prompt patterns like TFA mentions, but it's okay to work incrementally towards better results.
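      Mechanically, project-level "rules" amount to prefixing every task prompt with the rules file. A rough sketch of that idea, assuming a hypothetical `PROJECT_RULES.md` (this is not Cursor's actual implementation):

```python
from pathlib import Path

def build_prompt(task, rules_file="PROJECT_RULES.md"):
    """Prepend project-level rules (if present) to every task prompt,
    roughly what rule-aware editors do under the hood."""
    rules = Path(rules_file)
    prefix = rules.read_text() + "\n\n" if rules.exists() else ""
    return prefix + task
```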
  • _pdp_ 39 minutes ago |
    If you are interested here is a list of actual agentic patterns - https://go.cbk.ai/patterns
  • embedding-shape 39 minutes ago |
    > The Real Bottleneck: Time

    Already a "no"; the bottleneck is "drowning under your own slop". Ever noticed how fast agents seem to work at the beginning of a project, but the larger it grows, the slower they get at making good changes that don't break other things?

    This is because you're missing the "engineering" part of software engineering, where someone has to think about the domain, design, tradeoffs and how something will be used, which requires good judgement and good wisdom regarding what is a suitable and good design considering what you want to do.

    Lately (last year or so), more client jobs of mine have basically been "Hey, so we have this project that someone made with LLMs, they basically don't know how it works, but now we have a ton of users, could you redo it properly?", and in all cases, the applications have been built with zero engineering and with zero (human) regards to design and architecture.

    I have not yet had any clients come to me and say "Hey, our current vibe-coders are all busy and don't have time, help us with X"; it's always "We've built hairball X, rescue us please?", and that to me makes it pretty obvious what the biggest bottleneck with this sort of coding is.

    Moving slower is usually faster long-term, provided you think about the design, but it's obviously slower short-term, which makes it kind of counter-intuitive.

    • catlifeonmars 7 minutes ago |
      [delayed]
  • comboy 9 minutes ago |
    Here's a pattern I noticed: you find some technique that works (let's say planning or TODO management); if it's indeed solid, it gets integrated into the black box and your agent starts doing it internally. At that point, your abstraction on top becomes defective, because agents get confused about planning the planning.

    So with the top performers, I think what's most effective is just stating clearly what you want the end result to be (with maybe some hints for verifying results, which is just clarifying the intent further).
