• agentultra 5 days ago |
    Traditional engineering involves more than talking to people, working with teams, and weighing trade-offs.

    Where I live, the engineering society now licenses software engineers and enforces "Software Engineer" as a protected term. You can't just call yourself a software engineer. You have to have educational credentials, pass exams, and be licensed. You have to keep up with continuing-education requirements. You have to be insured.

    It boils down to the same thing, people and teams, but the difference is liability.

    Personally I think we're better off pair programming with actual people than with GenAI chatbots. We get to teach each other and learn together, which improves our skills. We actually need to socialize and be around people to remain healthy. You miss out on all of these benefits with chatbots.

    And there's growing evidence that you don't learn as much when using them [0].

    Consider when using them is appropriate, and maybe don't rely on them for everything.

    [0] https://arxiv.org/abs/2506.08872

    • udog2 5 days ago |
      I'm curious, which country or region does this licensing? I'd like to learn more.
    • paodealho 5 days ago |
      > ... "Software Engineer," as a protected term. You can't just call yourself a software engineer.

      In my irrelevant opinion, this is good. To me at least, the word engineer represents someone with a big, heavy responsibility on their hands.

      I never liked being called an engineer and only have it on my resume because that's the keyword recruiters search for on LinkedIn nowadays. One reason is that I don't have a formal education. The other is that, in almost 15 years of experience, I witnessed very few occasions of software receiving proper care to the extent that I could call it "engineering".

    • FatherOfCurses 5 days ago |
      Thank fucking god. I have been advocating for this for years, I'm glad some engineering societies are actually doing it.
  • ramon156 5 days ago |
    > There is so much fun, beauty and pleasure in writing code by hand. You can still handcraft code. Just don’t expect this to be your job. This is your passion.

    Can people keep a good mental model of the repo without writing code? I always feel like I lose my thoughts if I let an LLM do it for me. I totally get that LLMs can do stuff faster and (given the right context) sometimes keep better track of this than humans can.

    Even musicians had to go digital, but that doesn't mean people stopped playing raw instruments. Will company culture shift towards one senior who holds the context plus 7 LLMs that work for him? Is that where we're heading?

    • WillAdams 5 days ago |
      I've found that the best aid to keeping a mental model of the structure of a project is to document it well using Literate Programming:

      http://literateprogramming.com/

      which then affords a PDF with a ToC, indices, sidebars, and other navigational aids, all hyperlinked so as to make moving through the code and its documentation quick and fluid.

      Then, when I arrive at the section of code which needs to be updated, the documentation and reasoning about its current state is right there.

      Not sure if this scales up to multiple developers though....

      • son_of_gloin 5 days ago |
        Literate Programming would also probably help coding assistants understand the code better and generate better output. Or is that the point you are trying to make?
        • WillAdams 5 days ago |
          No, but that's an interesting adjunct.

          Maybe if that takes hold, LP will finally take off?

    • TylerLives 5 days ago |
      You're still in charge; don't let LLMs do whatever they want.
    • threethirtytwo 5 days ago |
      Eventually AI will get so good that we won't even need context. That's the direction we are heading.

      Digitization of music has a wall. There's an aspect of intelligence that simply digitizing things can't replace. AI is rapidly climbing over that wall.

    • dkarl 5 days ago |
      > Can people keep a good mental model of the repo without writing code?

      This is the age-old problem of legacy codebases. The intentions and understanding of the original authors (the "theory of the program"[0]) are incredibly valuable, and codebases have always started to decline and gain unnecessary complexity when those intentions are lost.

      Now every codebase is a legacy codebase. It remains to be seen if AI will be better at understanding the intentions behind another AI's creation than humans are at understanding the work of other humans.

      Anyway, reading and correcting AI code is a huge part of my job now. I hate it, but I accept it. I have to read every line of code to catch crazy things like a function replicated from a library that the project already uses, randomly added to the end of a code file. Errors that get swallowed. Tautological tests. "You're right!" says the AI, over and over again. And in the end I'm responsible as the author, the person who supposedly understands it, even though I don't have the advantage of having written it.

      [0] Programming as Theory Building, Peter Naur. https://pages.cs.wisc.edu/~remzi/Naur.pdf

      • macintux 5 days ago |
        Reviewing a PR from someone who used Copilot but barely knows the language has been a mixture of admiration that AI can create such a detailed solution relatively quickly and frustration with the excess complexity, unused code, etc.
    • fhd2 5 days ago |
      Based on my own experience as someone taking the journey from junior developer to CTO of a mid-sized company: No, you can't keep that mental model for long.

      At first I would write code, which involves a ton of reading and _truly_ understanding code written by others.

      Then I would increasingly spend my (technical) time on code reviews. At some point I lost a lot of my intuition about the system, and proper reviews took so long that I ended up delegating all of that.

      Finally, I would mainly talk to middle managers and join high level conversations. I'd still have a high level idea about how everything worked, but kinda lost my ability to challenge what our technical people told me. I made sure to carve out some time to try and stay on top, but I got really rusty.

      This was over a time frame of perhaps two or three years. Since then, I've made changes, working at lower levels. I think I got my mojo back, but it took another one or two years of spending ~50% of my day programming.

      Other people will be different, but that's how it was for me. To truly understand and memorise something, I need to struggle with it personally. And truly understanding things helps with a lot of higher level work.

      But as with anything that takes a few years to materialise, you usually notice it quite a while after the damage is done. Long feedback cycles (like for business decisions, investments into code quality, etc.) are the root of all evil, IMHO.

    • pizlonator 5 days ago |
      > Can people keep a good mental model of the repo without writing code?

      Probably not.

      In my experience working on large SW, you can't do interesting stuff without talking to the people who wrote the code that your interesting stuff will touch. And the reason you talk to those people is to get their visceral feel for how their code will be affected.

      • brabel 5 days ago |
        If that's the case you really need to get some documentation in place. Otherwise, after a few older people leave, your code can never be changed properly anymore!
    • daxfohl 5 days ago |
      They're currently still at the "create a million branches until the tests pass" phase. Once they are more capable of genuine design, refactoring, and maintenance, their code will arguably be more readable than code written by humans, as they'll be more able and more inclined to refactor on the fly rather than letting things bitrot the way we do. They'll be better able to refactor without regressions and refactor across services, all without disrupting feature development and parallel work (because they're fast enough that work will become more serialized), and they'll remember to update documentation accordingly.

      Unless there's a wall getting AI to plan, design, and refactor better than it does now.

    • bitwize 5 days ago |
      Software engineering concerns itself with the industrial production of software, at scale. Much like chemical engineering for chemicals. Artisanal handcrafted software is kind of outside the scope of the discipline of software engineering.

      AI generation and assistance is such an accelerant to industrial software production that it is now a must for serious software engineering. If you need to evolve your mental model of the code base, ask your LLM. It's at least as good an accelerant of understanding existing code as it is of writing new code—maybe better.

    • xixixao 5 days ago |
      The original submission addresses this point fairly well.

      (No, but you can use AI to get around that)

  • m0llusk 5 days ago |
    Still not convinced.

    Advocates claim this helps, but imply that programmers should now be constantly evaluating a dozen or more assistants in various ways and honing prompting strategies. Does that not take time and effort? Wouldn't it make more sense to let the technology develop more?

    And then there is this claim that it makes things easier and faster, but what is actually being coded? Has anything remarkable ever come out of vibe coding? Seems like nothing but a lot of unremarkable applications that push no boundaries.

    Really coming up with new ideas often means generating variations and then honing the results. When iterating like that, the speed of coding is already a minor input, because what really matters is the judgement calls of knowing what kinds of things to try and when to abandon approaches that are not working out.

    And what about all the negative implications that we are still trying to figure out? Just because no vibe code has triggered intellectual property lawsuits yet doesn't mean it won't happen; there is already a lot of that kind of thing going on in the LLM space. And what about the damage that data centers are doing to the environment, to the grid, and to markets for processors and memory? When I code things by hand, the amount of liability and wreckage I generate is minimal.

    • brabel 5 days ago |
      > Seems like nothing but a lot of unremarkable applications that push no boundaries.

      That’s 99% of software.

  • empiko 5 days ago |
    One thing that I do not see mentioned enough is how fast editing code is now. Say I want to create a new function to replace functionality that appears in multiple places: I can just write the name of the function and the IDE understands what I want to do. It will then promptly copy the existing code into this new function and suggest replacing the occurrences in the code with it. Tab, tab, tab, and the edit is complete in a few clicks.
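    For example (a hypothetical sketch; the function and names here are mine, purely for illustration), the duplicated expression becomes a named helper and each occurrence becomes a call:

      # Hypothetical "extract function" refactor: the repeated expression
      # raw.strip().lower().replace(" ", "_") becomes a single named helper.
      def normalize_name(raw: str) -> str:
          """Trim, lowercase, and replace spaces so names compare reliably."""
          return raw.strip().lower().replace(" ", "_")

      # Each former copy of the expression is now just a call site.
      print(normalize_name("  Jane Doe "))  # -> jane_doe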
  • cadamsdotcom 2 days ago |
    > When coding assistants write most of the software, the fidelity of the mental model I hold degrades quickly. Instead of fighting this new normal I have been trying to create methods to use the model as a tool to query and develop the mental model on-demand.

    That’s the “after the fact” approach.

    Instead of this, I had Claude Code write a script which, when run, traverses the AST of the entire codebase and errors if any architectural constraint is violated. For example, backend route handlers cannot import or use the database client. The error message says "you must go through the service and repository layers".
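    A minimal sketch of what such a check could look like in Python (the directory, package name, and message below are placeholder assumptions, not the actual script):

      # Hypothetical architecture check: fail if a route handler imports the DB client.
      import ast
      import pathlib
      import sys

      ROUTE_DIR = pathlib.Path("backend/routes")   # assumed location of route handlers
      FORBIDDEN_PREFIX = "app.db"                   # assumed database client package

      def forbidden_imports(tree):
          """Yield imported module names that reach into the database layer."""
          for node in ast.walk(tree):
              if isinstance(node, ast.Import):
                  yield from (a.name for a in node.names if a.name.startswith(FORBIDDEN_PREFIX))
              elif isinstance(node, ast.ImportFrom) and node.module and node.module.startswith(FORBIDDEN_PREFIX):
                  yield node.module

      def main():
          failed = False
          for path in ROUTE_DIR.rglob("*.py"):
              for name in forbidden_imports(ast.parse(path.read_text())):
                  print(f"{path}: imports {name}; you must go through the service and repository layers",
                        file=sys.stderr)
                  failed = True
          return 1 if failed else 0

      if __name__ == "__main__":
          sys.exit(main())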

    This script is then included in the agent's coding loop by being added as a custom check in pre-commit (or husky, if you're using Node et al.).

    The coding agent now treats architecture violations as a compiler error. It reads the error, has a think, and fixes it... and I sip coffee and work on something else.

    Never be looking at code and talking back to your agent. Your agent should only be handing you code that'd pass code review. That bar is something I've built up to incrementally and will keep adding to forever.