Where I live, the engineering society does license software engineers now and enforces "Software Engineer" as a protected term. You can't just call yourself a software engineer: you need the educational credentials, you have to pass exams and be licensed, you have to keep up with continuing-education requirements, and you have to be insured.
It boils down to the same thing, people and teams, but the difference is liability.
Personally I think we're better off pair programming with actual people than with GenAI chatbots. We get to teach each other and learn together, which improves our skills. We also need to socialize and be around people to stay healthy. You miss out on all of these benefits with chatbots.
And there's growing evidence that you don't learn as much when using them [0].
Consider when using them is appropriate, and maybe don't rely on them for everything.
In my irrelevant opinion, this is good. To me at least, the word engineer represents someone with a big, heavy responsibility on their hands.
I never liked being called an engineer, and I only have it on my resume because that's the keyword recruiters search for on LinkedIn nowadays. One reason is that I don't have a formal education. The other is that, in almost 15 years of experience, I've witnessed very few occasions of software receiving enough care that I could call it "engineering".
Can people keep a good mental model of the repo without writing code? I always feel like I lose my thoughts if I let an LLM do it for me. I totally get that LLMs can do stuff faster and (given the right context) sometimes keep better track of things than humans can.
Even musicians had to go digital, but that doesn't mean people stopped playing raw instruments. Will company culture shift towards one senior who holds the context plus 7 LLMs that work for him? Is that where we're heading?
http://literateprogramming.com/
which then affords a PDF with a ToC, indices, sidebars, and other navigational aids, all hyperlinked so that moving through the code and its documentation is quick and fluid.
Then, when I arrive at the section of code that needs to be updated, the documentation and reasoning about its current state are right there.
Not sure if this scales up to multiple developers though...
Maybe if that takes hold, LP will finally take off?
Digitization of music has a wall: there's an aspect of intelligence that simply digitizing things can't replace. AI is rapidly climbing over that wall.
This is the age-old problem of legacy codebases. The intentions and understanding of the original authors (the "theory of the program"[0]) are incredibly valuable, and codebases have always started to decline and gain unnecessary complexity when those intentions are lost.
Now every codebase is a legacy codebase. It remains to be seen if AI will be better at understanding the intentions behind another AI's creation than humans are at understanding the work of other humans.
Anyway, reading and correcting AI code is a huge part of my job now. I hate it, but I accept it. I have to read every line of code to catch crazy things like a function replicated from a library that the project already uses, randomly added to the end of a code file. Errors that get swallowed. Tautological tests. "You're right!" says the AI, over and over again. And in the end I'm responsible as the author, the person who supposedly understands it, even though I don't have the advantage of having written it.
[0] Programming as Theory Building, Peter Naur. https://pages.cs.wisc.edu/~remzi/Naur.pdf
At first I would write code, which involves a ton of reading and _truly_ understanding code written by others.
Then I would increasingly spend my (technical) time on code reviews. At some point I lost a lot of my intuition about the system, and proper reviews took so long that I ended up delegating all of them.
Finally, I would mainly talk to middle managers and join high-level conversations. I'd still have a high-level idea of how everything worked, but I kinda lost my ability to challenge what our technical people told me. I made sure to carve out some time to try and stay on top, but I got really rusty.
This was over a time frame of perhaps two or three years. Since then, I've made changes, working at lower levels. I think I got my mojo back, but it took another one or two years of spending ~50% of my day programming.
Other people will be different, but that's how it was for me. To truly understand and memorise something, I need to struggle with it personally. And truly understanding things helps with a lot of higher level work.
But as with anything that takes a few years to materialise, you usually notice it quite a while after the damage is done. Long feedback cycles (like those for business decisions, investments in code quality, etc.) are the root of all evil, IMHO.
Probably not.
In my experience working on large SW, you can't do interesting stuff without talking to the people who wrote the code that your interesting stuff will touch. And the reason you talk to those people is to get their visceral feel for how their code will be affected.
Unless there's a wall that keeps AI from planning, designing, and refactoring better than it does now.
AI generation and assistance is such an accelerant to industrial software production that it is now a must for serious software engineering. If you need to evolve your mental model of the code base, ask your LLM. It's at least as good an accelerant of understanding existing code as it is of writing new code—maybe better.
(No, but you can use AI to get around that)
Advocates claim this helps, but imply that programmers should now be constantly evaluating a dozen or more assistants in various ways and honing prompting strategies. Does that not take time and effort? Wouldn't it make more sense to let the technology develop more?
And then there is this claim that it makes things easier and faster, but what is actually being coded? Has anything remarkable ever come out of vibe coding? Seems like nothing but a lot of unremarkable applications that push no boundaries.
Coming up with genuinely new ideas often means generating variations and then honing the results. When iterating like that, the speed of coding is a minor input, because what really matters is the judgement calls of knowing what kinds of things to try and when to abandon approaches that aren't working out.
And what about all the negative implications that we are still trying to figure out? Just because vibe code hasn't yet triggered intellectual property lawsuits doesn't mean it won't; there is already a lot of that kind of thing going on in the LLM space. And what about the damage that data centers are doing to the environment, to the grid, and to the markets for processors and memory? When I code things by hand, the amount of liability and wreckage I generate is minimal.
That’s 99% of software.
That’s the “after the fact” approach.
Instead of this, I had Claude Code write a script that, when run, traverses the AST of the entire codebase and raises an error if any architectural constraint is violated. For example, backend route handlers cannot import or use the database client; the error message says "you must go through the service and repository layers".
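A minimal sketch of what such a checker could look like in Python. The layout (app/routes/) and the forbidden package (app.db) are hypothetical stand-ins for whatever your own architecture forbids:

    # check_architecture.py -- sketch of an AST-based architecture check.
    # Assumption: route handlers live under app/routes/ and must not
    # import anything from the (hypothetical) app.db package.
    import ast
    import sys
    from pathlib import Path

    FORBIDDEN_PREFIX = "app.db"

    def forbidden_imports(path: Path) -> list[str]:
        """Return the forbidden module names imported by one source file."""
        tree = ast.parse(path.read_text(), filename=str(path))
        hits = []
        for node in ast.walk(tree):
            if isinstance(node, ast.Import):
                hits += [a.name for a in node.names
                         if a.name.startswith(FORBIDDEN_PREFIX)]
            elif isinstance(node, ast.ImportFrom) and node.module:
                if node.module.startswith(FORBIDDEN_PREFIX):
                    hits.append(node.module)
        return hits

    def main() -> int:
        violations = 0
        for path in Path("app/routes").rglob("*.py"):
            for name in forbidden_imports(path):
                print(f"{path}: imports {name}; you must go through "
                      "the service and repository layers")
                violations += 1
        return 1 if violations else 0

    if __name__ == "__main__":
        sys.exit(main())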
This script is then included in the agent's coding loop by being added as a custom check in pre-commit (or Husky, if you're using Node et al.).
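For pre-commit, that could be a local hook along these lines (the script path is an assumption; pass_filenames is off so the script scans the whole tree rather than just staged files):

    # .pre-commit-config.yaml -- wire the checker in as a local hook
    repos:
      - repo: local
        hooks:
          - id: architecture-check
            name: architecture constraints
            entry: python check_architecture.py
            language: system
            pass_filenames: false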
The coding agent now treats architecture violations like compiler errors. It reads the error, has a think, and fixes it... and I sip coffee and work on something else.
Never be looking at code and talking back to your agent. Your agent should only be handing you code that would pass code review. That bar is something I've built up to incrementally and will keep raising forever.