> It's not just brain atrophy, I think. I think part of it is that we're actively making a tradeoff to focus on learning how to use the model rather than learning how to use our own brains and work with each other.
I agree with the sentiment but I would have framed it differently. The LLM is a tool, just like code completion or a code generator. Right now we focus mainly on how to use a tool, the coding agent, to achieve a goal. This takes place at a strategic level. Prior to the advent of LLMs, we focused mainly on how to write code to achieve a goal. That took place at a tactical level, and required making decisions and paying attention to a multitude of details. With LLMs our focus shifts to a higher level of abstraction.

Operational concerns change too. When writing and maintaining code yourself, you focus on architectures that simplify certain classes of changes. When using LLMs, your focus shifts to building context and helping the model implement its changes effectively; a sketch of what that can look like follows below. The two goals seem related, but are radically different.
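To make "building context" concrete, here is a minimal, hypothetical sketch of the kind of glue people put around a coding agent: gather the files relevant to a task into a single prompt so the model sees the right slice of the codebase. The file paths, the selection, and the prompt layout are all illustrative assumptions, not any particular tool's behavior.

```python
from pathlib import Path

def build_context(repo_root: str, relevant_files: list[str], task: str) -> str:
    """Assemble a prompt from a task description and hand-picked files."""
    parts = [f"Task: {task}\n"]
    for rel in relevant_files:
        path = Path(repo_root) / rel
        if path.exists():  # skip missing files silently; this is only a sketch
            parts.append(f"--- {rel} ---\n{path.read_text()}\n")
    return "\n".join(parts)

# The human's remaining job is exactly the "strategic" part: pick which
# files matter and state the goal precisely. The paths are hypothetical.
prompt = build_context(
    ".",
    ["src/auth.py", "tests/test_auth.py"],
    "Add rate limiting to the login endpoint without breaking the tests.",
)
print(prompt)
```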
I think a fairer description is that with LLMs we stop exercising some skills that are only required or relevant if you are writing your code yourself. It's like driving with an automatic transmission vs manual transmission.
Previous tools have been deterministic and understandable. I write code with emacs and can at any point look at the source and tell you why it did what it did. I could produce the same program with vi or vscode or whatever, at the cost of some frustration, but they all ultimately transform keystrokes into a text file in largely the same way, and the compiler I'm targeting turns that into asm and thence into binary in a predictable and visible way.
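To illustrate that predictability, here is a small sketch (assuming gcc is on your PATH; any C compiler with an -S flag would do): compiling the same source twice with the same flags yields byte-identical assembly, which is exactly the property an LLM-based tool does not have.

```python
import hashlib
import subprocess
import tempfile
from pathlib import Path

SOURCE = "int add(int a, int b) { return a + b; }\n"

def asm_hash(src: Path, out: Path) -> str:
    """Compile src to assembly with fixed flags and hash the result."""
    subprocess.run(["gcc", "-S", "-O2", "-o", str(out), str(src)], check=True)
    return hashlib.sha256(out.read_bytes()).hexdigest()

with tempfile.TemporaryDirectory() as tmp:
    src = Path(tmp) / "add.c"
    src.write_text(SOURCE)
    first = asm_hash(src, Path(tmp) / "first.s")
    second = asm_hash(src, Path(tmp) / "second.s")
    print(first == second)  # True: same input, same flags, same output
```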
An LLM is always going to be a black box that is neither predictable nor visible (the unpredictability is necessary for how the tool functions; the invisibility is not but seems too late to fix now). So teams start cargo culting ways to deal with specific LLMs' idiosyncrasies and your domain knowledge becomes about a specific product that someone else has control over. It's like learning a specific office suite or whatever.
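The unpredictability has a concrete mechanism: at each step the model produces a distribution over possible next tokens, and anything other than always taking the most likely token introduces randomness. A self-contained sketch of temperature sampling (toy logits standing in for a real model) shows it:

```python
import math
import random

# Toy logits; a real model would compute these from the full context.
logits = {"foo": 2.0, "bar": 1.5, "baz": 0.5}

def sample(logits: dict[str, float], temperature: float) -> str:
    if temperature == 0:
        return max(logits, key=logits.get)  # greedy: fully deterministic
    weights = [math.exp(v / temperature) for v in logits.values()]
    return random.choices(list(logits), weights=weights)[0]

print([sample(logits, 0.0) for _ in range(5)])  # same token every time
print([sample(logits, 1.0) for _ in range(5)])  # varies from run to run
```

Greedy decoding (temperature 0) would be reproducible, but models are generally run with a nonzero temperature because greedy output tends to be repetitive and degenerate, which is why the unpredictability is part of how the tool functions.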
> An LLM is always going to be a black box that is neither predictable nor visible (the unpredictability is necessary for how the tool functions; the invisibility is not but seems too late to fix now)
So basically, like a co-worker.
That's why I keep insisting that anthropomorphising LLMs is to be embraced, not avoided, because it gives much better high-level, first-order intuition as to where they belong in a larger computing system, and where they shouldn't be put.
Sort of, except it seems the more the co-worker does the job, the more my ability to understand it atrophies. So soon we'll all be that annoyingly ignorant manager saying, "I don't know, I want the button to be bigger". Yay?
Only if we're lucky and LLMs stop being replaced by improved models.
Claude has already shown us people who openly say "I don't code and yet I managed this". Right now the command-line UI will scare off a lot of people, and people using LLMs still benefit from technical knowledge and product design skills, so if the tools don't improve we keep that advantage…
…but how long will it be before the annoyingly ignorant customer skips the expensive annoyingly ignorant manager along with all us expensive developers, and has one of the models write them a bespoke solution for less than the cost of an off-the-shelf shrink-wrapped DVD from a discount store?
> people using LLMs still benefit from technical knowledge and product design skills, so if the tools don't improve we keep that advantage…
I don't think we will, because many of us are already asking LLMs for help and advice on these, so we're already close to the point where LLMs will be able to use these capabilities directly, instead of just helping us drive the process.
Indeed, but the output of LLMs today for these kinds of tasks is akin to that of a junior product designer, a junior project manager, a junior software architect, etc.
For those of us who are merely amateur at any given task, LLMs raising us to "junior" is absolutely an improvement. But just as it's possible to be a better coder than an LLM, if you're a good PM or QA or UI/UX designer, you're not obsolete yet.
> and can at any point look at the source and tell you why it did what it did
Even years later? Most people can’t unless there are good comments and good design. Which AI can replicate, so if we need to do that anyway, how is AI especially worse than a human looking back at code written poorly years ago?
I mean, Emacs's oldest source files are like 40 years old at this point, and yes they are in fact legible? I'm not sure what you're asking -- you absolutely can (and if you use it long enough, will) read the source code of your text editor.
What little experience I have with LLMs shows clearly that they are much better at navigating and modifying a well-structured code base, and that they struggle, sometimes to the point where they can't progress at all, when tasked with working on bad code. I mean, the kind of bad you always get after multiple rounds of unsupervised vibe coding.