This article is really only useful if LLMs are actually able to close the gap from where they are now to where we want them to be in a reasonable amount of time. There are plenty of historical examples of technologies where the last few milestones are nearly impossible to achieve: hypersonic/supersonic travel, nuclear waste disposal, curing cancer, error-free language translation, etc. All of them have had periods of great immediate success, but development/research always gets stuck in the mud (sometimes for decades) because the level of complexity required to complete the race is exponentially higher than it was at the start.
I'm not saying you should disregard today's AI advancements; I think some level of preparedness is a necessity. But to go all in on the idea that deep learning will power us to true AGI is a gamble. We've dumped billions of dollars and countless hours of research into developing a cancer cure for decades, but we still don't have one.
LLMs are noisy channels: there's some P(correct|context). You can increase the reliability of a noisy channel to an arbitrary epsilon using codes. The simplest example of this in action is majority-vote decoding, which maps 1:1 to running parallel LLM implementations and having the parallel implementers debate the solution. You can implement more advanced codes, but that requires being able to decompose structured LLM output and, in most cases, having some sort of correctness oracle.
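A rough sketch of the majority-vote idea (my own illustration, not something from the article or the comment above): sample the same prompt several times and decode by taking the most common answer. The `query_llm` function here is a hypothetical stand-in for whatever LLM API you actually use.

```python
from collections import Counter

def query_llm(prompt: str) -> str:
    """Hypothetical stand-in for a real LLM API call."""
    raise NotImplementedError

def majority_vote(prompt: str, n_samples: int = 5) -> str:
    """Treat each sample as one use of a noisy channel and decode by majority.

    If P(correct | context) > 0.5 and errors across samples are roughly
    independent, reliability improves as n_samples grows (a repetition code).
    """
    answers = [query_llm(prompt) for _ in range(n_samples)]
    # Light normalization so trivially different strings count as the same answer.
    normalized = [a.strip().lower() for a in answers]
    winner, _count = Counter(normalized).most_common(1)[0]
    return winner
```

More advanced schemes need the structure the comment mentions: a way to decompose the output into checkable pieces and some oracle to score them.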
I would argue that "augmented programming" (as the article terms it) both is and isn't analogous to the other things you mentioned.
"Augmented programming" can be used to refer to a fully-general-purpose tool that one always programs with/through, akin in its ubiquity to the choice to use an IDE or a high-level language. And in that sense, I think your analogies make sense.
But "augmented programming" can also be used to refer to use of LLMs under constrained problem domains, where the problem already can be 100% solved with current technology. Your analogies fall apart here.
A better analogy that covers both of these cases might be something like grid-scale power storage. We don't have any fully-general grid-scale power storage technologies that we could e.g. stick in front of every individual windmill or solar farm, regardless of context. But we do have domain-constrained grid-scale power storage technologies that work today to buffer power in specific contexts. Pumped hydroelectric storage is slow and huge and only really reasonable in terms of CapEx in places you're free to convert an existing hilltop into a reservoir, but provides tons of capacity where it can be deployed; battery-storage power stations are far too high-OpEx to scale to meet full grid needs, but work great for demand smoothing to loosen the design ramp-rate tolerances for upstream power stations built after the battery-storage station is in place; etc. Each thing has trade-offs that make it inapplicable to general use, but perfect for certain uses.
I would argue that "augmented programming" is in exactly that position: not something you expect to be using 100% of the time you're programming; but something where there are already very specific problems that are constrained-enough that we can design agentive systems that have been empirically observed to solve those problems 100% of the time.
I'm already not going back to the way things were before LLMs. This is fortunately not a technology where you have to go all-in. Having it generate tests and classes, solve painful typing errors and help me brainstorm interfaces is already life-changing.
In software we are always 90% there. It's that last 10% that gives us jobs. I don't see LLMs as that different from, let's say, the time compilers or high-level languages appeared.
100%. Exactly as you've pointed out, some technologies - or their "last" milestones - might never arrive, or could be way further into the future than people initially anticipated.
> Don’t bother predicting which future we'll get. Build capabilities that thrive in either scenario.
I feel this is a bit like the "don't be poor" advice (I'm being a little mean here maybe, but not too much). Sure, focus on improving understanding & judgement - I don't think anybody really disagrees that having good judgement is a valuable skill, but how do you improve that? That's a lot trickier to answer, and that's the part where most people struggle. We all intuitively understand that good judgement is valuable, but that doesn't make it any easier to make good judgements.
It's just experience, i.e. a collection of personal reference points built up from seeing how those judgements have played out over time in reality. This is what can't be replaced.
I think the current state of AI is absolutely abysmal, borderline harmful for junior inexperienced devs who will get led down a rabbit hole they cannot recognize. But for someone who really knows what they are doing it has been transformative.
> Will this lead to fewer programmers or more programmers?
> Economics gives us two contradictory answers simultaneously.
> Substitution. The substitution effect says we'll need fewer programmers—machines are replacing human labor.
> Jevons’. Jevons’ paradox predicts that when something becomes cheaper, demand increases as the cheaper good is economically viable in a wider variety of cases.
The answer is a little more nuanced. Assuming the above, the economy will demand fewer programmers for the previous set of demanded programs.
However, the set of demanded programs will likely evolve. To over-simplify it absurdly: if before we needed 10 programmers to write different Fibonacci generators, now we'll need 1 to write those and 9 to write more complicated stuff.
Additionally, the total number of people doing "programming" may go up or down.
My intuition is that the total number will increase but that the programs we write will be substantially different.
This mindset that the value of code is always positive is responsible for a lot of problems in industry.
Additional code is additional complexity; "cheap" code is cheap complexity. The decreasing cost of code is comparable to the decreasing cost of chainsaws, table saws, or high-powered lasers. If you are a power user of these things then having them cheaply available is great. If you don't know what you're doing, then you may be exposing yourself to more risk than reward by having easier access to them. You could accidentally create an important piece of infrastructure for your business that gives the wrong answers, or that requires expensive software engineers to come in and fix. You can accidentally cost yourself more in time dealing with the complexity you created than the automation ever brought in benefit.
Well, this has happened to me when putting pieces of code directly in front of an AI. You go 800% faster or more, and then you have to go and finish it. All the increase in speed is lost in debugging, fixing, fitting, and other mundane tasks.
I believe the reason for this is that we still need judgement to do those tasks. AIs are not perfect at it, and at times they spit out a lot of extra code and complexity. So now you need to reduce that complexity. But to reduce it, you need to understand the code in the first place. Now you cut here and there, you find a bug, and you are diving into code you do not fully understand yet.
So human cognition has to keep pace with what the AI is doing.
What ended up happening to me (not all the time; for one-off scripts, small scripts, or authoring a well-known algorithm that is short enough to write without bugs, this is irrelevant) is that I get a sense of speed that turns out not to be real once I have to complete the task as a whole.
On top of that, as a human you tend to lose more context if you generate a lot of code with AI, and the judgement must be yours anyway. At least until AIs get really brilliant at it.
They are good at other things. For example, I think they do decently well at reviewing code and finding potential improvements. Because if they say bullsh*t, as any of us could in a review, you just move on to the next comment, and you can always find something valuable in there.
Same for "combinatoric thinking". But for tasks that need more "surgery" and precision, I do not think they are particularly good; they just make you feel like they are, and when you have to deal with the whole task you notice this is not the case.
Interesting, but way too optimistic and biased towards the scenario in which endless progress of LLMs and similar tools is just a given, when it's not.
"Every small business becomes a software company. Every individual becomes a developer. The cost of "what if we tried..." approaches zero.
Publishing was expensive in 1995, exclusive. Then it became free. Did we get less publishing? Quite the opposite. We got an explosion of content, most of it terrible, some of it revolutionary."
Some valid questions are asked in the article, but I don't like the terminology it uses, from the title through the content, to assess the situation and the options. I'd rather call it the Commoditization of Software Engineering.
> How would one even market oneself in a world where this is what is most valued?
That's basically the job description of any senior software development role, at least at any place I've worked. As a senior, pumping out straightforward features takes a backseat to problem analysis and architectural decisions, including being able to describe tradeoffs and how they impact the business.
A related idea is sub-linear cost growth, where the unit cost of operating software gets cheaper the more it's used. This should be common, right? But it's oddly rare in practice.
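For concreteness, one way to state that idea (my framing with made-up constants a and b, not the commenter's): if total operating cost grows sub-linearly in usage n, the unit cost falls as usage grows.

```latex
C(n) = a + b\,n^{\alpha}, \quad 0 < \alpha < 1
\quad\Longrightarrow\quad
\frac{C(n)}{n} = \frac{a}{n} + b\,n^{\alpha - 1},
\ \text{which decreases as } n \text{ grows.}
```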
I suspect the reality around programming will be the same - a chasm between perception and reality around the cost.
I’ve been thinking about the impact of LLMs on software engineering through a Marxist lens. Marx described one of capitalism’s recurring problems as the crisis of overproduction: the economy becomes capable of producing far more goods than the market can absorb profitably. This contradiction (between productive capacity and limited demand) leads to bankruptcies, layoffs, and recessions until value and capital are destroyed, paving the way for the next cycle.
Something similar might be happening in software. LLMs allow us to produce more software, faster and cheaper, than companies can realistically absorb. In the short term this looks amazing: there’s always some backlog of features and technical debt to address, so everyone’s happy.
But a year or two from now, we may reach saturation. Businesses won’t be able to use or even need all the software we’re capable of producing. At that point, wages may fall, unemployment among engineers may grow, and some companies could collapse.
In other words, the bottleneck in software production is shifting from labor capacity to market absorption. And that could trigger something very much like an overproduction crisis. Only this time, not for physical goods, but for code.
I think this is a bit like attempting your own plumbing. Knowledge was never the barrier to entry nor was getting your code to compile. It just means more laypeople can add "programming" to their DIY project skills.
Maybe a few of them will pursue it further, but most won't. People don't like hard labor or higher-level planning.
Long term, software engineering will have to be more tightly regulated like the rest of engineering.
I agree with the first part of your comment, but I don't follow the rest - why should SE be more tightly regulated? It doesn't need to be; if anything, that would just stifle its progress and evolution.
I think AI will make more visible where code diverges from the average. Maybe auditing will be the killer app for near-future AI.
I'm also thinking about a world where more programmers are trying to enter the workforce self-taught using AI. The current backdrop is a continued lowering of education standards and a political climate hostile to universities.
The answer to all of the above, from the perspective of people who don't know or really care about the details, may be to cut the knot and impose regulation.
Delegate the details to auditors with AI. We're kinda already doing this on the cybersecurity front. Think about all the ads you see nowadays for earning your "cybersecurity certification" from an online-only university. Those jobs are real and people are hiring, but the expertise is still lacking because there aren't clearer guidelines yet.
With the current technology and generations of people we have, how else but with AI can you translate NIST requirements, vulnerability reports, and other docs that don't even exist yet but soon will, into pointing someone who doesn't really know how to code towards a line of code they can investigate? The tools we have right now, like SAST and DAST, are full of false positives, and non-devs are stumped on how to assess them.
Literally all new products nowadays come with a great deal of software and hardware, whether they are a SaaS or a kitchen product.
Programming will still exist; it will just be different. Programming has changed a lot of times before as well. I don't think this time is different.
If programming had suddenly become so easy to iterate on, people would be building new competitors to SAP, Salesforce, Shopify, and other solutions overnight, but you rarely see any good competitor come around.
The work involved in understanding your customers' needs and iterating on them between product and tech is not to be underestimated. AI doesn't help with that at all; at most it is a marginal improvement to the iteration loop.
Knowing what to build has been for a long time the real challenge.
> My intuition is that the total number will increase
Not if you believe most other articles related to AI posted here, including the one from today (from Singularity is Nearer).
> Did we get less publishing? Quite the opposite.
If only it were the same, and that simple.
How would one even market oneself in a world where this is what is most valued?
Question 2: Do you think this will ever become valuable?