> The thing is that even if I was wrong (I'm not) and AI was somehow helpful for software engineering (it isn't), I still wouldn't want to use it.
So even if you were wrong on the facts (you are) you still wouldn't change your mind? In other words, you're unreasonable and know you're unreasonable and think that's totally fine?
Well, cool. Next time, lead with that.
It's fine if people don't want to use AI for anything, and honestly I don't even believe you need to justify it. The justification given here is interesting, though, and I think it shows a misunderstanding.
At one point the author writes
> AI is a tool that can only produce software liabilities
which I would argue is completely caused by misuse of AI. Sure, you can have AI write a ton of code that often comes with subtle bugs. But using AI doesn't mean that it has to write any code for you at all. I've often been using LLMs for security analysis, and the results are quite good: they surfaced vulnerabilities that we had collectively missed, and we could fix them ourselves.
In this case, instead of creating liabilities, we were able to use an LLM to get more information about our code. It's completely possible we could have deduced this information on our own, but we didn't, and an LLM is capable of doing it much more quickly than humans can.
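To make that concrete, here is a minimal sketch of the kind of review loop I mean. The `ask_llm` helper is a hypothetical stand-in for whichever model client you actually use, and the prompt wording is illustrative, not a recipe:

```python
import subprocess

def review_latest_commit(ask_llm):
    """Ask an LLM to security-review the most recent commit.

    `ask_llm` is a hypothetical callable (prompt: str) -> str wrapping
    whatever model or API you use; nothing here is vendor-specific.
    """
    # Grab the diff of the last commit from git.
    diff = subprocess.run(
        ["git", "diff", "HEAD~1", "HEAD"],
        capture_output=True, text=True, check=True,
    ).stdout
    prompt = (
        "Review this diff for security issues only. List concrete "
        "vulnerabilities (injection, authz gaps, unsafe deserialization, "
        "and so on) with file and line references. If you find none, say so.\n\n"
        + diff
    )
    return ask_llm(prompt)
```

The point is that the model only reads code here; whether to act on a finding stays a human decision.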
I think I’m lucky that I never enjoyed programming; I enjoyed thinking about problems. That makes AI coding great, because I’m good enough at programming that I can describe what I want easily to an LLM, and I can judge the results very well for myself. I read and understand each line so I know I’m not committing crap. I only commit code that is roughly the same as I would have written anyway.
I feel similarly. I wanted to develop software, I didn’t want to “program”. I want my code to fix problems, I want the end result to feel great to use, I want it to be able to fix problems and feel great a year from now, too.
I want to be better month after month, I want to be able to discover new areas.
Using AI tools makes sense to me. It’s important not to believe everything the hype men are saying on Twitter, but it would also be a mistake to believe there is nothing valuable in this technology.
It feels as good for developer ergonomics as the move away from CRT monitors.
Just how much boilerplate have people been putting up with for this to be an oft-cited advantage of LLM usage? I know boilerplate has to exist somewhere, but I've been labouring these past couple of decades under the assumption that boilerplate should be rare and avoided.
> It feels as good for developer ergonomics as the move away from CRT monitors.
I kind of think CRT monitors were much better for developer ergonomics than LCDs, because modern monitors tend to sit much deeper on the desk, so you have to lean forward to see them. CRTs forced you to sit with better posture.
I was a hand-tool woodworker, but the first time I had to rip 56 six-foot boards into 7 strips, I immediately purchased a table saw. Now I use hand tools rarely, because I find the speed and quality of my cuts are better with the saw. I still use hand tools for work that requires certain standards, but electric tools almost always produce better-quality results.
It’s about the same for AI coding, I just get better results.
Similar to woodworking, sometimes I use the LLM to rough out the concept quickly, then refine it. The initial roughing looks awful, and this seems to bother some people a lot. It’s fine for me because I still have the correct tools to pull it all together. It saves me immense amounts of time.
Another analogy is using power tools to make jigs for hand tools. I’m constantly rigging up test or data-wrangling harnesses to improve my ability to verify and refine solutions. It’s so ridiculously useful for improving outputs, even if it isn’t writing the code that makes it to production.
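To give a sense of what such a jig looks like in code, here is a minimal sketch; the test cases and the phone-normalizer candidate are invented for illustration. The value is being able to re-check every LLM revision mechanically instead of by eyeball:

```python
# A throwaway verification jig: run a candidate implementation against
# known input/expected pairs and report any mismatches.
CASES = [
    ("(555) 123-4567", "+15551234567"),
    ("555.123.4567",   "+15551234567"),
    ("not a number",   None),
]

def check(candidate):
    """Return True if `candidate` (e.g. an LLM-written normalizer) passes."""
    failures = []
    for raw, want in CASES:
        got = candidate(raw)
        if got != want:
            failures.append((raw, want, got))
            print(f"{raw!r}: expected {want!r}, got {got!r}")
    print(f"{len(CASES) - len(failures)}/{len(CASES)} cases pass")
    return not failures
```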
Your power tools run out of tokens and you have to open yet another online account to get around the daily sawing limits in order to finish the task today?
You can use qwen 3.5 for genuinely useful stuff without worrying about subscriptions and tokens. The 35b model works well on my Mac Studio and does all kinds of menial tasks, so I can use my subscriptions for more important or complex things. I don’t think it’ll be long until models comparable to today’s Sonnet run on my machine.
I have no idea what the frontier will look like in a few years but I don’t doubt local models like qwen will still be a staple of my workflows.
And for what it’s worth, there are people out there who lose their sawing ability because a safety brake totals their blade, and it needs to be replaced for something like $100. Sometimes we pay extra for features we value. We can always pull out the hand tools if we have to. In the meantime, make hay, I guess.
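For what that local-model delegation can look like in practice, here is a minimal sketch, assuming you serve models through something like Ollama's CLI; the model tag is a placeholder for whatever you actually have pulled locally:

```python
import subprocess

def ask_local(prompt, model="qwen3:32b"):  # model tag is an assumption
    """Send a prompt to a locally served model via the `ollama` CLI."""
    result = subprocess.run(
        ["ollama", "run", model],  # the CLI reads the prompt from stdin
        input=prompt, capture_output=True, text=True, check=True,
    )
    return result.stdout

# Example: route a menial task to the local model, keeping paid
# subscriptions for harder problems.
print(ask_local("Summarize this changelog in three bullet points: ..."))
```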
I think we have to be careful with such analogies. One does not have to have sweated for years with hand tools to understand what an accurate rip cut through ply looks like. On the other hand, if you just gave someone some rough cut wood and an electric sander, how would they even understand what that wood could look like having never used a good, sharp hand plane?
With AI coding we're talking about people producing abstract artifacts that most people do not understand and do not know how to test. These aren't just strips of board. They are little machines. So you shouldn't be asking whether you'd trust a table saw to cut your boards, you should be asking whether you'd trust someone who has never cut boards to build your table saw.
Is it? Isn't it the inverse? The speed of your cuts improves a bit with AI, but aren't the cuts all rough and in need of additional work? Isn't the quality less than what you would achieve by hand?
Because that's what every AI usage I've experienced has been.
Faster, yes. Useful, yes. Not better "finish".
- Vendors get to know everything about you.
- Chips are becoming more politicized; I fear artificial scarcity, as with housing, will be imposed on chips, driving up prices.
- It causes a lot of centralisation. No, I cannot run DeepSeek at home. I don't have 100,000+ USD lying around. 1 TB of VRAM is not chump change.
- It can be a threat to the flourishing of open source. There is no longer a reason for me to work with other devs to build something in public together. I just have the LLM write what I need. It isolates.
These are the only drawbacks. Everything else is clearly the artisans' ego getting in the way. That being said, if a piece of code is critical infra on which many other things hinge, I will still hand-code it.
Most people would not be able to ride a horse properly; it would end in catastrophe (or in nothing happening: just standing around, or heading off in random directions). So your analogy is good, but not in the way you probably intended.
I'm all in favor of talking about drawbacks of AI coding and potential future problems. No problem. But at this point, a blanket statement that you'll never use it is not reasonable. It's the equivalent of a master car mechanic seeing a robot that can pretty reliably rebuild a transmission in a few minutes and saying "I'll never use that; I'll always do it myself." Okay, sure, buddy. You keep taking 8 hours to do what now takes everyone else 5 minutes. Knock yourself out.
Almost everything he says is reasonable and correct, though. Using AI does undermine understanding, and companies hiring fewer juniors will be the death of them. Juniors using AI will also be the death of deep understanding if they continue. Robots fixing cars is not an apt analogy, because that's a rote task. LLMs are being used far beyond rote tasks, and that's where the danger lies. People forget that most frustration and struggle are crucial, not something to remove. And people, especially beginners, do not have the judgement to know when struggling is appropriate.
By analogy, you can imagine a mid-century human computer shouting "I will never use computing machines to perform numerical calculations! I must perform every addition by hand!" You can even imagine a commune forming around "hand-made calculation" and trying to sell services that are certified "automation-free".
Back in the days of the early industrial revolution, when roads were being improved, I imagine there were quite a few "horses forever" people. Some people embrace progress, some hate it. No one, however, is comfortable with change if they have skin in the game.
And everyone having a calculator from grade 4 in school hasn't made everyone an accountant.
But to be fair, no one has ever experienced change as fast as our profession has.
As much as I also enjoyed the actual coding part, a lot of it is just... boring plumbing. I enjoy solving the problems: designing the solutions, the algorithms, choosing the right tech, coming up with nice abstractions.
When doing agentic development, you need to be in control, at least for now. Every frontier model will still do incredibly stupid stuff, and if you let it cook unchallenged, you'll have a codebase that doesn't scale. Claude will happily keep piling turds upon your tower of turds, but at some point, even an LLM will have a hard time working in it.
When you are at the wheel, the engineering hasn't changed. You're still solving all the same problems, but you can iterate a lot faster. Code is now ~free, and the cost of having a bad idea is now much cheaper, because you can quite literally speak the solution out loud and fix it in a few minutes.
When it comes to employment and other people paying you to code, though, not using AI is increasingly a non-starter for most of us.
This writing is terrible and immediately put me off. The ‘superfluous’ swearing that the author seems to be proud of is going to put off a lot of his potential audience. Anyway, the ideas are nothing people haven’t read before, as far as arguments against AI and the AI industry go.
I prefer this writing style 100x over the bland AI-assisted garbage that I have to read every day.
Give me something with an opinion, personality and evidence of battle scars any day. There’s actually extra signal here that helps me process what I’m reading. When I understand where the author is coming from I can extrapolate, attenuate and compare/contrast the content with my existing mental model far better.
> These are not the same thing. You don't develop skills by reading about them. You have to use them, to process the information, integrate what you've learnt into your existing mental schema,
My mental AI detector would classify that passage as AI-generated with confidence around 85%. It would be 95% if the list had stopped at three items. Regardless of who wrote it, it's the same style.
https://www.penny-arcade.com/comic/2004/03/19/green-blackboa...
People used to drive manual. Now it’s all automatic transmission. Some cars even drive themselves.
People used to proudly use Vi to write code. But now IDEs are commonplace.
People used to write asm by hand. Transport Tycoon was written in assembly. But these days that would be insane.
Technological progress is an absolute thing. It produces too much convenience and wealth to ignore.
You want to be the delivery service that takes 2 days instead of 30 minutes to bring the pizza, just so you don't forget how to ride your horse?
Your craft can be typing out code on a keyboard; or it can be building things in the best possible way with the best available tools.
But yes this is a very extreme position.
Not the hill I would die on.