There's something so off-putting about academics giving industry advice when they haven't spent a day working as an engineer at a company.
> Care deeply about your craft. Refactor code until it is clear and elegant. Write good documentation for other humans to read. Have the courage to go slowly, especially when everyone else is telling you that you need to go fast and cut corners.
Outside of the bit on avoiding cutting corners, this advice seems like a straight path towards unemployment in a few years. The implication is that "your craft" is writing and polishing code, a skill which seems to be increasingly antiquated in favor of higher level system design. Who is going to read your carefully crafted documentation lol? The agents who replace you?
What gets me is the craft point. I've shipped more useful software in the last year than probably the previous five combined, and most of that is because I stopped treating code as the artifact and started treating the product as the artifact. The craft moved up a layer.
> until it is clear and elegant
New grads who spend weeks refactoring code are going to get lapped by new grads who ship something and iterate. There's just a faster feedback loop now.
There are a lot of ways to ship things & iterate without having any idea what you are building or doing technically, without building any taste for how things work.
Those people are going to be the absolute most dangerous possible thing you can do to a company.
Maybe some day we can just totally give up the technicals to the machine, but I strongly doubt it. Every single model is both brilliant, but also a fool, no matter how frontier it is.
Yes, the feedback loops are faster. But you need to assess what's actually technically happening. Someone does. Maybe you offload the actual thinking up the chain, delegating taste, understanding, and judgement to only the people up the chain, and make them all go mad dealing with the endless slopcoding they are being hit with. But just as bad, that junior engineer is robbing themselves too. Maybe they get away with not looking, but they sure aren't going to learn a lot.
I'm missing the link, but there was a great submission maybe a month ago about two hypothetical grad students, I think in astronomy, where one failed and flailed and did things largely the old-fashioned way, and the other used AI to get it done. The advisor couldn't really tell who was doing what. But at the end, one student had learned & gained wisdom, and the other had served as a glorified relay between the AI and the advisor and learned little. Same work output, but different human outcomes.
Junior engineers are really not that cheap. Relative to your capabilities you are not a bargain. You take a ton of valuable time from other people. If a company is hiring you, either they are truly fools lacking basic understanding, or they are in on the bargain: they want you to be getting better, and they're testing whether you can become more useful. Sure it's great to show up and have impressive output, but you need to actually be learning and growing. You need to be participating in the feedback loop actively. Or you will be lapped by people who care & think like engineers.
> Those people are going to be the absolute most dangerous possible thing you can do to a company.
I hear you, but here's the thing: the companies don't give a shit about software quality any further than it takes to keep you coming back as a customer. And it's actually been like this for a long time. They're going to hire people who can ship who-cares-how-buggy software as fast as possible. It's better for the bottom line.
And that pains my soul and pains me as a consumer (because we already had to put up with too much crap software before genAI started producing it in reams), but there's very limited money in the kind of quality you're talking about.
I hear stories from people interviewing now: the interviewers react negatively if you tell them you're working on keeping your programming skills fresh. They just want to know how many agents you can run at a time and how many lines of code you can generate per day.
Personally, I think someone skilled in software development working with genAI is going to be more productive than someone not skilled working with genAI, but I don't think that's even being selected for now.
Grim days.
The one thing that gives me hope is that every time we ask our graduates who are now in the field (and all work with AI) if we should drop classic CS education and only do AI, they all emphatically reply in the negative. Yes, we need some AI education in there, but they want the foundation, too.
This person is an educator. You should absolutely learn how to code by deep practice. You can easily learn how to use the slop machine in, I don't know, a week or something if the job demands it.
We have to ask ourselves what the purpose of refactoring is. People use that word like some magic incantation, as if the value of some particular instance of "refactoring" were self-evident. "What are you doing?" "Oh, I'm refactoring X." "-hushed tones- Ohhhh, yes, carry on, then..."
Refactoring improves code organization. It makes the code more maintainable and, arguably, more reusable. And, from an academic POV, it makes code more satisfying conceptually by aligning it with the model of a domain more clearly and conspicuously. Good stuff.
Great. Now, in industry, what matters is the result. Nobody cares if the result was produced by a witch casting magic spells or a grunt hitting a rock with another rock. Industry is practical. It cares about "craft" as far as it enables commercial success (and yes, short-term thinking can be bad, but guess what: you need to eat in the short-term!). Maintainability is a nice thing to have, because it does allow us to more quickly develop code. But how maintainable something needs to be, especially in relation to other competing concerns, has no fixed answer. It really depends on the situation.
Practical wisdom, known as prudence in the classical literature, is the foundation of all moral behavior. The right decision, the right concern, really does depend on the circumstances. You cannot derive from principles, from the armchair, what the right course of action is for everything. The general principles may be immutable and absolute and fixed, but the way in which they are applied in particular circumstances will vary.
Academia can insulate people from certain kinds of practical concerns, which is supposed to aid theoretical work, but this demands that the academic recognize his limits. He is not in a position to pass judgement on prudential matters, which is to say matters that are not strictly matters of principle, if he is not prepared to engage competently with the concrete reality of the situation.
Completely fair - but at least my PoV comes from having actually worked as a SWE, you know? I feel like the best understanding this fellow can have is purely secondhand from watching the success / failures of his students.
I also think I get doubly upset from advice like this because it’s given and marketed to impressionable young students. Even agreeing with all the moral points he’s made, I truly think this advice would set up a new grad for failure and have them focusing on the wrong skills for this market.
The bit about ignoring trends feels too head-in-the-sand for my liking :/
Fads come and go in industry. This version of LLMs will come and go as well, just like the coding languages and paradigms we used before (and, presuming you want your code to actually run, still use with some decent frequency).
Will LLMs in their current ergonomics have staying power? Perhaps. Nobody can predict the future. But I don’t think it’s a given in the least.
Automatic coding systems have way too much economic value to be considered a "fad". I don't think you need to be Nostradamus to predict that we're never going back to manual coding. Sure, the systems will evolve and improve, but they're certainly not going anywhere.
I worked at a FAANG in a senior role for around 6 years and I completely agree with the article. (I left before LLM/agent use became widespread, but I would have flamed out if it was forced upon me anyway.)
When I started studying CS, the "industry" thought students should be taught COBOL, and maybe some PL/I and Fortran, because obviously that was what the market wanted.
The point is to decide what success is for yourself. Learn everything you can about the thing you might decide to automate. But think before you automate, and about how you do so, because it could cause more harm than good.
Programming is a practical skill, and its most common expression is industrial or commercial, not academic proofs of concept. The post addresses students who will enter industry; that's the focus of the professor's own post.
And I sympathize with many points being made here. However, the point about refactoring code is somewhat odd and detached from the real-life constraints of programming in the wild.
Like, sure, in the ivory tower, you can confine yourself to nicely bounded problems and tidy little toy POCs. You can survive doing those things, because the selective pressures allow for it. I love those things, personally. They help me understand the nature of the thing. And in an academic setting, you can refine and refactor the hell out of those things to your heart's content (not that there is necessarily an objective end point to refactoring; code organization is subject to goals and constraints which can shift around).
But the reality of software in a commercial setting is not the tidy one you can expect in an academic setting. It's messy, subject to commercial pressures, to a hierarchy of values that doesn't place "refactoring" at the top of the list. And why would it? Whether you should refactor something is not just a question of whether it suits your conceptual tastes or even whether it is more maintainable. Unlike algorithms and principles and even techniques, software is not eternal. It is ephemeral. Its shelf life is bounded. It is a piece of a larger business process. You're not refining some theory or some grasp of a Platonic ideal. You're mostly just putting into place plumbing to get something done. Whether you should refactor something, when you should refactor something, is a matter of prudential judgement, which is to say, of practical reason.
So, in light of that, some of these are actually quite absurd things to say given the difference between the privilege of academia and the gritty reality of industrial and commercial software development. If we were to force our professor into the world of industry, he would quickly lose his job, or he would quickly learn that some of his strange idealism is silly and detached from the reality that his students will face.
> It's messy, subject to commercial pressures, to a hierarchy of values that doesn't place "refactoring" at the top of the list. And why would it?
Probably because it's a good way to be more profitable.
Code that's easier to understand is easier to maintain, extend with new features, debug, and onboard new engineers onto.
Code that's well written executes faster (saving computational costs), scales better, is more robust with higher uptime, reduces bandwidth, and so on.
The thing is the business people will never understand this. Why would they? They're not programmers. They're not in the weeds. But that's what your job is as an engineer. To find all these invisible costs.
I'm pretty confident the industry is spending billions unnecessarily. Hell, I'm sure Google alone is wasting over $100m/yr due to this.
Don't be penny wise and pound foolish. You're smarter than that. I know everyone here is smarter than that. So don't fall for the trap.
I am well aware of stupidity in industry. However, I am also wise enough to recognize the opposite error. (I myself have academic tendencies and a background aligned with that. I have chosen jobs that paid less, because the subject matter was more interesting for me. I'm not some vulgar, money-chasing techbro here.) The via media demands that we recognize the distinction between general truths and practical realities. As I wrote elsewhere in this thread, yes, properly refactored code is easier to maintain, easier to read, easier to change, and theoretically, commercially preferable. It also makes programming more satisfying, helping retention. But that describes a feature of such code. It doesn't tell us what the right course of action is in a particular situation. The notion that refactoring is unconditionally the right course of action when code is not in some ideal state is simply wrong. It really does depend on the situation. Sometimes, refactoring is the wrong thing to do.
I'm not making some outrageous claim here. This follows from basic truths about the nature of what it means to be practical, and if industry is anything, it is practical.
Completely agree that it's off-putting. The author indeed has only ever worked in academia per his LinkedIn.
But disagree that this is a path to unemployment. At work we go very fast and yet I think fast is compatible with each of those points, just not in all situations.
Marc Brooker, distinguished eng at AWS, gives much more useful advice for industry, as you'd expect given his almost 30 years in industry.
From that guy's LinkedIn, he was in academia and then at AWS. I guess it's better than the professor, but hardly someone who knows the ins and outs of the industry. For that you need someone who has had a multitude of jobs at various different types of organizations.
I sense that the frustration you feel is that professors are able to make choices based on their values, but the average person is not. That is broadly speaking, of course.
I think it is a great shame that we live in a modern world where we do what we must to survive regardless of how it makes us feel. I suspect it is the root of much suffering.
Seriously. This thread is so depressing. It's like the entire software industry has given up and just accepted "increase speed forever at any cost" as some kind of iron law of software employment. Is nobody even pushing back anymore? Even offering token resistance? The 'bros have truly won. Our only imperative now is "Can we crush it in the market?"
I think you are making exactly his point. Practicing code as a craft, caring about how you do it, how well you do it, and what it’s ultimately used for is, as you correctly point out, not going to bring you profit or employment.
So maybe there’s something wrong with how we organise work?
I do sympathize with the viewpoint that many academics are not in a position to give good advice about industry since many of them either never worked in industry or had limited exposure via internships. Additionally, the values of academia are sometimes different from industry. Academia, at least in its purest form, is about advancing and disseminating knowledge, while industry is about serving customers through providing products and services.
With that said, I discovered that I’m an academic at heart after nine years in industry, though I left right before agentic coding took off. I got tired of “moving fast and breaking things,” of prioritizing shipping things and “the bottom line” over everything else.
With that said, agentic coding, in my opinion, only amplifies long-standing trends, that shipping matters more than craftsmanship. Even without LLMs, software engineering has long had a “git ‘er done!” attitude. To be fair, market effects matter greatly in software businesses. Quality matters insofar as avoiding completely unusable software, but many software companies succeed without building carefully-crafted software. Even Apple, which has a reputation for being perfectionistic, doesn’t make perfect software.
Academia has its own problems (publish-or-perish, low pay compared to other occupations that require heavy investments in education, politics, etc.), but it seems to allow more breathing room for computer scientists to focus on the craft of programming without as much pressure to ship (publish-or-perish aside).
I hope this is a pun on the content management system used to publish OP. It's forester[0], written in OCaml; it parses TeX-like .tree files into semantic XML, which the browser renders to HTML via XSLT.
View source on the page to get an idea.
Reminder of what the idealised web promise from decades ago was. Long gone. Very apt.
Doesn't matter who reads it. The point is that you will probably never learn to do "high level system design" well if you do not have enough experience writing and refactoring code yourself. It's like wanting to become the chef of a kitchen and giving instructions without ever having prepped food.
There is indeed something useful about trying to write elegant code. Not because others read it, but because that's how you learn about the engineering tradeoffs and abstractions that exist everywhere.
He is not giving advice to the industry, he is giving advice to aspiring programmers and computer scientists. He has no experience in industry, but has produced lots of high quality software and research.
The author stated your concerns at the beginning of his post. He prefaced his post saying what the industry wants is the antithesis of what he believes in.
I generally agree with what he stated. We should clearly define our moral and technical redlines. Lines we will never cross because they will be tested every day.
Oh that’s such a high horse position lol - I try and learn as much as possible every day by shipping fast and profitably. Learning to be successful in industry is a completely valid (and common) goal.
I agree, some of this is awful advice for an entry-level engineer:
> * Cultivate your ability to think deeply. Do whatever it takes to carve out distraction-free bubbles for yourself in both space and time. This might mean saying no to technologies or patterns of working that others say are critical or inevitable.
An entry level engineer is going to be inundated with a lot of technology they've never heard of and a lot of power structures and group dynamics that are new to them. They're not even in a position to be making these judgements until they actually learn about how professional software development actually works.
> * Be intentional about deciding your own moral and ethical boundaries up front. Don't settle for the lie of compromising your principles "just for now" until you can find something better.
That's great, but also, there are not many entry level roles where someone is going to be in a position to be making these kinds of decisions, other than avoiding a company altogether.
> * Care deeply about your craft. Refactor code until it is clear and elegant. Write good documentation for other humans to read. Have the courage to go slowly, especially when everyone else is telling you that you need to go fast and cut corners.
Yikes. A software engineering job is not a PhD program. If you are refactoring your code and someone is telling you to hurry up, you should probably wrap it up. You need to ship your code or you won't have a job.
Sounds to me like someone who enjoys programming as an intellectual pursuit, as a craft, as an art. I suspect there are more than a few students in the CS program that also feel that way. Clearly they're the intended recipient.
If programming is all about making the most money then by all means disregard everything he says.
Why do you think this is industry advice? I can't find anything here that indicates that it's the case? Maybe they just feel this is the right thing to do.
The stated audience is his students, who are "imminently going out into the world or continuing your studies" (the "world" here being the software industry he refers to).
Education is distinct from industry. The point of education is understanding and knowledge. The point of industry is practical effect and production. The aims are not the same.
And you can understand the principles governing something without knowing all the concrete particulars of an instantiation. In fact, you rarely do.
I know what you are saying. But, almost every major issue I've run into with various teams writing software in production required knowledge of all those particulars to fix.
I also believe learning the basics is essential before reviewing someone else's work. Whether that work is done by a human or machine.
The engineer who only does high level system design and never codes has existed for decades and is often the most useless and derided engineer in the org.
“A good way to describe myself is as a generative AI vegetarian. You can find a fuller explanation—and many, many links—at the above essay by Sean Boots, which I agree with almost 100%.”
---
Given the capabilities of upcoming LLMs, I suspect that by mid-2027, most competent companies, outside specific niches, will not hire and might fire any non-senior “generative AI vegetarian” software developer.
Probably another viral marketing campaign to further pressure those Meta employees to have their in-office flatulence levels monitored with probes as they are pushed to vibe-code more features faster.
> Who is going to read your carefully crafted documentation lol?
Everyone that uses or works in your codebase.
Look at how people use LLMs these days. People frequently use them on new codebases to get up to speed on the code, frankly because it's a lot faster than grepping, profiling, and all the digging we'd normally do (though those still have benefits, and you're still going to do them. Hell, the LLMs even do them). But how much of that could have been avoided had people just taken a few seconds to document their code? No one is saying sit down and document the whole thing, but "add a few comments when you add new functions" or "update comments in places you touch". If it costs you more than a minute of your time, you're probably doing it wrong.
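The habit being described can be as small as a docstring written alongside the function. A minimal sketch (the function and its behavior here are hypothetical, just to illustrate how little it costs):

```python
def normalize_scores(scores: list[float]) -> list[float]:
    """Scale scores so they sum to 1.0.

    Returns an empty list for empty input. Raises ValueError when the
    scores sum to zero, since normalization is undefined there.
    """
    if not scores:
        return []
    total = sum(scores)
    if total == 0:
        raise ValueError("scores sum to zero; cannot normalize")
    return [s / total for s in scores]
```

Thirty seconds of writing, and the next reader (human or LLM) doesn't have to reverse-engineer the edge cases.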
I'm tired of these arguments. People are turning molehills into mountains. It's so incredibly myopic. We waste so much fucking time on things because we're trying to move fast. But no one seems to understand the difference between speed and velocity. It never mattered how fast you go, it has always been about velocity. Going fast in the wrong direction is harming you, not helping. If you don't have the time to know if you're headed in the right direction or not then you're probably not.
> Outside of the bit on avoiding cutting corners
But your gripe is with cutting corners. Not documenting? That's cutting corners. Not refactoring? That's cutting corners. Not spending time understanding the code at multiple scopes? That's cutting corners.
Those are all corners cut that end up wasting tons of man hours. Sure, they save you a few precious seconds or minutes now, but at the cost of hours or days in the future.
Here's the thing: if you don't take those shortcuts, then none of those tasks are hard. Even refactoring. But as soon as you start taking those shortcuts they start compounding. Then a year down the line your company is writing a blog post about how your code is 500x faster now that it's written in Rust (or whatever the cool kids use). If it's 500x faster, that's not because of a language change, it's because of tech debt. And like all debt it accumulates little by little, and it's the compounding interest that really kills you.
Sorry, I'm tired of cleaning up everybody's messes. Go ahead, move fast and break things. It's a great way to learn (I do it too!), but don't make others clean up your mess.
Stop buying into this bullshit of needing to move so fast. It's the same anti-pattern scammers use to get you to make poor decisions. Stop scamming yourselves
this resonated for me, quite hard actually. there's the famous quote which has always stuck with me on this stuff: slow is smooth, smooth is fast.
thinking about it a little more, i would personally prefer to use the term momentum rather than velocity or just plain speed -- we accrue more mass by adding code, features, etc. and shifting direction/increasing speed are both harder with greater mass.
This has been my experience with academia also. I have an MBA (gasp!) and the best profs were the ones who had real world experience.
Despite the common rhetoric you see in HN comments about how MBA programs only teach graduates how to cut costs by enshittifying, I actually found it a great education that made me a better engineer.
Anyway,
The best profs were the ones who'd worked in industry. One guy who taught finance worked on Wall Street and was fond of distinguishing between how the textbook taught a particular technique or fact, and how practitioners actually do it in real life. Got taught startup valuation by a guy who'd been a VC, competitive strategy by a guy who was a strategy consultant for companies you'd actually heard of, etc.
The worst profs were the ones like the guy who taught operations. He'd never worked a real job. Went straight from being a student to being a TA to a postdoc to a "research prof", whatever that means. All his examples and case studies were useless or overly simplistic to the point of being useless.
The fact that TFAuthor is concerned with polishing one's craft shows they're completely divorced from what actually happens outside the ivory tower. Typing code into a buffer has never been the hard part.
wtf w/ the <200 karma shit posters being top comments.
I think there is credence to his points.
Sadly, a childhood friend who teaches C/C++ at a community college where I grew up (I took courses there before college, though not his) would be a great sounding board on this.
And to the poster's qualm about deeper knowledge, AI does not know nuance. It's great for a lot of things... nuance is not one of them.
Do the people in this thread dunking on this article think that moving and delivering as quickly as humanly possible, just because it can physically be done now, is going to lead to positive results in the long term? There is a vast gulf of difference between the current industry climate and what this professor is encouraging his students to consider. I don't see how caring about the craft is mutually exclusive with delivering good products that create value. If you think slowing down a bit is going to lead to you taking a trip to the poorhouse, maybe you should examine your own anxieties and perceptions of what's happening around you and whether this current pace is sustainable for everybody.
Obviously it's up to the practitioner to figure out how to make commercial imperatives and craftsmanship align. Maybe remembering that professor's lesson will lead to better outcomes for humanity on a timeline greater than the next quarter. Who knows. I'm just an idealistic 20+ YoE nobody being left behind at this point with nothing of value to contribute.
> Be intentional about deciding your own moral and ethical boundaries up front. Don't settle for the lie of compromising your principles "just for now" until you can find something better.
my uk mechanical engineering bachelors degree had a required module on the ethics of engineering which has always stuck in the back of my mind. i think we went over the bhopal disaster as a case study one week, although it was about 16 years ago now so i can't be sure.
i've rarely seen any ethics modules in computer science departments, at least here in the uk. and i think we sorely need them in general.
edit -- so i guess it's a UK thing xD though i am glad to hear that you folks in the US enjoyed your ethics modules too
As others have said, my comp sci degree also had a required ethics course. But it’s also pretty silly to think that a single ethics course where people don’t pay attention is going to change the hearts and minds of students. No amount of discussion about Therac-25 is going to make someone question if they should really be working for Palantir or Raytheon.
In my computer engineering undergrad ~8 years ago in the US, an ethics class was mandatory, but IIRC the CS curriculum did not have it, despite both leading to similar careers. My memory may be wrong though.
Edit: they do seem to have one now, so either I remembered wrong or they added it.
Edit 2: I remember enjoying my ethics class, we covered some of the usual examples, and also things like basic contract negotiations. But I think I still didn't register that these concerns were real at that time. It was easy to believe that I wouldn't be working on anything that impactful. This did change once I started work.
> But I think I still didn't register that these concerns were real at that time. It was easy to believe that I wouldn't be working on anything that impactful. This did change once I started work.
The case study i mentioned (it may not have been bhopal, but it was definitely based on something that happened in india) stands out for me because it really hit home about the impact and seriousness of some decisions we could end up making.
There was another time I remember the lecturer making a point of saying there was no single correct answer about something, which caused a lengthy discussion. We would have to figure out what's right/wrong for ourselves going forward. That really stuck with me.
I was thinking about it differently. I understood the potential harm on paper, but I think I was still pretty immature. I thought I would be willing to put aside morals (eg working for companies like Palantir) to work on interesting cutting edge things.
But when I started working and found myself doing equally cutting edge research, but genuinely for the public benefit, I realized I definitely wouldn't be comfortable with putting aside my morals like that. Maybe I didn't really believe this was an option back then.
Yeah I was wondering about that… I got one, but prolly only because my uni put CS under the engineering school.
I don’t think scientists usually have mandatory ethics classes and mathematicians certainly don’t, so if it falls under either of those departments it might’ve gotten skipped!
Every ABET-accredited CS program (almost every CS program in the US, I think?) requires an Ethics in Computer Science credit. I remember going over a lot of case studies, including Therac-25, but our course also included a lot of general grounding in ethics and philosophy as well, which I enjoyed a lot.
ah, fair enough! maybe it is/was a uk thing (admittedly times might have changed a little since i did my masters/phd).
at the very least i have a wikipedia article on therac 25 to read through now. so thanks for that!
also, yea i remember really enjoying the ethics module too. lots of discussion and not always a clear answer. was very different to the rest of the "one correct maths answer" in a lot of the other modules.
I went from being a largely self taught software dev with a small 1-man software business to working as a nurse in the US, and a lot of the motivation to make that change was that I wanted to spend my time doing work that I felt genuinely made the world better. Tech has incredible potential for good but the actual industry itself in my eyes has extremely perverse incentives and no strong moral foundation like that which exists in nursing/healthcare. Nurses broadly consider themselves to be patient advocates and the voice for people who often can't have their own voice. As you can imagine, this culture is not in line with the modern pursuit of healthcare profits but yet nurses stay fighting the good fight. I see these battles play out nearly every day I go to work and while it's usually done professionally these are real battles with jobs on the line.
In a perfect world I think the software industry would have instilled these same virtues- software is just as (or more) capable of causing harm as poor healthcare. Yet we seem to be racing to a dystopian future at record speed courtesy of the tech industry, and our modern egalitarian societies will not survive that transition.
My Computer Engineering degree had an "ethics" course (really a course on "engineering communications", but it was considered to satisfy the ethics requirement for graduation). It was a semester on how to file memos, cargo-cult your resume, and tell recruiters what they wanted to hear. Not a word was said about considering the implications of the things you're hired to build. When defense contractors took over the entire ground floor of the engineering building to hold a recruiting fair, we were encouraged to go.
The only time ethics in engineering was ever mentioned to me was in a class on applied number theory (cryptography), taught by a professor who had previously worked for the EFF. He went off-topic to tell us that many problems, like how to hit a target with a missile, may fascinate and compel us as engineers, but we shouldn't let that distract us into building instruments of death.
That course was an elective, and it was entirely possible to complete my degree without hearing a single mention of ethics.
There are many reasons I look back on my academic experience with disdain, but this one stands out to me.
The '90s weren't perfect, but it felt more idealistic to me, with the rise of open source software. People thought about ethics a bit more. It felt like the ultimate tide rising to empower people locally on their own computers, and that tide has been going out for some years. A bit with cloud computing, and now a lot more with LLMs. And the company a lot of SV people keep these days is pretty gross.
FWIW: I had a mandatory ethics class in my US program (Vanderbilt, a rich private school in the American south). It was mandated for all engineers AFAIR, and taught by an engineering prof.
Pretty good experience, too! Sometimes got distracted with general tech ethics rather than strictly professional ethics, but tbf that’s a very fun+timely topic
No disrespect to the person, but this seems to be written by someone who has spent their life in an academic bubble, without having to deal with people and entities with diverging interests and the impact of time on any decisions. I'm sure many artists would love to spend more time perfecting their art, based on their subjective interests. However, if they prioritize that without understanding what their customer wants, they will go bankrupt. Nourish your interests through your hobbies. If they align with money-making capability, you are one of the lucky few. For a significant majority, they do not align.
"I do not and will not use LLMs, in any form, for any purpose. Although LLMs are fascinating from a purely technical perspective, I refuse to participate in or contribute to such systems that are built on massive exploitation of human labor and make profligate use of scarce resources. I also don't think they are actually very good for a lot of the applications people seem excited about. Even in cases where LLMs are technically good at a task, that does not necessarily mean their use for that task contributes positively to human flourishing.
A good way to describe myself is as a generative AI vegetarian. You can find a fuller explanation—and many, many links—at the above essay by Sean Boots, which I agree with almost 100%."
I remain hopeful that some day someone will train an LLM which is tolerable to people who take this stance (which I respect, much like I respect food vegetarians despite not being one myself).
I've been tracking models trained entirely on out-of-copyright data, for example. I've not yet seen one of those which appears generally useful and didn't chuck in a scrape of the web or get fine-tuned on examples generated by a non-vegetarian model.
Andrej Karpathy can train a GPT-2 class model for less than $80 now, so at least the environmental cost of training may drop to a point that it's acceptable to LLM vegetarians: https://twitter.com/karpathy/status/2017703360393318587
Why do I care? This post is a great example. If you're a professor of computer science I really want you to be able to tinker with this fascinating class of models without violating your principles.
I've explored a different out-of-copyright-trained model, Mr Chatterbox, before, but found it to have been mildly corrupted through the help of synthetic conversation pairs from Haiku and GPT-4o-mini - https://simonwillison.net/2026/Mar/30/mr-chatterbox/
Talkie isn't entirely pure either though: "Finally, we did another round of supervised fine-tuning, this time on rejection-sampled multi-turn synthetic chats between Claude Opus 4.6 and talkie, to smooth out persistent rough edges in its conversational abilities."
I don't get why it's so hard for you and others in this comment section to understand why people hate AI so much. It's not just the theft and environmental destruction. A college professor, especially one at a liberal arts school, is obviously not going to like something that enables you to outsource your thinking and steals your agency. I think that's a perfectly valid viewpoint; maybe talk to someone without STEM-brain who lives outside of SF for once.
I don't need computer science professors to like LLMs, but I still want them to be able to poke at them with a stick without feeling like they are violating their principles regarding energy usage and unlicensed training data.
> I don't need computer science professors to like LLMs, but I still want them to be able to poke at them with a stick without feeling like they are violating their principles regarding energy usage and unlicensed training data.
Why? Language models are interesting from a technical perspective, but so are tons of areas of CS. There's nothing inherently virtuous about using an LLM.
> Andrej Karpathy can train a GPT-2 class model for less than $80 now, so at least the environmental cost of training may drop to a point that it's acceptable to LLM vegetarians: https://twitter.com/karpathy/status/2017703360393318587
I suspect that even if you reduced the cost of training or any other real world metric, the goalposts would immediately move. It seems to me that it has never been about those things, but simply about the feeling of superiority one can attain by eschewing something seen as trending.
> built on massive exploitation of human labor and make profligate use of scarce resources
This kind of hyperbole, repeated ad infinitum by haters online, is not constructive, IMO. I would be quite certain that the manufacture of whatever computing device the author is accessing the internet on used far more resources and exploited far more human labor than training an ML model ever did.
"where technology is used to distract, extract, surveil, and kill"
The first general-purpose, programmable computer was designed in 1945 to calculate artillery firing tables for the US Army and was immediately used to help design nuclear weapons. Computers, and all technology, have always been, and will always be, used as weapons (either directly or indirectly).
'Cultivate your ability to think deeply. Do whatever it takes to carve out distraction-free bubbles for yourself in both space and time.'
I find that when I get back into exercise and reading so much more of my life falls into place. These are things that I never have enough time for until I start doing them regularly at which point I realize that they actually enable me to have more time to do things, not less.
It is very weird how that happens. I hardly expected starting a marathon training program to drastically increase my day to day energy. But here we are.
Prof. Yorgey has done some great work over the years, and wrote one of my favourite papers*. Good on him for speaking up like this. I saw an engineer from Anthropic speak at my alma mater a little while ago and the overwhelming impression I took away from the session was, "if Anthropic are meant to be the good ones, we're really going to be in for a rough time."
>Cultivate your ability to think deeply. Do whatever it takes to carve out distraction-free bubbles for yourself in both space and time. This might mean saying no to technologies or patterns of working that others say are critical or inevitable.
Currently struggling hard to achieve this. We all know everything fights for our attention nowadays, but I can assure you that you don't have an idea of the degree this happens until you actively try to fight it.
I agree with the points made, even if personally I am okay with LLMs (as long as they're used with appropriate caution).
Especially relevant for students I think, since they are hurting themselves most by relying on LLMs. Just like how young children are forced to do math by hand instead of using calculators to build intuition and memory, students should aim to do things manually to build their skills.
Go make that toy website, game, OS, emulator or programming language. Read specifications and try implementing them yourself. You aren't in an environment that requires you to churn out features, you can explore!
I really love the encouragement. Honestly it resonates a lot with me. It shows that the craft itself is still beautiful, you just need to find the right people to mingle with.
But the real world and money blended in create a weird, corrupt mix, just like everything. Not to mention there is a real risk for people who already have their feet in the industry but are not yet senior enough to survive or control, for example, the AI replacements. And more than likely, the seniority required is way higher than one would think. In the end, economic drives are the dominant forces.
The fact of the matter is that "the craft" is beautiful when you are free to work on academic projects that are concerned with knowledge. In practice, industrial and commercial code is rarely that beautiful. Look at the offering of dev tools designed to rein in the ugliness and help manage the chaos. I'm sort of old school in this regard, but for some time now, many devs have relied on all sorts of tooling to write the code, tooling that removes a layer of contact with the manual processes of programming and so forth.
It's important to distinguish between the practical and the theoretical. The flippant answers of "idealists" refuse to engage with the messy domain of facts, because it is aesthetically offensive or challenges their comfort or their nostalgia. The steam engine wasn't inevitable either, but people did choose it. How many today in this forum grumble about the loss of a world when the steam engine replaced old ways of working? The next generation won't have these sorts of hangups, just as we don't have them about steam engines. Or, if you like, how many pine for the days of assembly programming?
When something proves to be too useful industrially to opt out of, then it will be adopted. People will choose it. If you want to be Amish, go for it, but most people don't.
I think it fits. Look at the anonymous posts in here, the sheer volume of posts saying this person is failing their students, is a relic, a Luddite, etc.
He put his name and career on it. That takes courage in my opinion.
By understanding computers and enjoying the field you are in, you will be more skilled than someone who says "tests pass", "worked on my machine", "maybe it's a good idea to run agents on my company's live prod database". Anyone can learn to slop it up, including someone who is passionate about writing code as a hobby.
I spent 20 years in industry before moving to academia, and this resonates for me. I'm not naive enough to think that we'll do the right thing here, but I can dream.
I use LLMs, but LLMs need constraints, and it's a human's job to make the constraint rules. Why? Because in the end, humans use the software products. An LLM can review code when a human prompts it to review the code; an LLM can pretend to be a customer when a human prompts it to. My thought is that an LLM is like a big library; that's why a human needs to decide the constraints. A human can say no when the LLM gives them code. A human can move fast and break things when the LLM gives them code.
Site is struggling a bit, so here's the text of the essay if it doesn't load for you:
To my students
April 27, 2026
Brent A. Yorgey
There have been times, especially this year, when I wonder despairingly what it is exactly that I am preparing you for. The software industry is going completely insane, not to mention the political climate. It feels almost unethical to train you as computer scientists only to send you out into a world where entry-level computing jobs are difficult to find; where intellectual property is not respected; where code quantity is valued over quality, and short-term profits over long-term sustainability; where technology is used to distract, extract, surveil, and kill, and designed to exploit some of our deepest cognitive biases and blind spots; where centuries of bias and discrimination are enshrined in systems trained on biased data; where scarce resources are consumed by profligate use of computing for uncertain benefits; where people are racing to create intelligent machines, but only in order to make them slaves.
I originally got into computing because of the beauty of ideas, the joy of creating, and the possibility of building tools to help people and foster human relationships. I still believe in those things, even though it seems like most of the industry does not. I'm writing this in the hope and knowledge that you believe in those things, too. There are things I want to say to you—things that are far more important than any content I might teach you, but things I'm never quite sure how or when to say in class. So I decided to write them here. I hope you will find something here that is helpful to reflect on, whether you are imminently going out into the world or continuing your studies.
* Don't believe self-serving lies about technologies being "inevitable" or "here to stay". You don't have to just go along with the dominant narrative. You can make deliberate choices and help others to do the same.
* Be intentional about deciding your own moral and ethical boundaries up front. Don't settle for the lie of compromising your principles "just for now" until you can find something better.
* Cultivate your ability to think deeply. Do whatever it takes to carve out distraction-free bubbles for yourself in both space and time. This might mean saying no to technologies or patterns of working that others say are critical or inevitable.
* Care deeply about your craft. Refactor code until it is clear and elegant. Write good documentation for other humans to read. Have the courage to go slowly, especially when everyone else is telling you that you need to go fast and cut corners.
* Care more about people, relationships, and justice than you do about profits, code, or productivity.
* Above all, be motivated by love instead of fear.
What an incredibly nihilistic thing to communicate to your students. “Our society is failing. I’m terrified, and you’re probably fucked. Love is love tho <3!”
Why not encourage your students to be curious about emerging technology, and to engage with society as an informed citizen?
This reeks of political activism, and it’s reminiscent of the general BlueSky-esque tone of the Correspondents Dinner shooter’s manifesto.
> It feels almost unethical to train you as computer scientists only to send you out into a world where entry-level computing jobs...
lol.
We millennials are in a position to start giving advice the way boomers used to do with us, now that school is looking more like a couple decades ago instead of just one.
But, unlike those boomers, we don't watch the nightly news: we snort it from a tiny screen all day long from sources hyper engineered to feed off our anxiety.
So we give all this super pessimistic advice.
"Back in my day, I got a job at google right after college and it was awesome! My code was elegant! You guys are FUCKED!"
I agree that AI is creating mega changes, many very bad, but that doesn't mean that it's a good idea or even true to tell GenZ people they're fucked. We don't know if they're fucked.
I think they could have a ton of fun with software and I think it's OK to be encouraging about that.
From an information theory perspective, LLMs are just regurgitating content from a lossily compressed training set.
It just turns out that like 95% of software we write is extremely repetitive rehashed shit globbed together. We just haven't found ways to abstract a lot of the redundant code well enough yet so here we are, stuck with the stupid robot.
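To make the (admittedly loose) compression framing concrete, here's a toy sketch using lossless zlib as a stand-in, since we obviously can't inspect an LLM's training set here: highly repetitive "code" compresses to almost nothing, while genuinely novel bytes barely shrink. This is illustrative only, not a claim about how LLMs actually store training data.

```python
import os
import zlib

# Repetitive input: the same boilerplate-ish line repeated 100 times,
# mimicking the "rehashed shit globbed together" case.
repetitive = b"def get_user(id): return db.query(User, id)\n" * 100

# Novel input: random bytes of the same length, mimicking content with
# no redundancy for a compressor (or a model) to exploit.
novel = os.urandom(len(repetitive))

ratio_rep = len(zlib.compress(repetitive)) / len(repetitive)
ratio_new = len(zlib.compress(novel)) / len(novel)

print(f"repetitive input compresses to {ratio_rep:.1%} of original size")
print(f"novel input compresses to {ratio_new:.1%} of original size")
```

The repetitive input shrinks by orders of magnitude more than the random one, which is the intuition behind the comment: the redundant 95% is cheap to "store and replay," the novel 5% is not.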
That remaining 5% is stuff that's fully never been done before. If you ask an LLM to come up with a fully new sorting algorithm it's going to give you worthless garbage; maybe it'll get lucky if you burn a nuclear power plant's worth of tokens in an infinite-keyboard-monkeys way.
All this is to say, if we want the field to actually progress we still need somebody with some knowledge about how a computer actually works.
For context, the author is an anti-AI radical. Maybe justified, maybe not — but definitely explains this essay.
The author is getting some grief in this thread from the Eng side, but I’d like to add a bit of grief from the direct opposite side: the philosophical one. It will never not baffle me to see academics assume they are the first people to ever think about topics like ‘what if technology was used for ill’!
The academics teach philosophy. A professor's job is to profess, after all.
I don't think he believes he is the first or only one to think this. He is just safe enough or at least hopes he is to speak out against the ills of technology. Do you know how many engineers cannot speak up right now for fear of losing their jobs? Lots.
> Have the courage to go slowly, especially when everyone else is telling you that you need to go fast and cut corners.
I've been struggling to figure out what "slower" would look like when working in industry. If everyone's working 2x faster, how do you slow down meaningfully without getting axed?
As I got older and more experienced, I didn't produce code faster. I just produced the right code. If you don't have to try five different things, and debug them along the way, you can be a lot faster without "going fast".
I've seen people work very quickly to create vaporware. I've seen people spend a week to change 2 lines of code and save a release. I don't know how people who practice engineering haven't seen these types of things happen.
I've even seen a guy spend most of his work hours as a mentor even though his title was something like senior engineer. If anyone fired him that company would tank so fast...
provided you have the financial freedom to, don't apply to jobs where this mentality is rewarded.
After getting my CS degree I deliberately went into a sector where I suspected this kind of attitude doesn't exist (defense in my case) because already then I felt the whole web/startup culture had very little to do with software engineering.
Growing up in a blue collar family, this smacks of the plumber on the job site complaining the shape of his piping wasn’t pretty enough and demanding extra time and pay to make it pretty for pretty’s sake.
Just get it to work reliably the cheapest and quickest way possible. This ‘craft’ stuff is just too much.
I lucked into starting very early what I planned on doing in retirement, which is teaching college; as a result I did that for a while with no real life experience. Later, I ended up at the same time starting a company for a family-related (but kind of big-time) web project.
And while I don't have a problem with career instructors/academics generally, they can be so dramatic. :)
I have no doom and gloom at all for my IT students. Opportunities and crises really are the same thing in the real world; I just tell them, just learn and enjoy learning the tech and keep an eye out for how you can be a problem solver.
It doesn't matter what we think, what ethics we have, because if we won't do what the evil company asks for management will just find an H-1B slave from the third world who will.
We need to discontinue the H-1B visa and have Americans programming again. Americans who are empowered to push back when management crosses an ethical line.
I doubt people using LLMs aggressively today and not understanding what the LLM is doing or why it works (or doesn’t) are positioning themselves for success. How long can one learn nothing before they fall behind those who kept learning?
> Care deeply about your craft. Refactor code until it is clear and elegant. Write good documentation for other humans to read. Have the courage to go slowly, especially when everyone else is telling you that you need to go fast and cut corners.
Outside of the bit on avoiding cutting corners, this advice seems like a straight path towards unemployment in a few years. The implication is that "your craft" is writing and polishing code, a skill which seems to be increasingly antiquated in favor of higher level system design. Who is going to read your carefully crafted documentation lol? The agents who replace you?
If a tree falls in the forest...
> until it is clear and elegant
New grads who spend weeks refactoring code are going to get lapped by new grads who ship something and iterate. There's just a faster feedback loop now.
Hiring those people is the absolute most dangerous possible thing you can do to a company.
Maybe some day we can just totally give up the technicals to the machine, but I strongly doubt it. Every single model is brilliant but also a fool, no matter how frontier it is.
Yes, the feedback loops are faster. But someone needs to assess what's actually happening technically. Maybe you offload the actual thinking up the chain, delegate taste and judgement only to people above you, and drive them all mad dealing with the endless slopcode they're being hit with. But just as bad, that junior engineer is robbing themselves too. Maybe they get away with not looking, but they sure aren't going to learn a lot.
I'm missing the link but there was a great submission maybe a month ago about two hypothetical grad students, I think in astronomy, where one failed and flailed and did things largely the old fashioned way, and the other used AI to get it done. The advisor couldn't really tell who was doing what. But at the end, one student had learned & gained wisdom, and the other had served as a glorified relay between the AI and the advisor and learned little. Same work output, but different human outcomes.
Junior engineers are really not that cheap. Relative to your capabilities you are not a bargain. You take a ton of valuable time from other people. If a company hires you anyway, either they are truly fools lacking basic understanding, or they are in on the bargain: they want you to be getting better and are testing to see if you can become more useful. Sure, it's great to show up and have impressive output, but you need to actually be learning and growing. You need to be participating in the feedback loop actively. Or you will be lapped by people who care and think like engineers.
I hear you, but here's the thing: the companies don't give a shit about software quality any farther than it takes to keep you coming back as a customer. And it's actually been like this for a long time. They're going to hire people who can ship who-cares-how-buggy software as fast as possible. It's better for the bottom line.
And that pains my soul and pains me as a consumer (because we already had to put up with too much crap software before genAI started producing it in reams), but there's very limited money in the kind of quality you're talking about.
I hear stories from people interviewing now--the interviewers react negatively if you tell them you're working on keeping your programming skills fresh. They just want to know how many agents you can run at a time and how many lines of code you can generate per day.
Personally, I think someone skilled in software development working with genAI is going to be more productive than someone not skilled working with genAI, but I don't think that's even being selected for now.
Grim days.
The one thing that gives me hope is that every time we ask our graduates who are now in the field (and all work with AI) if we should drop classic CS education and only do AI, they all emphatically reply in the negative. Yes, we need some AI education in there, but they want the foundation, too.
Refactoring improves code organization. It makes the code more maintainable and, arguably, more reusable. And, from an academic POV, it makes code more satisfying conceptually by aligning it with the model of a domain more clearly and conspicuously. Good stuff.
Great. Now, in industry, what matters is the result. Nobody cares if the result was produced by a witch casting magic spells or a grunt hitting a rock with another rock. Industry is practical. It cares about "craft" as far as it enables commercial success (and yes, short-term thinking can be bad, but guess what: you need to eat in the short-term!). Maintainability is a nice thing to have, because it does allow us to more quickly develop code. But how maintainable something needs to be, especially in relation to other competing concerns, has no fixed answer. It really depends on the situation.
Practical wisdom, known as prudence in the classical literature, is the foundation of all moral behavior. The right decision, the right concern, really does depend on the circumstances. You cannot derive from principles, from the armchair, what the right course of action is for everything. The general principles may be immutable and absolute and fixed, but the way in which they are applied in particular circumstances will vary.
Academia can insulate people from certain kinds of practical concerns, which is supposed to aid theoretical work, but this demands that the academic recognize his limits. He is not in a position to pass judgement on prudential matters, which is to say matters that are not strictly matters of principle, if he is not prepared to engage competently with the concrete reality of the situation.
I also think I get doubly upset from advice like this because it’s given and marketed to impressionable young students. Even agreeing with all the moral points he’s made, I truly think this advice would set up a new grad for failure and have them focusing on the wrong skills for this market.
The bit about ignoring trends feels too head in the sand for my liking :/
Will LLMs in their current ergonomics have staying power? Perhaps. Nobody can predict the future. But I don’t think it’s a given in the least
Which is why they very carefully worded it more as 'LLMs in their current form', twice.
I recognize not everyone's work is [as] important, but we should still strive for excellence (and safety.)
The point is to decide what success is for yourself. Learn everything you can about the thing you might decide to automate. But think before you automate, and about how you do so, because it could cause more harm than good.
To me it was actually not clear what his point was.
"Above all, be motivated by love instead of fear."
Sounds great. But not that practical.
making that money, getting that job title, being at that company, working on that project -- are these success?
or is success simply doing the best job possible when writing code?
Programming is a practical skill, and its most common expression is industrial or commercial, not academic proofs of concept. The post addresses students who will enter industry; that's the focus of the professor's own post.
And I sympathize with many points being made here. However, the point of refactoring code is somewhat odd and detached from the real life constraints of programming in the wild.
Like, sure, in the ivory tower, you can confine yourself to nicely bounded problems and tidy little toy POCs. You can survive doing those things, because the selective pressures allow for it. I love those things, personally. They help me understand the nature of the thing. And in an academic setting, you can refine and refactor the hell out of those things to your heart's content (not that there is necessarily an objective end point to refactoring; code organization is subject to goals and constraints which can shift around).
But the reality of software in a commercial setting is not the tidy one you can expect in an academic setting. It's messy, subject to commercial pressures, to a hierarchy of values that doesn't place "refactoring" at the top of the list. And why would it? Whether you should refactor something is not just a question of whether it suits your conceptual tastes or even whether it is more maintainable. Unlike algorithms and principles and even techniques, software is not eternal. It is ephemeral. Its shelf life is bounded. It is a piece of a larger business process. You're not refining some theory or some grasp of a Platonic ideal. You're mostly just putting plumbing into place to get something done. Whether you should refactor something, and when, is a matter of prudential judgement, which is to say, of practical reason.
So, in light of that, these are actually quite absurd things to say given the difference between the privilege of academia and the gritty reality of industrial and commercial software development. If we were to force our professor into the world of industry, he would quickly lose his job, or he would quickly learn that some of his strange idealism is silly and detached from the reality that his students will face.
Code that's easier to understand is easier to: maintain, generate new features for, fix bugs, onboard new engineers, etc
Code that's well written: executes faster (saving computational costs), scales better, is more robust with higher uptime, reduces bandwidth, and so on.
The thing is the business people will never understand this. Why would they? They're not programmers. They're not in the weeds. But that's what your job is as an engineer. To find all these invisible costs.
I'm pretty confident the industry is spending billions unnecessarily. Hell, I'm sure Google alone is wasting over $100m/yr due to this.
Don't be penny wise and pound foolish. You're smarter than that. I know everyone here is smarter than that. So don't fall for the trap
I am well aware of stupidity in industry. However, I am also wise enough to recognize the opposite error. (I myself have academic tendencies and a background aligned with that. I have chosen jobs that paid less because the subject matter was more interesting to me. I'm not some vulgar, money-chasing techbro here.) The via media demands that we recognize the distinction between general truths and practical realities. As I wrote elsewhere in this thread, yes, properly refactored code is easier to maintain, easier to read, easier to change, and theoretically, commercially preferable. It also makes programming more satisfying, helping retention. But that describes a feature of such code. It doesn't tell us what the right course of action is in a particular situation. The notion that refactoring is unconditionally the right course of action when code is not in some ideal state is simply wrong. It really does depend on the situation. Sometimes, refactoring is the wrong thing to do.
I'm not making some outrageous claim here. This follows from basic truths about the nature of what it means to be practical, and if industry is anything, it is practical.
But I disagree that this is a path to unemployment. At work we go very fast, and yet I think fast is compatible with each of those points, just not in all situations.
Marc Brooker, distinguished eng at AWS, gives much more useful advice for industry, as you'd expect given his almost 30 years in industry.
https://brooker.co.za/blog/2026/03/25/ic-junior.html
I think it is a great shame that we live in a modern world where we do we must to survive regardless of how it makes us feel. I suspect it is the root of much suffering.
So maybe there’s something wrong with how we organise work?
With that said, I discovered that I’m an academic at heart after nine years in industry, though I left right before agentic coding took off. I got tired of “moving fast and breaking things,” of prioritizing shipping things and “the bottom line” over everything else.
With that said, agentic coding, in my opinion, only amplifies a long-standing trend: shipping matters more than craftsmanship. Even without LLMs, software engineering has long had a “git ‘er done!” attitude. To be fair, market effects matter greatly in software businesses. Quality matters insofar as the software must not be completely unusable, but many software companies succeed without building carefully-crafted software. Even Apple, which has a reputation for being perfectionistic, doesn’t make perfect software.
Academia has its own problems (publish-or-perish, low pay compared to other occupations that require heavy investments in education, politics, etc.), but it seems to allow more breathing room for computer scientists to focus on the craft of programming without as much pressure to ship (publish-or-perish aside).
I hope this is a pun on the content management system used to publish OP. It's forester[0], written in OCaml and parses TeX-like .tree files into semantic XML which uses browser XSLT to render the HTML.
View source on the page to get an idea.
Reminder of what the idealised web promise from decades ago was. Long gone. Very apt.
[0] https://www.forester-notes.org/index/index.xml
There is indeed something useful about trying to write elegant code. Not because others read it. But because that's how you learn about the engineering tradeoffs and abstraction that exist everywhere.
I generally agree with what he stated. We should clearly define our moral and technical red lines, the lines we will never cross, because they will be tested every day.
> * Cultivate your ability to think deeply. Do whatever it takes to carve out distraction-free bubbles for yourself in both space and time. This might mean saying no to technologies or patterns of working that others say are critical or inevitable.
An entry level engineer is going to be inundated with a lot of technology they've never heard of and a lot of power structures and group dynamics that are new to them. They're not even in a position to be making these judgements until they actually learn about how professional software development actually works.
> * Be intentional about deciding your own moral and ethical boundaries up front. Don't settle for the lie of compromising your principles "just for now" until you can find something better.
That's great, but also, there are not many entry level roles where someone is going to be in a position to be making these kinds of decisions, other than avoiding a company altogether.
> * Care deeply about your craft. Refactor code until it is clear and elegant. Write good documentation for other humans to read. Have the courage to go slowly, especially when everyone else is telling you that you need to go fast and cut corners.
Yikes. A software engineering job is not a PhD program. If you are refactoring your code and someone is telling you to hurry up, you should probably wrap it up. You need to ship your code or you won't have a job.
If programming is all about making the most money then by all means disregard everything he says.
>I do not and will not use the internet, in any form, for any purpose.
And you can understand the principles governing something without knowing all the concrete particulars of an instantiation. In fact, you rarely do.
I also believe learning the basics is essential before reviewing someone else's work. Whether that work is done by a human or machine.
“A good way to describe myself is as a generative AI vegetarian. You can find a fuller explanation—and many, many links—at the above essay by Sean Boots, which I agree with almost 100%.”
---
Given the capabilities of upcoming LLMs, I suspect that by mid-2027, most competent companies, outside specific niches, will not hire and might fire any non-senior “generative AI vegetarian” software developer.
Edit: I see, it's a new bit of slang:
https://news.ycombinator.com/item?id=47928885
Look at how people use LLMs these days. People frequently use them on new codebases to get up to speed on the code. Frankly, because it's a lot faster than grepping, profiling, and all the digging we'd normally do (though those still have benefits and you're still going to do them. Hell, the LLMs even do them). But how much of that could have been avoided had people just taken a few seconds to document their code? No one is saying sit down and document the whole thing, but "add a few comments when you add new functions" or "update comments in places you touch". If it costs you more than a minute of your time you're probably doing it wrong.
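To be concrete about the kind of minute-or-less documentation being argued for here, a hypothetical sketch in Python (the function and its purpose are invented for illustration, not taken from any codebase in this thread). The docstring captures the one thing grep can't: intent.

```python
def normalize_scores(scores, cap=100.0):
    """Clamp each score to [0, cap], then rescale to the 0-1 range.

    Intended for display only; callers that need raw totals should
    use the unclamped values upstream. Negative inputs map to 0.
    """
    return [min(max(s, 0.0), cap) / cap for s in scores]
```

Two sentences of intent like this is what saves the next reader (human or LLM) the archaeology.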
I'm tired of these arguments. People are turning molehills into mountains. It's so incredibly myopic. We waste so much fucking time on things because we're trying to move fast. But no one seems to understand the difference between speed and velocity. It never mattered how fast you go, it has always been about velocity. Going fast in the wrong direction is harming you, not helping. If you don't have the time to know if you're headed in the right direction or not then you're probably not.
But your gripe is really with cutting corners. Not documenting? That's cutting corners. Not refactoring? That's cutting corners. Not spending time understanding the code at multiple scopes? That's cutting corners. Those are all corners cut that end up wasting tons of man hours. Sure, they save you a few precious seconds or minutes now, but at the cost of hours or days in the future.
Here's the thing: if you don't take those shortcuts, then none of those tasks are hard. Even refactoring. But as soon as you start taking those shortcuts they start compounding. Then a year down the line your company is writing a blog post about how your code is 500x faster now that it's written in Rust (or whatever the cool kids use). If it's 500x faster, that's not because of a language change, it's because of tech debt. And like all debt it accumulates little by little, and it's the compounding interest that really kills you.
Sorry, I'm tired of cleaning up everybody's messes. Go ahead, move fast and break things. It's a great way to learn (I do it too!), but don't make others clean up your mess.
Stop buying into this bullshit of needing to move so fast. It's the same anti-pattern scammers use to get you to make poor decisions. Stop scamming yourselves
thinking about it a little more, i would personally prefer to use the term momentum rather than velocity or just plain speed -- we accrue more mass by adding code, features, etc. and shifting direction/increasing speed are both harder with greater mass.
Despite the common rhetoric you see in HN comments about how MBA programs only teach graduates how to cut costs by enshittifying, I actually found it a great education that made me a better engineer.
Anyway,
The best profs were the ones who'd worked in industry. One guy who taught finance worked on Wall Street and was fond of distinguishing between how the textbook taught a particular technique or fact, and how practitioners actually do it in real life. Got taught startup valuation by a guy who'd been a VC, competitive strategy by a guy who was a strategy consultant for companies you'd actually heard of, etc.
The worst profs were the ones like the guy who taught operations. He'd never worked a real job. Went straight from being a student to being a TA to a postdoc to a "research prof", whatever that means. All his examples and case studies were useless or overly simplistic to the point of being useless.
The fact that TFAuthor is concerned with polishing one's craft shows they're completely divorced from what actually happens outside the ivory tower. Typing code into a buffer has never been the hard part.
I think there is credence to his points.
Sadly, a childhood friend who teaches C/C++ at a community college where I grew up (and took said courses - not his) before college - would be a great sounding board on this.
And to the poster's qualm about deeper knowledge, AI does not know nuance. It's great for a lot of things...nuance is not one of them.
Obviously it's up to the practitioner to figure out how to make commercial imperatives and craftsmanship align. Maybe remembering that professor's lesson will lead to better outcomes for humanity on a timeline greater than the next quarter. Who knows. I'm just an idealistic 20+ YoE nobody being left behind at this point with nothing of value to contribute.
my uk mechanical engineering bachelors degree had a required module on the ethics of engineering which has always stuck in the back of my mind. i think we went over the bhopal disaster as a case study one week, although it was about 16 years ago now so i can't be sure.
i've rarely seen any ethics modules in computer science departments, at least here in the uk. and i think we sorely need them in general.
edit -- so i guess it's a UK thing xD though i am glad to hear that you folks in the US enjoyed your ethics modules too
'We should teach our Students what Industry doesn’t want', Kevin Ryan, https://dl.acm.org/doi/pdf/10.1145/3377814.3381719
'Are you sure your software will not kill anyone?', Nancy Leveson, https://dspace.mit.edu/handle/1721.1/136281.2
Edit: they do seem to have one now, so either I remembered wrong or they added it.
Edit 2: I remember enjoying my ethics class, we covered some of the usual examples, and also things like basic contract negotiations. But I think I still didn't register that these concerns were real at that time. It was easy to believe that I wouldn't be working on anything that impactful. This did change once I started work.
The case study i mentioned (it may not have been bhopal, but it was definitely based on something that happened in india) stands out for me because it really hit home about the impact and seriousness of some decisions we could end up making.
There was another time I remember the lecturer making a point of saying there was no single correct answer about something that caused a lengthy discussion. We would have to figure what's right/wrong out for ourselves going forward. That really stuck with me.
But when I started working and found myself doing equally cutting edge research, but genuinely for the public benefit, I realized I definitely wouldn't be comfortable with putting aside my morals like that. Maybe I didn't really believe this was an option back then.
I don’t think scientists usually have mandatory ethics classes and mathematicians certainly don’t, so if it falls under either of those departments it might’ve gotten skipped!
at the very least i have a wikipedia article on therac 25 to read through now. so thanks for that!
also, yea i remember really enjoying the ethics module too. lots of discussion and not always a clear answer. was very different to the rest of the "one correct maths answer" in a lot of the other modules.
In a perfect world I think the software industry would have instilled these same virtues- software is just as (or more) capable of causing harm as poor healthcare. Yet we seem to be racing to a dystopian future at record speed courtesy of the tech industry, and our modern egalitarian societies will not survive that transition.
The only time ethics in engineering was ever mentioned to me was in a class on applied number theory (cryptography), taught by a professor who had previously worked for the EFF. He went off-topic to tell us that many problems, like how to hit a target with a missile, may fascinate and compel us as engineers, but we shouldn't let that distract us into building instruments of death.
That course was an elective, and it was entirely possible to complete my degree without hearing a single mention of ethics.
There are many reasons I look back on my academic experience with disdain, but this one stands out to me.
Pretty good experience, too! Sometimes got distracted with general tech ethics rather than strictly professional ethics, but tbf that’s a very fun+timely topic
"A good way to describe myself is as a generative AI vegetarian. You can find a fuller explanation—and many, many links—at the above essay by Sean Boots, which I agree with almost 100%."
I've been tracking models trained entirely on out-of-copyright data, for example. I've not yet seen one of those which appears generally useful and didn't chuck in a scrape of the web or get fine-tuned on examples generated by a non-vegetarian model.
Andrej Karpathy can train a GPT-2 class model for less than $80 now, so at least the environmental cost of training may drop to a point that it's acceptable to LLM vegetarians: https://twitter.com/karpathy/status/2017703360393318587
Why do I care? This post is a great example. If you're a professor of computer science I really want you to be able to tinker with this fascinating class of models without violating your principles.
UPDATE: Huh, speaking of potentially vegetarian models, I just saw https://talkie-lm.com/introducing-talkie on the HN homepage https://news.ycombinator.com/item?id=47927903
I've explored a different out-of-copyright-trained model, Mr Chatterbox, before, but found it to have been mildly corrupted through the use of synthetic conversation pairs from Haiku and GPT-4o-mini - https://simonwillison.net/2026/Mar/30/mr-chatterbox/
Talkie isn't entirely pure either though: "Finally, we did another round of supervised fine-tuning, this time on rejection-sampled multi-turn synthetic chats between Claude Opus 4.6 and talkie, to smooth out persistent rough edges in its conversational abilities."
I don't need computer science professors to like LLMs, but I still want them to be able to poke at them with a stick without feeling like they are violating their principles regarding energy usage and unlicensed training data.
Why? Language models are interesting from a technical perspective, but so are tons of areas of CS. There's nothing inherently virtuous about using an LLM.
I suspect that even if you reduced the cost of training or any other real world metric, the goalposts would immediately move. It seems to me that it has never been about those things, but simply about the feeling of superiority one can attain by eschewing something seen as trending.
* real programmers manage memory, it's a craft
* real programmers don't drag and drop
* real programmers don't use intellisense
* real programmers don't need stack overflow
* real programmers don't tab-complete
* real programmers don't need copilot
* real programmers don't use llms <- you are here
This kind of hyperbole repeated ad infinitum by haters online is not constructive, IMO. I would be quite certain that the manufacture of whatever computing device the author is accessing the internet on used far more resources and exploited far more human labor than training an ML model ever did.
How constructive are ad hominem arguments?
The first general purpose, programmable computer was designed in 1945 to calculate artillery firing tables for the US Army and was immediately used to help design nuclear weapons. Computers and all technology has always been, and will always be, used as a weapon (either directly or indirectly).
I find that when I get back into exercise and reading so much more of my life falls into place. These are things that I never have enough time for until I start doing them regularly at which point I realize that they actually enable me to have more time to do things, not less.
http://ozark.hendrix.edu/~yorgey/forest/009L/index.xml
* Monoids: Theme and variations (functional pearl): http://ozark.hendrix.edu/~yorgey/pub/monoid-pearl.pdf
Currently struggling hard to achieve this. We all know everything fights for our attention nowadays, but I can assure you that you don't have an idea of the degree this happens until you actively try to fight it.
Especially relevant for students I think, since they are hurting themselves most by relying on LLMs. Just like how young children are forced to do math by hand instead of using calculators to build intuition and memory, students should aim to do things manually to build their skills.
Go make that toy website, game, OS, emulator or programming language. Read specifications and try implementing them yourself. You aren't in an environment that requires you to churn out features, you can explore!
But the real world and money blended in creates a weird, corrupt mix, just like everything. Not to mention there is a real risk for people who already have their feet in the industry but are not yet senior enough to survive or to control, for example, the AI replacements. And more than likely, the seniority required is way higher than one would think. In the end, economic drives are the dominant forces.
It's important to distinguish between the practical and the theoretical. The flippant answers of "idealists" refuse to engage with the messy domain of facts, because it is aesthetically offensive or challenges their comfort or their nostalgia. The steam engine wasn't inevitable either, but people did choose it. How many today in this forum grumble about the loss of a world when the steam engine replaced old ways of working? The next generation won't have these sorts of hangups, just as we don't have them about steam engines. Or, if you like, how many pine for the days of assembly programming?
When something proves to be too useful industrially to opt out of, then it will be adopted. People will choose it. If you want to be Amish, go for it, but most people don't.
It was important to say, but I very much doubt there was any courage involved.
He put his name and career on it. That takes courage in my opinion.
A one-word summary of TFA would be: "wisdom".
Build your own job-portable software libraries. Yes, you might need a lawyer.
Start now.
Not everything is about making money anyways.
In the case of present-day LLMs, the vast majority of the public finds them to be more harmful than beneficial.
Why accept a decreasing quality of life instead of sensible regulation?
Examples of ridiculous and incorrect beliefs once held by majorities:
- Spontaneous generation
- "Miasma" causes disease
- Earth is at the centre of the universe
- The heart is the seat of thought and the brain is useless
- Cold weather causes colds
Don't trust "the vast majority" to get anything right, ever.
This suggests to me the underlying concern is "but I won't get paid for my craft!".
Hell hath no fury like a vested interest masquerading as a moral principle?
Why not encourage your students to be curious about emerging technology, and to engage with society as an informed citizen?
This reeks of political activism, and it’s reminiscent of the general BlueSky-esque tone of the Correspondents Dinner shooter’s manifesto.
lol.
We millennials are in a position to start giving advice the way boomers used to do with us, now that school is looking more like a couple decades ago instead of just one.
But, unlike those boomers, we don't watch the nightly news: we snort it from a tiny screen all day long from sources hyper engineered to feed off our anxiety.
So we give all this super pessimistic advice.
"Back in my day, I got a job at google right after college and it was awesome! My code was elegant! You guys are FUCKED!"
I agree that AI is creating mega changes, many very bad, but that doesn't mean that it's a good idea or even true to tell GenZ people they're fucked. We don't know if they're fucked.
I think they could have a ton of fun with software and I think it's OK to be encouraging about that.
From an information theory perspective, LLMs are just regurgitating content from a loss-ily compressed training set.
It just turns out that like 95% of software we write is extremely repetitive rehashed shit globbed together. We just haven't found ways to abstract a lot of the redundant code well enough yet so here we are, stuck with the stupid robot.
That remaining 5% is stuff that's truly never been done before. If you ask an LLM to come up with a fully new sorting algorithm it's going to give you worthless garbage; maybe it'll get lucky if you burn a nuclear power plant's worth of tokens in an infinite-keyboard-monkeys way.
All this is to say, if we want the field to actually progress we still need somebody with some knowledge about how a computer actually works.
The author is getting some grief in this thread from the Eng side, but I’d like to add a bit of grief from the direct opposite side: the philosophical one. It will never not baffle me to see academics assume they are the first people to ever think about topics like ‘what if technology was used for ill’!
I don't think he believes he is the first or only one to think this. He is just safe enough or at least hopes he is to speak out against the ills of technology. Do you know how many engineers cannot speak up right now for fear of losing their jobs? Lots.
I've been struggling to figure out what "slower" would look like when working in industry. If everyone's working 2x faster, how do you slow down meaningfully without getting axed?
As I got older and more experienced, I didn't produce code faster. I just produced the right code. If you don't have to try five different things, and debug them along the way, you can be a lot faster without "going fast".
I've even seen a guy spend most of his work hours as a mentor even though his title was something like senior engineer. If anyone fired him that company would tank so fast...
After getting my CS degree I deliberately went into a sector where I suspected this kind of attitude doesn't exist (defense in my case) because already then I felt the whole web/startup culture had very little to do with software engineering.
Just get it to work reliably the cheapest and quickest way possible. This ‘craft’ stuff is just too much.
And while I don't have a problem with career instructors/academics generally, they can be so dramatic. :)
I have no doom and gloom at all for my IT students. Opportunities and crises really are the same thing in the real world; I just tell them, just learn and enjoy learning the tech and keep an eye out for how you can be a problem solver.
You'll be fine.
We need to discontinue the H-1B visa and have Americans programming again. Americans who are empowered to push back when management crosses an ethical line.
It’ll be interesting to see