I think if your job is to assemble a segment of a car based on a spec using provided tools and pre-trained processes, it makes sense if you worry that giant robot arms might be installed to replace you.
But if your job is to assemble a car in order to explore what modifications to make to the design, experiment with a single prototype, and determine how to program those robot arms, you’re probably not thinking about the risk of being automated.
I know a lot of counter arguments are a form of, “but AI is automating that second class of job!” But I just really haven’t seen that at all. What I have seen is a misclassification of the former as the latter.
This is actually a really good description of the situation. But I will say, as someone who prided himself on being the second type you described, I am becoming very concerned about how much of my work was misclassified. It does feel like a lot of the work I did in the second class is being automated; maybe it had previously just overinflated my ego.
I keep wondering how much of the AI embrace is marketing driven. Yes, it can produce value and cut corners. But it feels like Musk's 2016 prediction of full self-driving, which never happened. With IPO and stock valuations closely tied to hype, I wonder if we are all witnessing a giant bubble in the making.
How much of this is mass financial engineering rather than real value? I keep reading nudges about how everyone should have Google or other AI stock in their portfolio/retirement accounts.
Maybe we haven't seen much economic value or productivity increase given all the AI hype, but I don't think we can deny that programming has been through a paradigm shift: humans aren't the only ones writing code anymore, and the amount of code written by humans is, I would say, decreasing.
There's nothing to wonder about. It's obviously marketing.
The whole narrative of "inevitability" is the stock behavior of tech companies who want to push a product onto the public. Why fight the inevitable? All you can do is accept and adapt.
And given how many companies ask vendors whether their product "has AI" without having the slightest inkling of what that even means or whether it even makes sense, as if it were some kind of magical fairy dust - yeah, the stench of hype is thick enough you could cut it with a knife.
Of course, that doesn't mean it lacks all utility.
I realize many are disappointed (especially by technical churn, star-based-development JS projects on github without technical rigour). I don't trust any claim on the open web if I don't know the technical background of the person making it.
However I think - Nadh, ronacher, the redis bro - these are people who can be trusted. I find Nadh's article (OP) quite balanced.
> Software development, as it has been done for decades, is over.
I'm pretty sure the way I was doing things in 2005 was completely different from 2015. Same for 2015 and 2025. I'm not old enough to know how they were doing things in 1995, but I'm pretty sure it was very different from 2005.
For sure, we are going through some big changes, but there is no "as it has been done for decades".
I don't think things have changed that much in the time I've been doing it (roughly 20 years). Tools have evolved and new things were added but the core workflow of a developer has more or less stayed the same.
I don't think that's true, at least for everywhere I've worked.
Agile has completely changed things, for better or for worse.
Being a SWE today is nothing like 30 years ago, for me. I much preferred the earlier days as well, as it felt far more engineered and considered as opposed to much of the MVP 'productivity' of today.
I also wonder what those people have been doing all this time... I also have been mostly working as a developer for about 20 years and I don't think much has changed at all.
I also don't feel less productive or lacking in anything compared to the newer developers I know (including some LLM users) so I don't think I am obsolete either.
At some point I could straight-up call functions from the Visual Studio debugger Watch window instead of editing and recompiling. That was pretty sick.
Yes I know, Lisp could do this the whole time. Feel free to offer me a Lisp job drive-by Lisp person.
Talk is never cheap. Communicating your thoughts to people without the exact same kind of expertise as you is the most important skill.
This quote is from Torvalds, and I'm quite sure that if he weren't able to write eloquent English no one would know Linux today.
Code is important when it's the best medium to express the essence of your thoughts. Just like a composer cannot express the music in his head with English words.
Talk is even cheaper; still, show me the code. People claim 10x productivity, which should translate to 10x the work done in a month, yet even with Opus 4.5 out since November 2025 I haven't seen signs of this. AI makes the level of complexity of modern systems bearable; it was getting pretty bad before, and AI kinda saved us. A non-trivial React app is still a pain to write. Creating a harness for the non-deterministic API that AI provides is also a pain. At least we don't have to fight through typing errors or search for relevant examples before copying and pasting. AI is good at automating typing, but the lack of reasoning and the knowledge cutoff still make coding very tedious.
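On the harness point: the usual shape is to wrap the non-deterministic model call behind parse, validate, retry. A minimal sketch in Go (all names here, `withRetries`, `Result`, the fake model, are hypothetical illustrations, not any real API):

```go
package main

import (
	"encoding/json"
	"errors"
	"fmt"
)

// Result is the shape we expect the model to return.
type Result struct {
	Summary string   `json:"summary"`
	Tags    []string `json:"tags"`
}

// askModel is a stand-in for a real LLM API call; it returns raw text
// that may or may not be valid JSON.
type askModel func(prompt string) (string, error)

// withRetries wraps a non-deterministic call: parse and validate the
// output, and re-ask up to maxTries times before giving up.
func withRetries(ask askModel, prompt string, maxTries int) (*Result, error) {
	var lastErr error
	for i := 0; i < maxTries; i++ {
		raw, err := ask(prompt)
		if err != nil {
			lastErr = err
			continue
		}
		var r Result
		if err := json.Unmarshal([]byte(raw), &r); err != nil {
			lastErr = fmt.Errorf("bad JSON: %w", err)
			continue
		}
		if r.Summary == "" {
			lastErr = errors.New("missing summary")
			continue
		}
		return &r, nil
	}
	return nil, fmt.Errorf("gave up after %d tries: %w", maxTries, lastErr)
}

func main() {
	// Fake model: returns chatter on the first call, valid JSON on the second.
	calls := 0
	fake := func(prompt string) (string, error) {
		calls++
		if calls == 1 {
			return "Sure! Here is your JSON:", nil // not parseable
		}
		return `{"summary":"ok","tags":["a"]}`, nil
	}
	r, err := withRetries(fake, "summarise this", 3)
	fmt.Println(r.Summary, err) // prints "ok <nil>"
}
```

The tedious part is exactly this scaffolding: every output channel from the model needs its own schema, validator, and give-up policy.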
The original phrase "talk is cheap" is generally used to mean "it's easy to say a whole lot of shit, and that talk often has no real value." So this clever headline is telling me the code has even less value than the talk. That alone betrays a level of ignorance I would expect from the author's work. I went to read the article, and it confirmed my suspicion.
I see a lot of the same (well thought out) pushback on here whenever these kinds of blind hype articles pop up.
But my biggest objection to this "engineering is over" take is one that I don't see much. Maybe this is just my Big Tech glasses, but I feel like for a large, mature product, if you break down the time and effort required to bring a change to production, the actual writing of code is like... ten, maybe twenty percent of it?
Sure, you can bring "agents" to bear on other parts of the process to some degree or another. But their value to the design and specification process, or to live experiment, analysis, and iteration, is just dramatically less than in the coding process (which is already overstated). And that's without even getting into communication and coordination across the company, which is typically the real limiting factor, and in which heavy LLM usage almost exclusively makes things worse.
Takes like this seem to just have a completely different understanding of what "software development" even means than I do, and I'm not sure how to reconcile it.
To be clear, I think these tools absolutely have a place, and I use them where appropriate and often get value out of them. They're part of the field for good, no question. But this take that it's a replacement for engineering, rather than an engineering power tool, consistently feels like it's coming from a perspective that has never worked on supporting a real product with real users.
I'm not sure you're actually in disagreement with the author of this piece at all.
They didn't say that software engineering is over - they said:
> Software development, as it has been done for decades, is over.
You argue that writing code is 10-20% of the craft. That's the point they are making too! They're framing the rest of it as the "talking", which is now even more important than it was before thanks to the writing-the-code bit being so much cheaper.
> Software development, as it has been done for decades, is over.
Simon, I guess vb-8558's comment in here is really nice (definitely worth a read); they mention how much coding has changed from, say, 1995 to 2005 to 2015 to 2025.
Directly copying a line from their comment: "For sure, we are going through some big changes, but there is no 'as it has been done for decades'."
My point is that this pure "code is cheap, show me the talk" mentality is weird/net-negative (even if I may talk more than I code), simply because code and coding practices are something I can learn and hone over my experience, whereas the talk mostly amounts to non-engineers trying to create software - which is all great, but without really understanding the limitations (that still exist).
So the point I am trying to make is that when the OP said code is 10-20% of the craft, they didn't mean the rest is talk. They meant the rest is architectural decisions and everything else surrounding the code. Quite frankly, the idea behind AI/LLMs is to automate that too and convert it into pure text, and I feel like the average layman significantly overestimates what AI can and cannot do.
So the whole notion of "show me the talk", at least as more people from non-engineering backgrounds try it, might be net negative: they don't really understand the tech as it is, and quite frankly even engineers are having a hard time catching up with everything that is happening.
I do feel like the AI industry just has too many words floating around right now. To be honest, I don't want to talk right now; let me use the tool, see how it goes, and have a moment of silence. The whole industry is moving even faster than in the new-JS-framework-every-day days.
To have a catchy end to my comment: There is just too much talk nowadays. Show me the trust.
I do feel like information has become saturated and we are transitioning from the "information" age to the "trust" age. Human connections, between businesses and elsewhere, matter more now than ever. I wish to support projects which are sustainable and fair, driven by passion; then I might be okay with the AI use case, imo.
> Takes like this seem to just have a completely different understanding of what "software development" even means than I do, and I'm not sure how to reconcile it.
You're absolutely right about coding being less than 20% of the overall effort. In my experience, 10% is closer to the median. This will get reconciled as companies apply LLMs and track the ROI. Over a single year the argument can be made that "We're still learning how to leverage it." Over multiple years the 100x increase in productivity claims will be busted.
We're still on the upslope of Gartner's hype cycle. I'm curious to see how rapidly we descend into the Trough of Disillusionment.
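The 10-20% figure above also puts a hard ceiling on the claims, via the standard Amdahl's-law argument: if only a fraction p of the work is accelerated by factor s, the overall speedup is 1/((1-p)+p/s), at most 1/(1-p) even as s goes to infinity. A quick sketch:

```go
package main

import "fmt"

// amdahl returns the overall speedup when a fraction p of total work
// is sped up by factor s (s -> infinity gives the ceiling 1/(1-p)).
func amdahl(p, s float64) float64 {
	return 1 / ((1 - p) + p/s)
}

func main() {
	// Even if coding (10% of the effort) becomes essentially free,
	// end-to-end throughput improves by at most ~1.11x, nowhere near 100x.
	fmt.Printf("ceiling at p=0.10: %.2fx\n", amdahl(0.10, 1e12)) // prints "ceiling at p=0.10: 1.11x"
	fmt.Printf("ceiling at p=0.20: %.2fx\n", amdahl(0.20, 1e12)) // prints "ceiling at p=0.20: 1.25x"
}
```

Which is exactly why tracking ROI over multiple years, rather than one, should settle the 100x question.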
Yeah, in a lot of ways, my assertion is that “Code is cheap” actually means the opposite of what everyone thinks it does. Software engineering is even more about the practices we’ve been developing over the past 20 or so years, not less.
Like Linus’ observation still stands: show me that the code you provided does exactly what you think it should. It’s easy to prompt a few lines into an LLM; it’s another thing to know how to safely and effectively change low-level code.
Liz Fong-Jones told a story about this on LinkedIn from Honeycomb: she got called out for dropping a bad set of PRs in a repo because she didn’t really think about the way the change was presented.
Before getting through to the article, I had in fact completely misinterpreted the intent of the headline; I assumed it was anti-LLM-hype because of "code is cheap". Code is, indeed, cheap. Not because of LLMs, but because it has always been so.
I think what we're witnessing is hundreds of thousands, if not millions, of people who aren't software engineers, who have played around and dabbled with the idea of coding but who are not professionals, suddenly having their world expanded. As a hobbyist, they can suddenly create proof-of-concepts for all kinds of things without bothering to learn anything. And, not knowing better, they think that this is what software engineering is: just writing code for cool toys. If it compiles, it's good enough, right?
Of course, the difficult part of the profession has always been the engineering. You need to turn that code into a product that survives contact with the real world, that survives contact with thousands or millions of users, that is secure, performant, as bug-free as possible, that is easily extendable and maintainable for years or decades to come. That is and has always been the difficult part, and LLMs have shown zero indication whatsoever of being able to replace professionals in this role.
And so here we have a blog post written about Linus merging an LLM-generated commit on his toy project, once again completely missing the point of the difference between a hobby and a profession, and ignorantly proclaiming that the former will replace the latter.
> Historically, it would take a reasonably long period of consistent effort and many iterations of refinement for a good developer to produce 10,000 lines of quality code that not only delivered meaningful results, but was easily readable and maintainable. While the number of lines of code is not a measure of code quality—it is often the inverse—a codebase with good quality 10,000 lines of code indicated significant time, effort, focus, patience, expertise, and often, skills like project management that went into it. Human traits.
> Now, LLMs can not only one-shot generate that in seconds,
Evidence, please. This ascribes many qualities to LLM code that I haven't (personally) seen at that scale. I think if you want an 'easily readable and maintainable' codebase of 10k lines with an LLM, you need somebody to review its contributions very closely, and it probably isn't going to be generated from a one-shot prompt.
Okay, I was writing a comment to simon (and I have elaborated some there), but I wanted this to be something catchy that shows how I feel and that people might discuss too.
Both Code and talk are cheap. Show me the trust. Show me how I can trust you. Show me your authenticity. Show me your passion.
Code used to be the sign of authenticity. This is what's changing. You can no longer guarantee that large amounts of code are authentic, something which previously used to be the case (for the most part).
I have been shouting into the void many times about it but Trust seems to be the most important factor.
Essentially, I am speaking from a consumer perspective. Suppose you write AI-generated code and deploy it, having talked to an AI to get there. Now I can do the same and create a project that is sometimes (mostly?) more customizable to my needs, for free or very cheap.
So you have to justify why you are charging me. I feel that's only possible if there is something additional adding value: trust. I trust decisions made by people who feel like they take me and my ideas into account - not ripping me off while actively helping. I don't know how to explain this, but the thing I hate most is the feeling of getting ripped off. A justifiable, sustainable business that is open and transparent about the whole deal - what they get and what I get - just earns my respect and my trust. Quite frankly, I am not seeing many people do that, but hopefully this changes.
I am curious now what you guys on HN think about this and what trust means to you in this (new?) ever-changing world.
Like, y'know, I feel like everything changes all the time, but at the same time nothing really changes either. We are still humans, we will always be humans, and we are driven by our human instincts. Perhaps the community I envision is a more tight-knit community online, not complete mega-sellers.
This "Code is cheap. Show me the talk." punchline gets overused as bait these days. It is an alright article, but that's a lot of words to tell us something we already know.
Also credit where credit is due. Origin of this punchline:
> Ignoring outright bad code, in a world where functional code is so abundant that “good” and “bad” are indistinguishable, ultimately, what makes functional AI code slop or non-slop?
I'm sorry, but this is an indicator for me that the author hasn't had a critical eye for quality in some time. There is massive overlap between "bad" and "functional", more than ever. The barrier to entry to programming got irresponsibly low for a while there, and it's going to get worse. The toolchains are not in a good way. Windows and macOS are degrading in both performance and usability, LLVM still takes 90% of a compiler's CPU time in unoptimized builds, Notepad has AI (and crashes), simple social (mobile) apps are >300 MB downloads/installs when eight years ago they hovered around a tenth of that, a site like Reddit only works on hardware which is only "cheap" in the top 3 GDP nations in the world... The list goes on. Whatever we're doing, it is not scaling.
I'd think there'll be an initial dip in code quality (compared to human output) due to the immaturity of the "AI machinery". But over time, at mass scale, we are going to see an improvement in the quality of software artifacts.
It is easier to 'discipline' the top 5 AI agents on the planet than to get a million distributed devs ("artisans") to produce high-quality results.
It's like the clothing or manufacturing industry, I think. Artisans could produce better individual results than the average industrial machinery, at least initially. But over time, industrial machinery could match or even beat the average artisan, while decisively winning on scale, speed, energy efficiency, and so on.
> It is easier to 'discipline' the top 5 AI agents on the planet than to get a million distributed devs ("artisans") to produce high-quality results.
Your take is essentially "let's live in shoe boxes: packaging pipelines produce them cheaply en masse, so who needs slowpoke construction engineers and architects anymore?"
In January 2026, prototype code is cheap. Shitty production code is cheap. If that's all you need—which is sometimes the case—then go for it.
But actually good code, with a consistent global model for what is going on, still won't come from Opus 4.5 or a Markdown plan. It still comes from a human fighting entropy.
Getting eyes on the code still matters, whether it's plain old AI slop, or fancy new Opus 4.5 "premium slop." Opus is quite smart, and it does its best.
But I've tried seriously using a number of high-profile, vibe-coded projects in the last few weeks. And good grief what unbelievable piles of shit most of them are. I spend 5% of the time using the vibe-coded tool, and 95% of the time trying to uncorrupt my data. I spend plenty of time having Opus try to look at the source to figure out what went wrong in 200,000 lines of vibe-coded Go. And even Opus is like, "This never worked! It's broken! You see, there's a race condition in the daemonization code that causes the daemon to auto-kill itself!"
And at that point, I stop caring. If someone can't be bothered to even read the code Opus generates, I can't be bothered to debug their awful software.
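For what it's worth, the self-killing-daemon failure described above is a classic ordering race. A toy reconstruction in Go (purely hypothetical, not the actual project's code; `registry` and its methods are invented for illustration): a watchdog that checks a shared PID slot before the daemon has registered itself will flag the daemon as stale and kill it.

```go
package main

import (
	"fmt"
	"os"
	"sync"
)

// registry holds the PID of the currently blessed daemon.
// A zero slot means "no daemon registered yet".
type registry struct {
	mu  sync.Mutex
	pid int
}

func (r *registry) register(pid int) {
	r.mu.Lock()
	defer r.mu.Unlock()
	r.pid = pid
}

// shouldKill reports whether the watchdog would terminate the given pid.
// The bug: an empty slot is treated as "this process is stale", so a
// watchdog that fires before register() tells the daemon to kill itself.
func (r *registry) shouldKill(pid int) bool {
	r.mu.Lock()
	defer r.mu.Unlock()
	return r.pid != pid
}

func main() {
	r := &registry{}
	self := os.Getpid()

	// The bad interleaving, forced here for determinism: watchdog runs first.
	fmt.Println("watchdog before register:", r.shouldKill(self)) // true: self-kill

	r.register(self)
	fmt.Println("watchdog after register:", r.shouldKill(self)) // false: safe
}
```

In real daemonization code the two steps run in separate goroutines or processes, so whether the daemon survives depends on scheduling, which is exactly why the bug ships.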
>> Remember the old adage, “programming is 90% thinking and 10% typing”? It is now, for real.
> Proceeds to write literal books of markdown to get something meaningful
>> It requires no special training, no new language or framework to learn, and has practically no entry barriers—just good old critical thinking and foundational human skills, and competence to run the machinery.
> Wrote a paragraph about how it is important to have serious experience to understand the generated code prior to that
>> For the first time ever, good talk is exponentially more valuable than good code. The ramifications of this are significant and disruptive. This time, it is different.
> This time is different bro I swear, just one more model, just one more scale-up, just one more trillion parameters, bro we’re basically at AGI
AI was never the problem; we have been seeing a downgrade in software in general, and AI just amplifies how badly you can build software. The real problem is people who just don't care about the craft pushing out human slop, whether because the business goes "we can come back to that, don't worry" or what have you. At least with AI, me coming back to something is right here and right now, not never, or when it causes a production-grade issue.
I for one am quite happy to outsource this kind of simple memorisation to a machine. Maybe it's the thin end of the slippery slope? It doesn't FEEL like it is, but...
No need to wonder, just look at the numbers - investments versus revenue are hugely disparate, growth is plateauing.
1995 vs 2005 was definitely a larger change than subsequent decades; in 1995 most information was gathered through dead trees or reverse engineering.
Did you get very far in? They're referring to a pretty specific contextual usage of the phrase (Linus, back in 2000), not the adage as a whole.
I think I made it to about here haha
> One can no longer know whether such a repository was “vibe” coded by a non-technical person who has never written a single line of code, or an experienced developer, who may or may not have used LLM assistance.
I am talking about what it means to invert that phrase.
I read the whole thing, and GP is right. Code is important, whether it is generated or handwritten. At least until true AGI is here.
It's directly an inversion of https://www.goodreads.com/quotes/437173-talk-is-cheap-show-m...
Recently Economic Media made a relevant video about all of this too: How Replacing Developers With AI is Going Horribly Wrong [https://www.youtube.com/watch?v=ts0nH_pSAdM]
My (point?) is that this pure mentality of code is cheap show me the talk is weird/net negative (even if I may talk more than I code) simply because code and coding practices are something that I can learn over my experience and hone in whereas talk itself constitutes to me as non engineers trying to create software and that's all great but not really understanding the limitations (that still exist)
So the point I am trying to make is that I feel as if when the OP mentioned code is 10-20% of the craft, they didn't mean the rest is talk. They meant all the rest are architectural decisions & just everything surrounding the code. Quite frankly, the idea behind Ai/LLM's is to automate that too and convert it into pure text and I feel like the average layman significantly overestimates what AI can and cannot do.
So the whole notion of show me the talk atleast in a more non engineering background as more people try might be net negative not really understanding the tech as is and quite frankly even engineers are having a hard time catching up with all which is happening.
I do feel like that the AI industry just has too many words floating right now. To be honest, I don't want to talk right now, let me use the tool and see how it goes and have a moment of silence. The whole industry is moving faster than the days till average js framework days.
To have a catchy end to my comment: There is just too much talk nowadays. Show me the trust.
I do feel like information has become saturated and we are transitioning from the "information" age to "trust" age. Human connections between businesses and elsewhere matter the most right now more than ever. I wish to support projects which are sustainable and fair driven by passion & then I might be okay with AI use case imo.
> Takes like this seem to just have a completely different understanding of what "software development" even means than I do, and I'm not sure how to reconcile it.
You're absolutely right about coding being less than 20% of the overall effort. In my experience, 10% is closer to the median. This will get reconciled as companies apply LLMs and track the ROI. Over a single year the argument can be made that "We're still learning how to leverage it." Over multiple years the 100x increase in productivity claims will be busted.
We're still on the upslope of Gartner's hype cycle. I'm curious to see how rapidly we descend into the Trough of Disillusionment.
Yeah, in a lot of ways my assertion is that "code is cheap" actually means the opposite of what everyone thinks it does. Software engineering is even more about the practices we've been developing over the past 20 or so years, not less.
Like, Linus' observation still stands. Show me that the code you provided does exactly what you think it should. It's easy to prompt a few lines out of an LLM; it's another thing to know exactly how to safely and effectively change low-level code.
Liz Fong-Jones told a story on LinkedIn about this at Honeycomb: she got called out for dropping a bad set of PRs in a repo because she didn't really think about the way the change was presented.
Before getting through to the article, I had in fact completely misinterpreted the intent of the headline; I assumed it was anti-LLM-hype because of "code is cheap". Code is, indeed, cheap. Not because of LLMs, but because it has always been so.
I think what we're witnessing is hundreds of thousands, if not millions, of people who aren't software engineers, who have played around and dabbled with the idea of coding but who are not professionals, suddenly having their world expanded. As a hobbyist, they can suddenly create proof-of-concepts for all kinds of things without bothering to learn anything. And, not knowing better, they think that this is what software engineering is: just writing code for cool toys. If it compiles, it's good enough, right?
Of course, the difficult part of the profession has always been the engineering. You need to turn that code into a product that survives contact with the real world, that survives contact with thousands or millions of users, that is secure, performant, as bug-free as possible, that is easily extendable and maintainable for years or decades to come. That is and has always been the difficult part, and LLMs have shown zero indication whatsoever of being able to replace professionals in this role.
And so here we have a blog post written about Linus merging an LLM-generated commit on his toy project, once again completely missing the point of the difference between a hobby and a profession, and ignorantly proclaiming that the former will replace the latter.
They're also great for writing design docs, which is another significant time sink for SWEs.
From the article:
Evidence, please. That ascribes many qualities to LLM code that I haven't (personally) seen at that scale. I think if you want an "easily readable and maintainable" codebase of 10k lines from an LLM, you need somebody to review its contributions very closely, and it probably isn't going to be generated with a one-shot prompt.
Okay, I was writing a comment to simon (I have elaborated some there, but I wanted this to be something catchy that shows how I feel and that people might discuss too):
Both Code and talk are cheap. Show me the trust. Show me how I can trust you. Show me your authenticity. Show me your passion.
Code used to be the sign of authenticity. This is what's changing. You can no longer guarantee that large amounts of code are authentic, something which previously used to be the case (for the most part).
I have been shouting into the void many times about it but Trust seems to be the most important factor.
Essentially, I am speaking from a consumer perspective. Suppose you write AI-generated code and deploy it, having talked to an AI (or around it) to get there. Now I can do the same and create a project that is sometimes (mostly?) more customized to my needs, for free or very cheap.
So you have to justify why you are charging me. I feel that's only possible if there is something of additional value: trust. I trust the decisions you make, and personally I trust people and decisions that feel like they take me and my ideas into account. So, essentially, actively helping me without ripping me off. I don't know how to explain it, but the thing I hate most is the feeling of getting ripped off. A justifiable, sustainable business that is open and transparent about the whole deal, what they get and what I get, just earns my respect and my trust. Quite frankly, I am not seeing many people do that, but hopefully this changes.
I am curious now what you folks on HN think about this, and what trust means to you in this (new?) ever-changing world.
Like, y'know, I feel like everything changes all the time, but at the same time nothing really changes either. We are still humans, we will always be humans, and we are driven by our human instincts. Perhaps the community I envision is a more tight-knit online community, not a crowd of mega-sellers.
Thoughts?
This "Code is cheap. Show me the talk." punchline gets overused as bait these days. It's an alright article, but that's a lot of words to tell us something we already know.
Also credit where credit is due. Origin of this punchline:
https://nitter.net/jason_young1231/status/193518070341689789...
https://programmerhumor.io/ai-memes/code-is-cheap-show-me-th...
> Ignoring outright bad code, in a world where functional code is so abundant that “good” and “bad” are indistinguishable, ultimately, what makes functional AI code slop or non-slop?
I'm sorry, but this is an indicator for me that the author hasn't had a critical eye for quality in some time. There is massive overlap between "bad" and "functional," more than ever. The barrier to entry for programming got irresponsibly low for a while there, and it's going to get worse. The toolchains are not in good shape. Windows and macOS are degrading in both performance and usability, LLVM still takes 90% of a compiler's CPU time in unoptimized builds, Notepad has AI (and crashes), simple social (mobile) apps are >300 MB downloads when eight years ago they hovered around a tenth of that, a site like Reddit only works well on hardware that is "cheap" only in the top 3 GDP nations in the world... The list goes on. Whatever we're doing, it is not scaling.
This is the "artisanal clothing argument".
I'd think there'll be an initial dip in code quality (compared to human output) due to the immaturity of the "AI machinery." But over time, on a mass scale, we are going to see an improvement in the quality of software artifacts.
It is easier to "discipline" the top 5 AI agents on the planet than to get a million distributed devs ("artisans") to produce high-quality results.
It's like the clothing or manufacturing industry, I think. Artisans were able to produce better individual results than the average industrial machinery, at least initially. But over time, industrial machinery could match or even beat the average artisan, while decisively winning on scale, speed, energy efficiency, and so on.
> This is the "artisanal clothing argument".
> it is easier to 'discipline' the top 5 AI agents in the planet - rather than try to get a million distributed devs ("artisans") to produce high quality results.
Your take is essentially "let's all live in shoe boxes; packaging pipelines produce them cheaply en masse, so who needs slowpoke construction engineers and architects anymore?"
In January 2026, prototype code is cheap. Shitty production code is cheap. If that's all you need—which is sometimes the case—then go for it.
But actually good code, with a consistent global model for what is going on, still won't come from Opus 4.5 or a Markdown plan. It still comes from a human fighting entropy.
Getting eyes on the code still matters, whether it's plain old AI slop, or fancy new Opus 4.5 "premium slop." Opus is quite smart, and it does its best.
But I've tried seriously using a number of high-profile, vibe-coded projects in the last few weeks. And good grief what unbelievable piles of shit most of them are. I spend 5% of the time using the vibe-coded tool, and 95% of the time trying to uncorrupt my data. I spend plenty of time having Opus try to look at the source to figure out what went wrong in 200,000 lines of vibe-coded Go. And even Opus is like, "This never worked! It's broken! You see, there's a race condition in the daemonization code that causes the daemon to auto-kill itself!"
And at that point, I stop caring. If someone can't be bothered to even read the code Opus generates, I can't be bothered to debug their awful software.
Lots of words to say that “now” communicating in regular human language is important.
What soft-skill buzzword will be the next one as the capital owners take more of the supposed productivity profits?
>> Remember the old adage, “programming is 90% thinking and 10% typing”? It is now, for real.
> Proceeds to write literal books of markdown to get something meaningful
>> It requires no special training, no new language or framework to learn, and has practically no entry barriers—just good old critical thinking and foundational human skills, and competence to run the machinery.
> Wrote a paragraph about how it is important to have serious experience to understand the generated code prior to that
>> For the first time ever, good talk is exponentially more valuable than good code. The ramifications of this are significant and disruptive. This time, it is different.
> This time is different bro I swear, just one more model, just one more scale-up, just one more trillion parameters, bro we’re basically at AGI
code is cheap, show me the prompt
OK, fuck it, show me the demo (without staging it). show me the result.
AI was never the problem. We have been seeing a downgrade in software in general; AI just amplifies how badly you can build it. The real problem is people who just don't care about the craft pushing out human slop, whether because the business says "we can come back to that, don't worry" or what have you. At least with AI, my coming back to something is right here and right now, not never, or only once it causes a production-grade issue.
Regardless, knowing the syntax of a programming language or remembering some library API is a dead business.
I for one am quite happy to outsource this kind of simple memorisation to a machine. Maybe it's the thin end of the slippery slope? It doesn't FEEL like it is, but...
Long blog posts are cheap. Show me the prompt.