The framing of "is vibe coding a job requirement" conflates two things: the skill of coding-by-prompting, and the skill of knowing what you need to build.

The second one is genuinely underrated. Knowing your problem well enough to describe a working solution (the inputs, the logic, the outputs, who uses it) is hard to automate. Generating the actual app from that description is increasingly not.

We've been using Lyzr Architect (architect.new) for this; you describe the agentic app you want in plain English, it generates a full-stack React frontend + multi-agent backend, and it deploys a live URL. The "vibe coding" is more like a product-spec conversation than an IDE session. The people who are best at it aren't coders; they're people who understand their problem deeply.
I admit that vibe coding was kind of a clickbaity way to frame this, but I couldn't think of a better way to describe it. That might just underscore my ignorance in this domain.
One problem I personally have here is that I write code as a way to reason through and think about a problem. It's hard for me to grasp the best solution in a space until I try some things out first.
You're describing two different jobs, though. "What you need to build" is supposed to be done by the army of "product managers/owners" or whatever they're called, rather than by the programmer; the product managers'/owners' whole reason for existing is figuring out what to build, what not to build, and how the thing should work.
If you end up having engineers do the work of product people, you'd end up with the typical "engineered mess": maybe very fast, with lots of complicated abstractions so 80% of the codebase can be reused, but no user can make sense of it.
Add in LLMs that tend to never push back on adding/changing things, and you end up with deep technical debt really quickly.
Edit: Ugh, apparently you wrote your comment just to push your platform (https://hn.algolia.com/?dateRange=all&page=0&prefix=false&qu...), which is so trite. Apparently HN is for people to push ads about their projects now...
I don't think vibe coding is becoming mandatory, but writing software using AI assistance is. I find Salvatore Sanfilippo's distinction between vibe coding and 'automatic programming' useful. [0]
[0] https://antirez.com/news/159
We do not interview for this, nor care about it, despite using agentic and code-completion tooling heavily. It's not a deep technical skill like C++ that requires years of hands-on experience. Spend a few weeks getting comfortable with Claude Code and you're probably at about parity with most devs. Having that as a job requirement seems like sort of a red flag to me.
Oh definitely seems like it. In Australia, at least, I am seeing job ads from recruiters with titles like "AI Engineer" or asking for "LLM-assisted development" or "agentic development" and so on.
I noticed that some of these roles come from businesses that recently had layoffs and were now asking their staff to "do more with less" so not exactly places people would be eager to work at, unless they have to.
I don't know if this is the new norm but this craziness is not helped by the increase in the number of "AI influencers" pushing the hype. Unfortunately, I've been seeing this on HN a lot recently.
We won’t hire anybody moving forward who doesn’t have hands-on agentic programming experience. We’re in the traditionally slower moving GovTech space. I have to imagine this is now a common expectation in many sectors.
Teams where I work can use Claude Code, Codex, Cursor, and Copilot CLI. Internally, it seems like Claude Code and Codex are the more popular tools being used by most software teams.
If you're new to these tools, I highly recommend trying to build something with them in your free time. This space has evolved rapidly over the past few months. Anthropic is offering a special spring-break promotion where you can double the limits on weeknights and weekends on any of its subscription plans until the end of March.
Random question, but what are some issues you're facing with these? I'm just curious, because everyone in my org uses them but acts like they don't bring any productivity gains. Probably they're scared to admit that it's actually been super helpful, because otherwise they're out of a job.
> Random question, but what are some issues you're facing with these?
I’ve seen some folks who are quite productive with these tools, but there is a lot more slop too. On my team, on the same code base, you see two different team members producing vastly different results.
> On my team, on the same code base, you see two different team members producing vastly different results.
And if they use LLMs to assist, does the same thing happen?
> We won’t hire anybody moving forward who doesn’t have hands-on agentic programming experience.
This doesn't make a lot of sense to me even as someone who uses agentic programming.
I would understand not hiring people who are against the idea of agentic programming, but I'd take a skilled programmer (especially one who is good at code review and debugging) who never touched agentic/LLM programming (but knows they will be expected to use it) over someone with less overall programming experience (but some agentic programming experience) every single time.
I think people vastly oversell using agents as some sort of skill in its own right when the reality is that a skilled developer can pick up how to use the flows and tools on the timescale of hours/days.
I suspect it’s not about agentic coding being a special skill, and more about why a competent programmer wouldn’t have tried it by this point, and whether that is a sign of ideological objections that could cause friction with the team. Not saying I agree with that thinking, but I definitely see why a hiring manager could think that way.
If you aren't taking advantage of it, you are not a competent software engineer in 2026.
On the contrary. That's the only kind of competent software engineer in 2026. Competent engineers don't hand things off to the tool that generates terrible code really quickly.
Claude has been a big boost to my sense of competency. I get to point out so many poor solutions in slop PRs now
Right. Using Claude Code & friends is not some esoteric skill that needs years in the trenches to learn which magical incantations to utter.
You prompt it. That's it. Yes, there are better and worse ways of prompting; yes, there are techniques and Skills and MCP servers for maximizing usability; and yes, there are right ways to vibe code and wrong ways to vibe code. But it's not hard. At all.
And the last person I want to work with is the expert vibe coder who doesn't know the fundamentals well enough to have coded the same thing by hand.
Yeah, will they take someone who has two months of hands-on with Claude Code, just not someone with zero? Come on. I'll take a great programmer with zero who knows they'll need to use it over a mediocre programmer who's been doing it since Claude Code was released, and I expect to be better off for doing so within two weeks.
In fact, they want 10 years of vibe coding experience
The old copy/paste from StackOverflow was essentially vibe coding, it just took a bit more effort. I saw plenty of people Google their way to code that technically worked, without having any idea how or why.
If someone has been doing that for 10 years and learning nothing, that would be a huge red flag. One that will likely become more common as LLM usage increases.
They want people who don't get scared of getting their hands dirty. There is something like a perfectionism trap, and it is very difficult to manage.
hey, 10 years for a junior position; you're going to need like 25 years for a senior-level position.
I'd say that at this point, if your job involves computers and you aren't at least familiar with how you can leverage AI tools, you're basically admitting that you really enjoy the art of working with one hand tied behind your back.
That's not vibe coding. Imagine if you were hiring a chef and a candidate came in who'd never used a stove. Sure, technically there are other ways to heat food, but it would be a bit odd.
lol what. This is more like a chef who reheats TV dinners and sometimes tastes them in case there's mold.
Bizarre perspective. Imagine if you were hiring a chef and you found out they mostly do not go in the kitchen?
Depending on your standards, seems like a potential indicator of companies to avoid?
Personally, I still believe that despite AI being moderately useful and getting better over time, it's mostly only feasible for boilerplate work. I do wonder about these people claiming to produce millions of lines of code per day with AI: what are you actually building? If it's the Nth CRUD app then yeah, I see why... Chances are that, in the grand scheme of things, we don't really need that company to exist.
In roles that require more technical/novel work, AI just doesn't make the cut in my experience. Either it totally falls over or produces such bad results that it'd be quicker for a skilled dev to make it manually from scratch. I'd hope these types of companies are not hiring based on AI usage.
When doing more technical/novel work you can't vibe code it, but you can still use AI tools to make things faster. Having Claude implement small chunks with strict direction and oversight is underrated, in my opinion. Or just using it to search the codebase ("where is the code that does x"), implement tests, and do code review; all of those are helpful. There is a lot of e-peen measuring around vibe coding, but I think it's really not the most useful workflow; it's just chasing a dream.
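To make "small chunks with strict direction and oversight" concrete, here is a minimal illustrative sketch; the `Chunk` structure and the approval gate are hypothetical, not any tool's real API. The idea is simply that each narrowly scoped piece of work is reviewed by a human before the next one unblocks:

```python
from dataclasses import dataclass, field

@dataclass
class Chunk:
    """One narrowly scoped unit of work to hand to an assistant."""
    description: str        # strict direction: what to change and where
    approved: bool = False  # flipped to True only after human review

@dataclass
class ChunkQueue:
    chunks: list = field(default_factory=list)

    def next_unapproved(self):
        """The next chunk awaiting review; later chunks stay blocked."""
        for c in self.chunks:
            if not c.approved:
                return c
        return None

q = ChunkQueue([
    Chunk("Add a parse_config() helper in config.py, no other edits"),
    Chunk("Unit tests for parse_config() covering missing keys"),
])

first = q.next_unapproved()   # the assistant works on this chunk only
first.approved = True         # human reviewed the diff and signed off
second = q.next_unapproved()  # only now does the next chunk unblock
```

The gate is the point: the model never gets a second task until a person has looked at the first diff.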
Out of curiosity, can you elaborate on the size and nature of the "small chunks"? I'm very curious for examples!
Here’s another question. Has anyone been able to get an agent to produce reliable high quality code?
My first experience with it was a year ago and the tests it produced were so horrendously hard to maintain that I kinda gave up, but I imagine that things have gotten a lot better in the last year.
In the same way that typing is a job requirement. It's just how you interface with the code now.
A decent company wouldn't necessarily look for someone who can type faster or commit 100x more code like the vibers do, but at how well you understand the code.
I think recruiters would like to see what candidates will do when they get some free time. It is not really a lifestyle; it is a short amount of time. Maybe a couple of weekends, before they get some other work or get laid off.
E.g., nobody wants to continue working with someone who creates sound effects, a movie player, an operating system, etc.
> E.g., nobody wants to continue working with someone who creates sound effects, a movie player, an operating system, etc.
What do you mean by this?
This is personal. I wouldn't like to work with someone who is creating sound effects or rap songs, etc. Every sound is some piece of pain. Doing HTML web stuff is more useful. The internet is a good thing. No one knows how the internet works, still. Sound and movies make understanding it even more difficult. It is the click sound of the browser, originally.
So when you bring this up with your coworkers, do they tell you just what to do with or where to put your value judgments of their hobbies?
Well, the rap and audio plugins are created by some people. It's OK, but I am not OK with it. Having a musician career on Spotify is a lie; it is only vibe drag-and-drop or code. I prefer my coworkers doing the HTML-programming kind of vibe coding.
It's a valuable tool that I'd argue everyone is still figuring out how to use well, and the best practices keep changing rapidly. Even more so than when everyone was figuring out how to do software well in the first place. Almost all of the best practices are made up, not validated, and kind of magical thinking.
I couldn't imagine wanting to hire someone who doesn't use LLMs for coding unless they are bringing something very special to the table. It accelerates many coding tasks significantly. But you have to know the limitations to use them efficiently and that only comes with experience.
I don't understand why anyone would want to work for a firm that will not allow their employees to do their entire job properly, but will instead insist that they delegate a large portion of it via natural language of all things to some stochastic parrot from hell and hope after enough iterations of this that it will all somehow turn out okay. This sounds like a complete nightmare! To be frank, this entire situation is absurd and honestly sours me on the entire field.
At my company, we ask everyone in the hiring process about how they have used any kind of agentic coding tools.
We're not concerned about hiring for the 'skill' of using these things, but more as a culture check - we are a very AI-forward company, and we are looking for people who are excited to incorporate AI into their workflow. The best evidence for such excitement is when they have already adopted these tools.
Among the team, the expectation is that most code is being produced with AI, but there is no micromanager checking how much everyone is using the AI coding tools.
Jesus Christ. Imagine reading these comments even just a year ago.
Don’t know/care about coding with AI? You’re unhireable now. Grim.
HN has a disproportionate quantity of 23yo founders trying to vibe-code the "Stripe for pickleball gyms" or whatever. Don't fret about it
It's not a skill. Seriously. It's pretty simple to get started: just open Codex/Claude and give it a shot.
Assume that the people falling over themselves to issue apocalyptic proclamations in response to such a question are... likely not representative. (And may not even be software engineers.)
But man, I'm sure glad I left FAANG when I did. All this hysterical squawking over AI sounds utterly insufferable. If Claude was forced upon me at my job I would have likely crashed out at some point.
In 2026 this is being replaced by agent coordination. So the requirement becomes: experience coordinating multiple coding and/or chat models in a long project spread over multiple machines.
Building software with an LLM is easier than you imagine. I'd be surprised if you just didn't pick it up. No need to lie; just open Codex or Claude and give it a try.
Dead internet theory is really taking hold.
I feel like people genuinely don't understand what vibe coding means.
Just cause you're using an LLM doesn't mean you're "vibe coding".
I regularly use LLMs at work, but I don't "vibe-code", which is where you're just saying garbage to the model and blindly clicking accept on whatever is spit out from it.
I design, think about architecture, and write out all of my thoughts: expected example inputs, expected example outputs, etc. I write out pretty extensive prompts that capture all of that, and then ask for an improved prompt. I review that improved prompt to make sure it aligns with the requirements I've gathered.
I read the output like I'm doing a deep code review, and if I don't understand some code I make sure to figure it out before moving forward. I make sure that the change set is within the scope of the problem I'm trying to solve.
Excluding the pieces that augment the workflow, this is all the same stuff you would normally do. You're an engineer solving problems, and the domain you do it in happens to involve software and computers.
Writing out code has always been a means to an end. The productivity gains, if you actually give LLMs a shot and learn to use the tools, are real. So yes, pretty soon it's going to become expected at most places that you use the tools, the same way you've been expected to use a specific language, framework, or any other tool that greatly improves productivity.
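The prompt-building step of that workflow can be sketched as plain string assembly; the section headings and field names below are made up for illustration, not any tool's schema. The point is that the human decisions (design, examples, scope) are written down before any code is generated:

```python
def build_prompt(goal, architecture_notes, examples, constraints):
    """Assemble design thinking into a single structured prompt."""
    sections = [
        f"## Goal\n{goal}",
        f"## Architecture\n{architecture_notes}",
        # Each example pairs an expected input with its expected output,
        # as described in the workflow above.
        "## Examples\n" + "\n".join(
            f"- input: {i!r} -> expected: {o!r}" for i, o in examples
        ),
        f"## Constraints\n{constraints}",
    ]
    return "\n\n".join(sections)

prompt = build_prompt(
    goal="Add retry logic to the HTTP client wrapper",
    architecture_notes="Wrap send() only; no changes to call sites.",
    examples=[("503 once then 200", "success after 1 retry")],
    constraints="Max 3 retries, exponential backoff, stay in http_client.py",
)
```

From there, the "ask for an improved prompt" step is just feeding this text back to the model and reviewing what comes back against the gathered requirements.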
The accountability asymmetry feels like the real problem. The person prompting claims completion; the reviewer absorbs the cleanup. That gap exists because there's no record of what the agent actually decided — just the output, not the sequence that produced it. If you had a trace of tool calls and decision points, at least you'd know where the slop came from and who should own it. Right now review is just guessing backwards.
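For a sense of what such a trace could look like, here is a hypothetical sketch; no real agent emits exactly this structure. Each tool call is recorded with its stated rationale, so a reviewer can walk the sequence of decisions instead of guessing backwards from the diff:

```python
from dataclasses import dataclass, field

@dataclass
class TraceEvent:
    tool: str       # e.g. "edit_file", "run_tests"
    target: str     # file or command acted on
    rationale: str  # the agent's stated reason at that step

@dataclass
class AgentTrace:
    prompter: str                  # who issued the prompt, for accountability
    events: list = field(default_factory=list)

    def log(self, tool, target, rationale):
        self.events.append(TraceEvent(tool, target, rationale))

    def touching(self, path):
        """Every decision point that affected a given file:
        the reviewer's entry point when a change looks wrong."""
        return [e for e in self.events if e.target == path]

trace = AgentTrace(prompter="alice")
trace.log("edit_file", "billing.py", "refactor discount calc per prompt")
trace.log("edit_file", "utils.py", "added helper not requested in prompt")

# Review: which decisions produced the unexpected utils.py change?
suspect = trace.touching("utils.py")
```

With something like this, "where did the slop come from" becomes a lookup rather than archaeology, and the prompter's name is attached to the run.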
I think AI-assisted coding is a new job requirement, and the more I do it, the more I'm convinced it's going to wreck productivity. And it's not because these tools aren't as good as people say they are; it's because they're too good.
Everyone talks about vibe coding away all your dependencies, but the problem is that the people who are good with these tools, and do get 50% or greater productivity benefits, won't be able to empathize with the people who are bad with them and create all the slop.
I think AI encourages people to take side quests to solve easy problems and not focus on hard problems.
And that without domain expertise, problems will compound themselves. But I dunno; I agree that they're here to stay.
I'm already seeing people becoming enamored with, and proud of, output quantity over quality. There were always people off focusing on tangents and low-value efforts. Now they can drown an entire team or org in low-effort (or low-value) slop.
It'll require stronger and more frequent pushback to keep under control.
Agentic AI-assisted coding is an intrinsic part of the job now. Companies would be leaving lots of money on the table if they didn't take advantage of the 10x/50x/100x productivity gains. If you don't have the skills, learn. Shape up or ship out.
There's no 100x productivity gain. No company shipped their products 100x faster except their very first versions.
It depends on the task. There are certain tasks which are too tedious, time-consuming, and error-prone to be valuable for humans to perform, and couldn't be automated effectively until LLMs came along. Eric Raymond has cited a shortening of some tasks from weeks to hours. Andreas Kling managed to lift the JS runtime for his browser Ladybird to Rust from C++ in a couple of weeks, some 25,000 lines of code.
The productivity gains are real, and in some cases they are enormous. It is actively, profoundly stupid to pass on them. You need to learn how to work with AI.
Oh for sure, there are _many_ things we defer or value too much for the time they'd have taken otherwise.
But my point is, those are, by definition, lower value. Check how much a big company's revenue growth actually is (ultimately the only metric that's hard to game), and then see whether the situation changes.
Otherwise, I'm sure diff per person per day went up 10x. Output, in the sense I am talking about, is different.