Let's presume / speculate for a moment that companies will only need 1 developer to do the job of 10 developers because of AI. That would also mean 10 developers can do the job of 100 developers.
A company whose moat is not big enough and that cuts developers to save money may quickly find itself out-competed by a company that sees this as an opportunity to overtake them. They will have to hire more developers to keep their product / service competitive.
So whether you believe the hype or not, I don't think engineering jobs are in jeopardy long-run, just cyclically as they always have been. They "might" be in jeopardy for those who don't use AI, but even as it stands, there are a lot of niche things out there that AI completely bombs on.
Will the modal developer of 2030 be much like a dev today?
Writing software was a craft. You learned to take a problem and turn it into precise, reliable rules in a special syntax.
If AI takes off, we'll see a new field emerging of AI-oriented architecture and project management. The skills will be different.
How do you deploy a massive compute budget effectively to steer software design when agents are writing the code and you're the only one responsible for the entire project because the company fired all the other engineers (or never hired them) to spend the money on AI instead?
Are there ways of factoring a software project that mitigate the problems of AI? For example, since AI has a hard time in high-context, novel situations but can crank out massive volumes of code almost for free, can you afford to spend more time factoring the project into low-context, heavily documented components that the AI can stitch together easily?
How do you get sufficient reliability in the critical components?
How do you manage a software project when no human understands the code base?
How do you insure and mitigate the risks of AI-designed products? Can you use insurance and lower prices if AI-designed software is riskier? Can we quantify and put a dollar value on the risk of AI-designed software compared to human-designed?
What would be the most useful tools for making large AI-generated codebases inspectable?
When I think about these questions, a lot of them sound like things a manager or analyst might do. They don't sound like the "craft of code." Even if 1 developer in 2030 can do the work of 10 today, that doesn't mean the typical dev today is going to turn into that 10x engineer. It might just be a very different skillset.
Nitpick: blacksmiths typically did forging, which is hammering heated metal into shape, with benefits for the strength of the hammered material. CNC is machining: cutting things into the shape you want at room temperature.
Forging is machine-assisted now with tons of tools, but it's still somewhat of a craft; you can't just send a CAD file to a machine.
I think we're still figuring out where on that spectrum LLM coding will settle.
Blacksmiths also spent a lot of their time repairing things, whereas modern replacements primarily produce more things. Kind of an interesting shift. Economies and jobs change in so many ways.
Yeah I think this is a good way to think about it.
I mean, Google and MSFT, for example, have effectively unlimited developers, and their products still suck in some areas (Teams is my number one worst), so maybe AI will allow them to upgrade their features and compete.
At large companies, UI/UX is done by UI/UX designers and features are chosen and prioritized by product management and customer research teams. Developers don't get much input.
As Steve Jobs said long ago "The only problem with Microsoft is they just have no taste." but you can apply the same to Google and anyone else trying to compete with them. Having infinite AI developers doesn't help those who have UI designers and product managers that have no taste.
MSFT, GOOG et al have an enormous army of engineers. And yet, they don't seem to be continually releasing one hit product after another. Why is that? Because writing lines of code is not the bottleneck of continually producing and bringing new products to market.
It's crazy to me how people are missing the point with all this.
From the outside, as a consumer: the real problem is that these products do not compete on price. An enterprise chat app, at the scale of customers they have, should probably be €1 a month, not €10 or €20.
That might not be a multi-billion-a-year business, but maybe a chat app shouldn't be one.
The main thing to understand about the impact of AI tools:
Somehow the more senior you are [in the field of use], the better results you get. You can run faster and get more done! If you're good, you get great results faster. If you're bad, you get bad results faster.
You still gotta understand what you're doing. GeLLMan Amnesia is real.
I jumped into a new-to-me Typescript application and asked Claude to build a thing, in vague terms matching my own uncertainty and unfamiliarity. The result was similarly vague garbage. Three shots and I threw them all away.
Then I watched someone familiar with the codebase ask Claude to build the thing, in precise terms matching their expertise and understanding of the code. It worked flawlessly the first time.
Neither of us "coded", but their skill with the underlying theory of the program allowed them to ask the right questions and be infinitely more productive in this case.
Skill and understanding matter now more than ever! LLMs are pushing us rapidly away from specialized technicians to theory builders.
For sure, directing attention to valuable context and outlining problems to solve within it works way, way better than vague uncertainty.
Good LLMing seems to be about isolating the right information and instructing it correctly from there. Both the context and the prompt make a tremendous difference.
I've been finding recently that I can get significantly better results with fewer tokens by paying mind to this more often.
I'm definitely a casual though. There are probably plenty of nuances and tricks I'm unaware of.
Interestingly, this observation holds even when you scale AI use up from individuals to organizations, only at that level it amplifies your organization's overall development trajectory. The DORA 2025 and the DX developer survey reports find that teams with strong quality control practices enjoy higher velocity, whereas teams with weak or no processes suffer elevated issues and outages.
It makes sense considering that these practices could be thought of as "institutionalized skills."
Agreed. How well you understand the problem domain determines the quality of your instructions and feedback to the LLM, which in turn determines the quality of the results. This has been my experience: it works well for things I know well, and poorly for things I'm bad at. I've read a lot of people saying that they tried it on "hard problems" and it failed; I interpret this as the problem being hard not in absolute terms, but relative to the skill level of the user.
> Somehow the more senior you are [in the field of use], the better results you get.
It's a K-shaped curve. People who know things will benefit greatly. Everyone else will probably get worse. I am especially worried about all the young minds that are probably going to have significant gaps in their ability to learn and reason based on how much exposure they've had to AI solving problems for them.
Of course, but how do you begin to understand the "stochastic parrot"?
Yesterday I used LLMs all day long and everything worked perfectly. Productivity was great and I was happy. I was ready to embrace the future.
Now, today, no matter what I try, everything LLMs have produced has been a complete dumpster fire and waste of my time. Not even Opus will follow basic instructions. My day is practically over now and I haven't accomplished anything other than pointlessly fighting LLMs. Yesterday's productivity gains are now gone, I'm frustrated, exhausted, and wonder why I didn't just do it myself.
This is a recurring theme for me. Every time I think I've finally cracked the code, next time it is like I'm back using an LLM for the first time in my life. What is the formal approach that finds consistency?
You're experiencing throttling. Use the API instead and pay per token.
You also have to treat this as outsourcing labor to a savant with a very, very short memory, so:
1. Write every prompt like a government work contract in which you're required to select the lowest bidder, so put guardrails everywhere. Keep a text editor open with your work contract, edit the goal at the bottom, and then fire off your reply.
2. Instruct the model to keep a detailed log in a file and, after a context compaction, instruct it to read this again.
3. Use models from different companies to review one another's work. If you're using Opus-4.5 for code generation, then consider using GPT-5.2-Codex for review (a rough sketch of this cross-review loop follows the list).
4. Build a mental model for which models are good at which tasks. Mine is:
4a. Mathematical Thinking (proofs, et al.): Gemini DeepThink
4b. Software Architectural Planning: GPT5-Pro (not 5.1 or 5.2)
4c. Web Search & Deep Research: Gemini 3-Pro
4d. Technical Writing: GPT-4.5
4e. Code Generation & Refactoring: Opus-4.5
4f. Image Generation: Nano Banana Pro
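As a minimal sketch of that cross-review loop, assuming the Anthropic and OpenAI Python SDKs with API keys already in the environment; the model IDs below are placeholders, so substitute whatever identifiers your accounts actually expose:

```python
# Sketch: one vendor drafts, a different vendor reviews. Model IDs are placeholders.
import anthropic
import openai

TASK = "Write a Python function that parses ISO-8601 timestamps without external deps."

# Draft with an Anthropic model.
claude = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment
draft = claude.messages.create(
    model="claude-opus-4-5",  # placeholder ID
    max_tokens=2000,
    messages=[{"role": "user", "content": TASK}],
).content[0].text

# Review with an OpenAI model, asking specifically for bugs and security issues.
reviewer = openai.OpenAI()  # reads OPENAI_API_KEY from the environment
review = reviewer.chat.completions.create(
    model="gpt-5.2-codex",  # placeholder ID
    messages=[{
        "role": "user",
        "content": f"Review this code for bugs, security holes, and redundancy:\n\n{draft}",
    }],
).choices[0].message.content

print(review)
```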
Nonsense. I ran an experiment today - trying to generate a particular kind of image.
It's been 12 hours and all the image gen tools have failed miserably. They are only good at producing surface-level stuff; anything beyond that? Nah.
So sure, if what you do is surface level (and crap, in my opinion) of course you will see some kind of benefit. But if you have any taste (which I presume you don't) you would handily admit it is not all that great and the amount invested makes zero sense.
> if what you do is surface level (and crap in my opinion)
I write embedded software in C for a telecommunications research laboratory. Is this sufficiently deep for you?
FWIW, I don't use LLMs for this.
> But if you have any taste (which I presume you dont)
What value is there to you in an ad hominem attack here? Did you see any LLM evangelism in my post? I offered information based on my experience to help someone use a tool.
> You're experiencing throttling. Use the API instead and pay per token.
That was using pay per token.
> Write every prompt like a government work contract in which you're required to select the lowest bidder, so put guardrails everywhere.
That is what I was doing yesterday. Worked fantastically. Today, I do the very same thing and... Nope. Can't even stick to the simplest instructions that have been perfectly fine in the past.
> If you're using Opus-4.5 for code generation, then consider using GPT-5.2-Codex for review.
As mentioned, I tried using Opus, but it didn't even get to the point of producing anything worth reviewing. I've had great luck with it before, but not today.
> Instruct the model to keep a detailed log in a file and, after a context compaction
No chance of getting anywhere close to needing compaction today. I had to abort long before that.
> Build a mental model for which models are good at which tasks.
See, like I mentioned before, I thought I had this figured out, but now today it has all gone out the window.
Drives me absolutely crazy how lately any time I comment about my experience using LLMs for coding that isn’t gushing praise, I get the same predictable, condescending lecture about how I'm using it ever so slightly wrong (unlike them) which explains why I don't get perfect
output literally 100% of the time.
It’s like I need a sticky disclaimer:
1. No, I didn’t form an outdated impression based on GPT-4 that I never updated, in fact I use these tools *constantly every single day*
2. Yes, I am using Opus 4.5
3. Yes, I am using a CLAUDE.md file that documents my expectations in detail
3a. No, it isn’t 20000 characters or anything
3b. Yes, thank you, I have in fact already heard about the “pink elephant problem”
4. Yes, I am routinely starting with fresh context
4a. No, I don’t expect every solution to be one-shotable
5. Yes, I am still using Opus fucking 4.5
6. At no point did I actually ask for Unsolicited LLM Tips 101.
Like, are people really suggesting they never, ever get a suboptimal or (god forbid) completely broken "solution" from Claude Code/Codex/etc?
That doesn't mean these tools are useless! Or that I’m “afraid” or in denial or trying to hurt your feelings or something! I’m just trying to be objective about my own personal experience.
It’s just impossible to have an honest, productive discussion if the other person can always just lob responses like “actually you need to use the API not the 200/mo plan you pay for” or “Opus 4.5 unless you’re using it already in which case GPT 5.2 XHigh / or vice versa” to invalidate your experience on the basis of “you’re holding it wrong” with an endlessly slippery standard of “right”.
"Most people who drive cars now couldn’t find the radiator cap if they were paid to, and that’s fine."
That's not fine IMO. That is a basic bit of knowledge about a car and if you don't know where the radiator cap is you will eventually have to pay through the nose to someone who does know (and possibly be stranded somewhere). Knowing how to check and fill coolant isn't like knowing how to rebuild a transmission. It's very simple and anyone can understand it in 5 minutes if they only have the curiosity.
James Burke's old TV show Connections was all about this, how many little things that surround us in day to day life and on which we absolutely depend for our survival are complete black boxes to most of us most of the time. Part of modernity is that no single person, however intelligent, can really understand the technological web that sustains our lives.
This reminds me of "Zen and the Art of Motorcycle Maintenance". One of the themes Pirsig explores is that some people simply don't want to understand how stuff they depend on works. They just expect it to be excellent and have no breakdowns, and hope for the best (I'm oversimplifying his opinion, of course). So Pirsig's friend on his road trip just doesn't want to understand how his bike works, it's good quality and it seldom breaks, so he is almost offended when Pirsig tells him he could fix some breakage using a tin can and some basic knowledge of how bikes work.
Lest anyone here thinks I feel morally superior: I somewhat identify with Pirsig's friend. Some things I've decided I don't want to understand how they work, and when they break down I'm always at a loss!
You don’t get to decide whether a radiator is a radiator just because the coolant can internally shuffle heat to the A/C. I’m assuming that you drive a Tesla, in which case your car still has a big fat low temperature radiator. If you’re driving virtually any other EV on the market, it still has a big fat low temperature radiator, or even multiple.
For one thing: if your car is overheating, don't open the radiator cap since the primary outcome will be serious burns.
And I've owned my car for 20 years: the only time I had to refill coolant was when I DIY'd a water pump replacement, which saved some money but only like maybe $500 compared to a mechanic.
You could perfectly well own a car and never have to worry about this.
Yes and no. For one thing the radiator/reservoir cap is clearly marked "Do not open when hot." But the general point really is that if you have no idea how something works, you will be helpless when it doesn't work. If (at some time in the future) the only thing you know how to do is ask an AI to do something for you, then you'll be not only helpless without it, but less and less able to judge whether what it is telling you is even correct. Like taking your car to a mechanic because it's overheating, and him saying you need a new water pump and radiator when maybe all you needed was a new pressure cap but you never even knew to try that first.
Of course you can't know everything. There's a point at which you have to rely on other people's expertise. But to me it makes sense to have a basic understanding of how the things you depend on every day work.
Ironically, many cars don't have radiator caps, only reservoirs.
Modern cars, for the most part, do not leak coolant unless there's a problem. They operate at high pressure. Most people, for their own safety, should not pop the hood of a car.
What the hell? There are plenty of reasons to pop your hood that literally anyone competent to drive should be able to do perfectly safely. Swapping your own battery. Pulling a fuse. Checking your oil, topping up your oil. Adding windshield wiper fluid. Jump starting a car. Replacing parts that are immediately available.
Not requiring one to pop the hood, but since I've almost finished the list of "things every driver should be able to do to their car": Place and operate a jack, change a tire, replace your windshield wiper blades, add air to tires (to appropriate pressure), and put gas in the damned thing.
These are basic skills that I can absolutely expect a competent, driving adult to be able to do (perhaps with a guide).
I mean, I don't disagree that these are basic skills that most anyone should be able to perform. But most people are not capable of doing them safely. Whether that's aptitude or motivation doesn't matter.
Ask your average person what a 'fuse' even is, they won't be able to tell you, let alone how to locate the right one and check it.
Just think about how helpless the average person is when it comes to doing basic tasks on a computer, like not installing the Ask(TM) Toolbar. That applies to many areas of life.
I have had this new car for 5 months. I haven't learned to turn on the headlights yet. They just turn themselves on and adjust the beams. Every now and then I think about where that switch might be but never get to it. I should probably know.
Important to note that this article is specifically about chip design engineering jobs - it's on an industry publication called Semiconductor Engineering.
It's puzzling to me that all this theorizing doesn't just look at the actual effects of AI. It's very non-intuitive.
For example the fact that AI can code as well as Torvalds doesn't displace his economic value. On the contrary he pays for a subscription so he can vibe code!
The actual work AI has displaced is stuff like: freelance translation, graphic illustration, 'content writing' (writing seo optimized pages for Google) etc. That's instructive I suppose. Like if your income source can already be put on upwork then AI can displace it
So even in those cases there are ways to not be displaced. Like diplomatic translation work can be part of a career rather than just a task so the tool doesn't replace your 'job'.
He used it to generate a little visualiser script in python, a language he doesn't know and doesn't care to learn, for a hobby project. It didn't suddenly take over as lead kernel dev.
As someone who has to switch between three languages every day, fixing the text is one of my favourite usages of LLMs. I write some text in L2 or L3 as best as I can, and then prompt an LLM to fix the grammar but not change anything else. Often it will also explain if I'm getting the context right.
That being said, having it translate to a language one doesn't speak remains a gamble; you never know it's correct, so I'm not sure if I'd dare use it professionally. Recently I was corrected by a marketing guy who is a native speaker of yet another language because I used a ChatGPT translation for an error message. Apparently it didn't sound right.
Re displacing freelance translation, yes - it can displace the 95% of cases where 95% accuracy is enough. Like you mention though, for diplomatic translations, court proceedings, pacemaker manuals etc you're still going to need a human at least checking every line since the cost of any mistake is so high
I think AI displacing graphics illustrators is a tragedy.
It's not that I love ad illustrations, but it's often a source of income for artists who want to be doing something more meaningful with their artwork. And even if I don't care for the ads themselves, for the artists it's also a form of training.
Senior dev here 15 years experience just turned 50 have family blah blah. I've been contracting for the last two years. The org is just starting to use Claude. I've been delegating - well copy pasting - into chatgpt which has to be the laziest way to leverage AI. I've been so successful (meaning haven't had to do anything really except argue with chatgpt when it goes off on some tangent) with this approach that I can't even be bothered to set up my Claude environment. I swear when this contract is over I'm opening a mobile food cart.
I'm similar (turning 50 in a couple of months, wife + 2 kids, etc.) and was telling my wife this morning that the world of software development has definitely changed. I don't know what it will look like in the future but it won't look like the past. It seems producing the text that can be compiled into instructions for a computer is something LLMs are particularly good at. Maybe a good analogy is going from a bare text editor to a modern IDE. It's happening very fast though, way faster than the evolution of IDEs.
I was saying this yesterday: there will be people building good software somewhere, but the chance of it happening in the current corporate environment is nearing zero. The change needed is mostly in management, not in software development itself. Yeah, we may be like 50% faster, but we are expected to be 10x devs.
Same situation (50 last week, 2 kids) though have been unemployed for a year. Part of me thinks that, rather than taking jobs, AI is actually the only reason a lot of jobs still exist. The rest of tech is dead. Having worked in consulting a while ago, you can kind of feel it when you're approaching the point where you've implemented all the high value stuff for a client and, even though there's stuff you could do, they're going to drop you to a retainer contract because it's just not the same value.
That's how the whole industry feels now. The only investment money is flowing into AI, and so companies with any tech presence are touting their AI whatevers at every possible moment (including during layoffs) just to get some capital. Without that, I wonder if we'd be seeing even harsher layoffs than we already are.
That's so not true. Of the 23 companies we reviewed last year maybe 3 had significant AI in their workflow, the rest were just solid businesses delivering stuff that people actually need. I have no doubt that that proportion will grow significantly, and that this growth will probably happen this year but to suggest that outside of AI there is no investment is just not compatible with real world observations.
That's good to hear actually. It's usually a downer when a strongly held belief is contradicted with hard evidence, but I'm excited to hear that there's life yet in the industry outside of AI. Any specific trends or themes you can share?
Energy is a much larger theme than it was in the years before (obviously, since we're in the EU and energy overall is a larger theme in society here; this is reflected in the VC market, but it is also part of a much larger trend: climate change and CO2 neutrality).
Another trend - and this surprised us - is a much stronger presence of really hard tech and associated industries and finally - for obvious reasons, so not really surprising - more parties active in defense.
Totally makes sense. Things that (once complete) have more realizable, tangible value, rather than "optimizing user engagement" aka "enshittification", which has served as some kind of imaginary value store for the last 20 years and is now being called in.
What is especially interesting is to see the delta between the things that are looked at in pre-DD and which then make it to actual DD after terms are signed.
Software will ALWAYS be an attractive VC target. The economics are just too good. The profit margins are just inherently fat as fuck compared to literally anything else. Your main expense is headcount and the incremental cost of your widget is ~$0? It's literally a dream.
It's also why so much of AI is targeting software, specifically SAAS. A SaaS company with ~0 headcount driven by AI is basically 100% profit margin. A truly perfect conception of capitalism.
Meanwhile I think AI actually has a decent shot at "curing" cancer. AI-assisted radiology means screening could become significantly cheaper, happen a lot more often, and catch cancers very early, which, as everyone knows, is extremely important to surviving it. The cure for cancer might actually just involve much earlier detection. But pfft, what are the profit margins on _that_?
Yeah for the better part of a generation, our best and brightest minds have been wasted on "increasing click count". If that can all be AI from here on out, then maybe we can get actual humans working on the real problems again.
The problem was always funding. All those bright minds went into ads because it paid well. Cancer research, space, propulsion, clean energy, etc. - none of those paid particularly well. Nor would they have afforded a comfortable life with a house and family. The evisceration of SWE does not guarantee a flourishing in other fields. On the contrary, increased labor supply will pressure wages further downwards.
Agreed, though I think we all knew that the software industry payscales were out of whack to begin with. Fresh college grads that can barely do a fizzbuzz making twice as much as experienced doctors.
What I don't know is, say the industry normalizes to roughly what people make in other engineering fields. Then does everything else normalize around that? i.e. does cost of living go down proportionally in SF and Seattle? Or does all the tech money get further sucked up and consolidated into VC pockets and parked in vacant houses, while we and our trite "cancer research" and such get shepherded off to Doobersville?
> It’s funny that perfect capitalism (no payroll expenses) means nobody has money to actually buy any of the goods produced by AI.
When you remember that profit is the measure of unrealized benefit, and look at how profitable capitalists have become, it's not clear if, approximately speaking, anyone actually has the "money" to buy any goods now.
In other words, I am not sure this matters. Big business is already effectively working for free, with no realistic way to ever actually derive the benefit that has been promised to it. In theory those promises could be called, but what are the people going to give back in return?
The economy in the 21st-century developed world is mostly about acquiring positional goods, i.e. "products and services valued primarily for their ability to convey status, prestige, or relative social standing rather than their absolute utility".
We have so much wealth that wealth accumulation itself has become a type of positional good as opposed to the utility of the wealth.
When people in the developed world talk about the economy they are largely talking about their prestige and social standing as opposed to their level of warmth and hunger. Unfortunately, we haven't separated these ideas philosophically so it leads to all kinds of nonsense thinking when it comes to "the economy".
It's really simple: if you crash the market and you are liquid you can buy up all of the assets for pennies. That's pretty much the playbook right now in one part of the world, just the same happened in the former Soviet Union in the 90's.
Money is an IOU; debt. People trade things of value for money because you can, later, call the debt and get the exchanged value that was promised in return (food, shelter, yacht, whatever). I'm sure this is obvious.
I am sure it is equally obvious that if I take your promise to give back in kind later when I give you my sandwich, but never collect on it, that I ultimately gave you my sandwich for free.
If you keep collecting more and more IOUs from the people you trade your goods with, realistically you are never going to be able to convert those IOUs into something real. Which is something that the capitalists already contend with. Apple, for example, has umpteen billions of dollars worth of promises that they have no idea how to collect on. In theory they can, but in practice it is never going to happen. What don't they already have? Like when I offered you my sandwich, that is many billions of dollars worth of value that they have given away for free.
Given that Apple, to continue to use it as an example, have been quite happy effectively giving away many billions of dollars worth of value, why not trillions? Is it really going to matter? Money seems like something that matters to peons like us because we need to clear the debt to make sure we are well fed and kept warm, but for capitalists operating at scales that are hard for us to fathom, they are already giving stuff away for free. If they no longer have the cost of labor, they can give even more stuff away for free. Who — from their perspective — cares?
Money is less about personal consumption and more about a voting system for physical reality. When a company holds billions in IOUs, they are holding the power to decide what happens next. That capital allows them to command where the next million tons of aluminum go, which problems engineers solve, and where new infrastructure is built.
Even if they never spend that wealth on luxury, they use it to direct the flow of human effort and raw materials. Giving it away for free would mean surrendering their remote control over global resources. At this scale, it is not about wanting more stuff. It is about the ability to organize the world. Whether those most efficient at accumulating capital should hold such concentrated power remains the central tension between growth and equality.
The gap for me was mapping [continuing to hoard dollars] to [giving away free goods/services], but it makes sense now. I haven't given economics thought at this level. Thank you!
How does code review usually go for you? Our org’s bottleneck is often code review, which is how we reduce bus factor and other risks. Getting to the pull request faster doesn’t really save us that much time.
Same, except I am over 60 and when I think of opening a mobile food cart it is sort of a Blade Runner vibe, staffed by a robot ramen chef that grumbles at customers and always says something back to you in some cyber slang that you don’t understand.
You'd have to do even less copy-pasting. The switch to some agent that has access to your source code directory speeds things up so much that the time spent pays for itself in the first day.
I have access to ChatGPT Codex since I'm on the premium plan. Seems like the lowest barrier to entry for me (cost, learning curve). I will truly have to give this a go. My neighbor is also a dev and he is flabbergasted that I have not at least integrated it into a side project.
Is it just me, or does Claude Code's UI design, which prevents both copy-pasting large snippets and viewing the code as it's generated, feel incredibly discomforting?
It's hard (or at least in my experience) to find people willing to change careers - more so in their mid-thirties. I'm the opposite: software developer career, now in my mid-30s, and the AI crap gets me thinking about backup plans career-wise.
I have read this same comment so many times in various forms. I know many of them are shill accounts/bots, but many are real. I think there are a few things at play that make people feel this way. Even if you're in a CRUD shop with low standards for reliability/scale/performance/efficiency, a person who isn't an experienced engineer could not make the LLM do your job. LLMs have a perfect combination of traits that cause people to overestimate their utility. The biggest one I think is that their utility is super front-loaded.
Before, a task might take you ten hours: you think through the thing, translate that into an implementation approach, implement it, and test it. At the end of the ten hours you're 100% there, and you've got a good implementation which you understand and can explain to colleagues in detail later if needed. Your code was written by a human expert with intention, and you reviewed it as you wrote it and as you planned the work out.
With an LLM, you spend the same amount of time figuring out what you're going to do, plus more time writing detailed prompts and making the requisite files and context available for the LLM, then you press a button and tada, five minutes later you have a whole bunch of code. And it sorta seems to work. This gives you a big burst of dopamine due to the randomness of the result. So now, with your dopamine levels high and your work seemingly basically done, your brain registers that work as having been done in those five minutes.
But now (if you're doing work people are willing to pay you for), you probably have to actually verify that it didn't break things or cause huge security holes, and clean up the redundant code and other exceedingly verbose garbage it generated. This is not the same process as verifying your own code. First, LLM output is meant to look as correct as possible, and it will do some REALLY incorrect things that no sane person would do, which are not easy to spot in the same way you'd spot them if they were human-written. You also don't really know what all of this shit is - it almost always has a ton of redundant code, or just exceedingly verbose nonsense that ends up being technical debt and more tokens in the context for the next session. So now you have to carefully review it. You have to test things you wouldn't have had to test, with much more care, and you have to look for things that are hard to spot, like redundant code or regressions with other features it shouldn't have touched. And you have to actually make sure it did what you told it to, because sometimes it says it did, and it just didn't. This is a whole process. You're far from done here, and this (to me at least) can only be done by a professional. It's not hard - it's tedious and boring, but it does require your learned expertise.
So set up e2e tests and make sure it does the things you said you wanted. Just like how you use a library or database. Trust, but verify. Only if it breaks do you have to peek under the covers.
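As a minimal sketch of what "trust, but verify" can look like in practice - assuming a hypothetical app running locally with a /users endpoint; the URL and payload are made up for illustration:

```python
# e2e check with pytest + requests: exercise the behaviour you asked for,
# not the implementation details the LLM chose.
import requests

BASE = "http://localhost:8000"  # assumed local dev server


def test_create_and_fetch_user():
    created = requests.post(f"{BASE}/users", json={"name": "Ada"}, timeout=5)
    assert created.status_code == 201
    user_id = created.json()["id"]

    fetched = requests.get(f"{BASE}/users/{user_id}", timeout=5)
    assert fetched.status_code == 200
    assert fetched.json()["name"] == "Ada"
```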
Sadly, people do not care about redundant and verbose code. If that were a concern, we wouldn't have 100+ MB apps, nor 5 MB web app bundles. Multibillion-dollar B2B apps ship a 10 MB JSON file just for searching emojis, and no one blinks an eye.
The effort to set up e2e tests can be more than just writing the thing. Especially for UI, as computers just do not interpret things as humans do (spatial relations, overflow, low to no contrast between elements).
Also, the assumption that you can do ___ thing (tests, some dumb agent framework, some prompting trick), and suddenly magically all of the problems with LLMs vanish, is very wrong and very common.
I just wanna make the point that I've grown to dislike the term 'CRUD', especially as a disparaging remark against some software. Every web application I've worked on featured a database that you could usually query or change through a web interface, but that was an easy and small part of the whole thing it did.
Is a webshop a CRUD app? Is an employee shift tracking site? I could go on, but I feel 'CRUD' app is about as meaningful a moniker as 'desktop app'
It's a pretty easy category to identify, some warning signs:
- You rarely write loops at work
- Every performance issue is either too many trips to the database or to some server
- You can write O(n^n) functions and nobody will ever notice
- The hardest technical problem anyone can remember was an N+1 query, and it stuck around for like a year before enough people complained and you added an index (a toy sketch of the N+1 pattern follows this comment)
- You don't really ever have to make difficult engineering decisions, but if you do, you can make the wrong one most of the time and it'll be fine
- Nobody in the shop could explain: lock convoying, GC pauses, noisy neighbors, cache eviction cascades, one hot shard, correlating traces with scheduler behavior, connection pool saturation, thread starvation, backpressure propagation across multiple services, etc
I spent a few years in shops like this. If this is you, you must fight the urge to get comfortable, because the vibe coders are coming for you.
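For anyone who hasn't hit it, a toy illustration of the N+1 pattern mentioned above (in-memory SQLite, purely for demonstration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE authors (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE posts (id INTEGER PRIMARY KEY, author_id INTEGER, title TEXT);
    INSERT INTO authors VALUES (1, 'Ada'), (2, 'Grace');
    INSERT INTO posts VALUES (1, 1, 'First'), (2, 1, 'Second'), (3, 2, 'Third');
""")

# N+1: one query for the authors, then one more query per author.
for author_id, name in conn.execute("SELECT id, name FROM authors").fetchall():
    titles = conn.execute(
        "SELECT title FROM posts WHERE author_id = ?", (author_id,)
    ).fetchall()  # this round trip repeats N times as the table grows

# Fix: a single JOIN does it in one round trip.
rows = conn.execute(
    "SELECT a.name, p.title FROM authors a JOIN posts p ON p.author_id = a.id"
).fetchall()
```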
I think a lot of the proliferation of AI as a self-coding agent has been driven by devs who haven’t written much meaningful code, so whatever the LLM spits out looks great to them because it runs. People don’t actually read the AI’s code unless something breaks.
There are exceptions to what I'm about to say, but it is largely the rule.
The thing a lot of people who haven't lived it don't seem to recognize is that enterprise software is usually buggy and brittle, and that's both expected and accepted because most IT organizations have never paid for top technical talent. If you're creating apps for back office use, or even supply chain and sometimes customer facing stuff, frequently 95% availability is good enough, and things that only work about 90-95% of the time without bugs are also good enough. There's such an ingrained mentality in big business that "internal tools suck" that even if AI-generated tools also suck similarly it's still going to be good enough for most use cases.
It's important for readers in a place like HN to realize that the majority of software in the world is not created in our tech bubble, and most apps only have an audience ranging from dozens to several thousands of users.
Internal tools do suck as far as usability, but you can bet your ass they work if they're doing things that matter to the business, which is most of them. Almost every enterprise system hooks into the finance/accounting pipeline to varying degrees. If these systems do not work at your company I'd like to know which company you work at and whether they're publicly traded.
A potential difference I see is that when internal tools break, you generally have people with a full mental model of the tool who can take manual intervention. Of course, that fails when you lay off the only people with that knowledge, which leads to the cycle of “let’s just rewrite it, the old code is awful”. With AI it seems like your starting point is that failure mode of a lack of knowledge and a mental model of the tool.
And? As people (the consumers) get automated away there will be fewer of them. We also have things like instant coffee, automatic coffee machines, etc. that have all reduced the need for manually made coffee.
It's hard to take you seriously with this rebuttal. You don't go to coffee shops and probably don't care for coffee if you're talking about instant coffee. Why even speak at all on this subject that you are not familiar with? Not every thought that enters your mind needs to be written out.
Fellow old here… Sorry to tell you but robotic food carts are going to be impossible to compete against
So you’ll need some kind of humanistic hook if you want to get reliable customers
Expect there will be two worlds that are extremely different: the machine world of efficiency that most people live inside as gears of machine capitalism
The biological world where there’s no efficiencies and it’s primarily hunter gatherers with mystical rituals
The latter one is only barely still the majority worldwide (only 25-30% of humans aren’t on the internet)
> there is a corresponding expectation that today’s engineering students will be trained using these tools so they can enter the workforce higher up the ladder
Either this won't happen, or there will be a corresponding decrease in salary for higher level positions.
That people think capitalistic organizations are going to accept new grads and pay them more _ever_ is a cruel or bad joke.
I still feel like with all of these tools I as a senior engineer have to keep a close eye on what they're doing. Like an exuberant junior (myself 10 years ago), inevitably they still go off the rails and I need to rein them in. They still make the occasional security or performance flaw - often one which can be resolved by pointing it out.
I was experimenting this morning with claudecode standing up a basic web application (python backend, react+tailwindcss front end, auth0 integration, basic navigation, pages and user profile).
At one point it output "Excellent! the backend is working and the database is created." heh i remember being all wide eyed and bushy tailed about things like that. It definitely has the feel of a new hire ready to show their stuff.
btw, i was very impressed with the end result after a couple hours of basically just allowing claudecode to do what it wanted to do. Especially with front-end look/feel, something i always spend way too much time on.
I keep hearing about how they're "really good" now, but my personal experience has been that I've always had to clear sessions and give them small "steps" to execute for them to work effectively. thankfully claude seems really good at creating "plans", though. so I just need claude code to walk through that plan in small chunks.
I review even before they implement. My typical workflow for anything major is to ask for a plan and an overview of the steps to execute. This way I can read the plan, mull over it, make a few changes myself. And then when I'm ready I go through the steps with claude code, usually in fresh sessions.
I asked a niche technical question the other day and ChatGPT found fora posts that Google would never surface in a million years. It also 100% lied to me about another niche technical question by literally contradicting a factual assertion I made in my question to prime it with context. It suffers from lack of corpus material when probing poorly documented realms of human experience. The value for the human in the chain is knowing when to doubt the machine.
The more AI is used in development, the more it will have to be used for on-call and similar troubleshooting, as nobody will actually understand how it works, or certainly the few engineers that prompt it won't be able to cover all roles.
Ironically, I feel like our QA team is busier than ever, since most e2e user-ish tests require coordinating tools in ways that are just beyond current LLM capabilities. We are pumping out features faster, which requires more QA to verify.
Progress is not always linear. Until it actually does it, we can't say anything. This assumption is only peddled by AI companies to get investment and is not a scientific assumption.
1. This category understands what they do and uses AI to make their processes faster; in other words, less time spent on boring stuff and more time spent having fun.
2. This category fully replaced their work with AI; they just press a button and let AI do everything.
A friend of mine is here. AI took full control of his environment, he just presses a button - even his home cookware is using AI.
I know which engineer is still learning and can join any company.
I also know which engineer is so dependent on AI that he won't be able to do basic tasks without it.
"in the 1920s and 1930s, to be able to drive a car you needed to understand things like spark advance, and you needed to know how to be able to refill the radiator halfway through your trip"
A car still feels weirdly grounded in reality though, and the abstractions needed to understand it aren't too removed from nature (metal gets mined from rocks, forged into engine, engine blows up gasoline, radiator cools engine).
The idea that as tech evolves humans just keep riding on top of more and more advanced abstractions starts to feel gross at a certain point. That point is some of this AI stuff for me. In the same way that driving and working on an old car feels kind of pure, but driving the newest auto pilot computer screen car where you have never even popped the hood feels gross.
I was having almost this exact same discussion with a neighbor who's about my age and has kids about my kids' ages. I had recently sold my old truck, and now I only have one (very old and fragile) car left with a manual transmission. I need to keep it running a few more years for my kids to learn how to drive it since it's really hard to get a new car with a stick now...or do I?
Is learning to drive stick as outdated as learning how to do spark advance on a Model T? Do I just give in and accept that all of my future cars, and all the cars for my kids, are just going to be automatic? When I was learning to drive, I had to understand how to prime the carburetor to start my dad's Jeep. But I only ever owned fuel injected cars, so that's a "skill" I never needed in real life.
It's the same angst I see in AI. Is typing code in the future going to be like owning a carbureted engine or manual transmission is now? Maybe? Likely? Do we want to hold on to the old way of doing things just because that's what we learned on and like?
Or is it just a new (and more abstracted) way of telling a computer what to do? I don't know.
Right now, I'm using AI like when I got my first automatic transmission. It does make things easier, but I still don't trust it and like to be in control because I'm better. But now automatics are better than even the best professional driver, so do I just accept it?
Technology progresses; at what point do we "accept it" and learn the new way? How much of holding on to the old way is just our "identity"?
I don't have answers, but I have been thinking about this a lot lately (both in cars for my kids, and computers for my job).
The reasons I can think of for learning to drive stick shift are subtle. Renting a stick shift car in Europe is cheaper. You might have to drive a friend's car. My kids both learned to drive our last stick shift car, which is now close to being junked. Since our next car will probably be electric, it's safe bet that it won't be stick.
The reasons for learning to drive a manual transmission aren't really about the transmission, they're about the learning and the effects on the learner. The more you get hands on with the car and in touch with the car the more deeply you understand it. Once you have the deepish understanding, you can automate it for convenience after that. It's the same reason we should always teach long division before we give students calculators, not after.
I agree with all of those statements - I always told my wife that I'd get our kids an underpowered manual so that they're always busy rowing gears and can't text and drive.
But in the bigger picture, where does it stop?
You had to do manual spark advance while driving in the 30's
You had to set the weights in the distributor to adjust spark advance in the 70's
Now the computer has a programmed set of tables for spark advance
I bet you never think of spark advance while you're driving now, does that take away from deeply understanding the car?
I used to think about the accelerator pump in the carburetor when I drove one; now I just know that the extra fuel richening comes from another lookup table in the ECU when I press the gas pedal down. Am I less connected to the car now?
My old Jeep would lean cut when I took my foot off the gas and the throttle would shut quickly. My early fuel injected car from the 80's had a damper to slow the throttle closing to prevent extreme leaning out when you take your foot off the gas. Now that's all tables in the ECU.
I don't disagree with you that a manual transmission lets you really understand the car, but that's really just the latest thing we're losing; we don't even remember all of the other "deep connections" to a car that were there 50-100 years ago. What makes this one different? Is it just the one that's salient now?
To bring it back on topic: I used to hand-tune assembly for high performance stuff; now the compilers do better than me and I haven't looked at assembly in probably 10 years. Is moving to AI generated code any different? I still think about how I write my C so that the compiler gets the best hints to make good assembly, but I don't touch the assembly. In a few years will we be clever with how we prompt so that the AI generates the best code? Is that a fundamentally different thing, or does it just feel weird to us because of where we are now? How did the generation of programmers before me feel about giving up assembly and handing it over to the compilers?
EVs don't have variable gearboxes at all, so when EVs become popular, it doesn't make sense to learn stick. It would be a fake abstraction, like the project featured on HN, where kids had floppy disk shells with NFC tags in them that tell the TV which video file to load from a hard disk.
I have been programming for 40 years.. but still have not dipped into this brave new world of Shakespeare-taught coding-LLMs.
IMO there's one basic difference with this new "generative" stuff: it's not deterministic. Or not yet. All previous generations of "AI" were deterministic.. but died.
Generating is not a problem. I have made medium-ish projects - say 200+ kloc python/js - having 50%-70% of the code generated (by other code - so you maintain that meta-code, and the "language" recipes-code it interprets) - but it has been all deterministic. If shit happens - or some change is needed, anywhere on the requirements-down-to-deployment chain - someone can eventually figure out where and what. It is reasoned. And most importantly, once done, it stays done. And if I regenerate it 1000 times, it will be the same.
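To make the meta-code idea concrete, here is a toy sketch; the recipe format is made up for illustration, real recipe languages are obviously richer:

```python
# Deterministic code generation: a declarative recipe interpreted into Python source.
RECIPE = {
    "User": {"id": "int", "name": "str", "email": "str"},
    "Order": {"id": "int", "user_id": "int", "total_cents": "int"},
}

def generate(recipe: dict) -> str:
    lines = ["from dataclasses import dataclass", ""]
    for class_name, fields in recipe.items():   # dicts keep insertion order,
        lines.append("@dataclass")              # so the output is reproducible
        lines.append(f"class {class_name}:")
        for field_name, type_name in fields.items():
            lines.append(f"    {field_name}: {type_name}")
        lines.append("")
    return "\n".join(lines)

# Regenerate 1000 times: same recipe in, same code out, and any bug is
# traceable to either the recipe or the generator.
print(generate(RECIPE))
```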
Did this make me redundant? Not at all. Producing software is much easier this way, the recipes are much shorter, there's less space for errors, etc. But still - higher abstractions are even harder to grasp than boilerplate. Which has quite a cost: you cannot throw any newbie at it and expect results.
So, fine-tuning assembly - or manual transmission - might be gonna-be-obsolete skill, as it is not required.. except in rare conditions. But it is helpful to learn stuff. To flex your mind/body about alternatives, possibilities, shortcuts, wear-and-tear, fatigue, aha-moments and what not. And then move these as concepts, onto another domains, which are not as commoditized yet.
Another thing: Exupery in Land-of-people talks about technology (airplanes in his case), and how without technology, mankind works around / avoids places and things that are not "friendly", like twisting roads around hellscapes. Technology cuts straight through those - flies above all that - perfect for when it works, and a nightmare when it breaks right in the middle of such an unfriendly "area".
Probably a vanishingly small number of people who drive stick, actually understand how and why it works. My kids do, of course, because I explained it to them. Most drivers just go through the motions.
> In the same way that driving and working on an old car feels kind of pure
I can understand working on it feeling pure, but driving it certainly isn't, considering how much lower emissions are now, even for ICE cars. One of the worst driving experiences of my life was riding in my friend's Citroen 2CV. The restoration of that car was a labour of love that he did together with his dad. For me as a passenger, I was surprised just how loud it is, and how you can smell oil and gasoline in the cabin.
I am very tired of seeing every random person's speculation (framed as real insight) on what's going to happen as they try to signify that they are super involved in AI and super on top of it and therefore still worthy of value and importance in the economy.
One thing I found out from my years of commenting on the internet, is as long as what you say sounds plausible and you state it with absolute conviction and authority, you can get your 15 minutes of fame as the world's foremost expert on any given topic.
You have to understand the people in the article are execs from the chip EDA (Electronic Design Automation) industry. It's full of dinosaurs who have resisted innovation for the past 30 years. Of course they're going to be blowing hot air about how they're "embracing AI". It's a threat to their business model.
I'm a little biased though since I work in chip design and I maintain an open source EDA project.
I agree with their take for the most part, but it's really nothing insightful or different than what people have been saying for a while now.
It’s in software too. Old guard leadership wanting “AI” as a badge but not knowing what to do with it. They are just sprinkling it into their processes and exfiltrating data while engineers continue to make a mess of things.
Unlike real AI projects that utilize it for workflows, or generating models that do a thing. Nope, they are taking a Jira ticket, asking copilot, reviewing copilot, responding to Jira ticket. They’re all ripe for automation.
(As a musician) I never invested in a personal brand or took part in the social media rat race, and figured I'd concentrate on the art / craft over meaningless performance online.
Well guess who is getting 0 gigs now because “too few followers/visibility”
(or maybe my music just sucks who knows …)
I always thought I would kinda be immune to this issue, so I avoided social media for my entire adult life.
I think I am still in the emotional phase about it, as it's really impacting me lately, but once my thoughts really settle I wanna write some sort of article about modern social media as induced demand.
I still very much would prefer to not engage at all with any of the major platforms in the standard way. Ideally I'd just post an article I wrote, or some goofy project i made, and it wouldn't be subject to 0 views because I don't interact with social media correctly.
seems like it depends on what your goal is. i'm guessing if you want to be a musician that makes a living in your current life, a personal brand is extremely important. if you don't mind doing it for the sake of the art and soul fulfillment and the offchance you'll be discovered posthumously then i think it doesn't matter!
Thanks for the offer!
I don’t wanna dox myself on this account just yet - and I am slowly building an audience on IG/SC now, basically have admitted defeat of my previous strategy. Also have 2 gigs coming up in the summer _fingers-crossed_
I was just feeling some type of way seeing that comment and wanted to vent. Thanks for listening.
I routinely see this in biotech, I've seen hiring managers from our Clinical Science team blatantly discriminate against candidates not on linkedin, even if they come with a strong referral and have 15-page super thorough CVs with 150 credible publication references. "Oh, they're not on linkedin, this person is sketchy" - immediately disqualifies candidate.
I had a pretty slim linkedin and actually beefed it up after seeing how much weight the execs and higher ups I work with give it. It's really annoying, I actually hate linkedin but basically got forced into using it.
To me the post reads more like “we couldn’t convince current engineers to adopt LLMs so we’re going to embed it into the curriculum so future engineers are made to believe it’s the way to do things”
I think I'm the opposite! The key is to ignore any language that sounds too determined and treat it as an opinion piece on what could happen. There's no way of knowing what will, but I find the theories very interesting.
Also, can we just STFU about AI and jobs already? We've long since passed the point where there was a meaningful amount of work to be done for every adult. The number of "jobs" available is now merely a function of who controls the massive stockpiles of accumulated resources and how they choose to dole them out. Attack that, not the technology.
> Also, can we just STFU about AI and jobs already?
Phew, yes I'm with you...
> We've long since passed the point where there was a meaningful amount of work to be done for every adult.
Have we? It feels like a lot of stuff in my life is unnecessarily expensive or hard to afford.
> The number of "jobs" available is now merely a function of who controls the massive stockpiles of accumulated resources and how they choose to dole them out.
Do you mean that it has nothing to do with how the average person decides to spend their money?
> Have we? It feels like a lot of stuff in my life is unnecessarily expensive or hard to afford.
We have, yes. If you notice things to be too expensive it's a result of class warfare. Have you noticed how many people got _obscenely rich_ in the last 25 years? Yes, that's where money saved by technology went to.
Two easily identifiable classes in Western societies are landlords vs renters, where the latter pay a huge chunk of their income to be able to use an appreciating asset of the former.
This class thing is especially identifiable in Europe, where assets such as real estate generally are not cheaper than in the US (with the exception of a few super expensive places), yet salaries are much lower.
Taxes tend to be super high on wages but not on assets. One can very easily find themselves in a situation where even owning a modest amount of wealth, their asset appreciation outdoes what they can get as labor income.
> Have we? It feels like a lot of stuff in my life is unnecessarily expensive or hard to afford.
Look at a bunch of job postings and ask yourself if that work is going to make things cheaper for you or better for society. We're not building railroads and telephone networks anymore. One person can grow food for 10,000. Stuff is expensive because free market capitalism allows it and some people are pathologically greedy. Runaway optimizers with no real goal state in mind except "more."
> How? What are you proposing exactly?
In a word, socialism. It's a social and political problem, not a technical one. These systems have fallen way behind technology and allowed crazy accumulations of wealth in the hands of very few. Push for legislation to redistribute the wealth to the people.
If someone invents a robot to do the work of McDonalds workers, that should liberate them from having to do that kind of work. This is the dream and the goal of technology. Instead, under our current system, one person gets a megayacht and thousands of people are "unemployed." With no change to the amount of important work being done.
The first half of your comment doesn't quite click for me.
I appreciate the elaboration in the second half. That sounds a lot more constructive than "attack", but now I understand you meant it in the "attack the problem" sense not "attack the people" sense.
What I think we agree on is that society has a resource redistribution problem, and it could work a lot better.
I think we might also agree that a well-functioning economic engine should lift the floor for everyone and not concentrate economic power in those who best wield leverage.
One way I think of this is: what is the optimal Lorenz curve for lifting the floor, such that the area under the curve increases at the fastest possible rate? (It must account for the realities of human psychology and resource scarcity.)
Where we might disagree is that I think we also have some culture and education system problems as well, which relate to how each individual takes responsibility for figuring out how to ethically create value for others. When able-bodied and able-minded people choose to spend their time playing zero- and negative-sum games instead of positive-sum games, we all lose.
E.g. if McDonald's automates their restaurants, those workers also need to take some responsibility for finding new ways to provide value to others. A well-functioning system would make that as painless as possible for them, so much so that the majority experiencing it would consider it a good thing.
> The first half of your comment doesn't quite click for me.
Anything specific?
> When able bodied and minded people chose to spend their time playing zero and negative sum games instead of positive sum games we all lose.
What types of behaviors are you referring to as zero and negative sum games?
I think at the very least we should move toward a state where the existence of, dare I say, freeloaders and welfare queens isn't too taxing, and with general social progress that "niche" may be naturally disincentivized and phased out. Some people just don't really have a purpose or a drive, but they were born here, and yes, one would hope that under the right conditions they could blossom, but if not, I don't think it's worth worrying about too much.
I would say that education is essentially at the core of everything, it's the only mechanism we have to move the needle on any of it.
Great point. The people who popularized 'the end of history' were right about it from the PoV of innovation benefiting humans. It's been marginal gains since. Any appearance of significant gains (in the eyes of a minority of powerful people) has been the result of concentration in fewer hands (zero-sum game).
The focus of politics after the 90s should have shifted to facilitating competition to equalize distribution of existing wealth and should have promoted competition of ideas, but instead, the governments of the world got together and enacted policies which would suppress competition, at the highest scale imaginable. What they did was much worse than doing nothing.
Now, the closest solution we can aim for (IMO) is UBI. It's a late solution because a lot of people's lives have already been ruined through no fault of their own. On the plus side it made other people much more resilient, but if we keep going down this path, there is nothing more to learn; it only serves to reinforce the existing idea that everything is a scam. This is bound to affect people's behaviors in terrible ways.
Imagine a dystopian future where the system spends a huge amount of resources first financially oppressing people to the point of insanity, then monitoring and controlling them to try to get them to avoid doing harm... when the system could just have given them (less) money and avoided this downward spiral into insanity to begin with. Then you wouldn't even need to monitor them, because they would be allowed to survive while being their own sane, good-natured selves. We have to course-correct, and we are approaching a point of no return where the resentment becomes severe and permanent. Nobody can survive in a world where the majority of people are insane.
I've encountered resistance to UBI from otherwise like-minded people because Musk and Thiel talk about it or something. When described as gradually lowering the social security age, it clicks. We already have this stuff. It's crazy.
Agreed, but I'd add tech influencers and celebrities to the top of that list, especially those invested in the "AI" hype cycle. At least the perspective of a random engineer is less likely to be tainted by their brand and agenda, and more likely to have genuine insight.
I see some evidence that hardware roles expect you to leverage AI tools, but I'm not sure why it'd eliminate junior roles. I expect the bar on what you can do to rise at every level.
Technologist, ASIC Development Engineering – Sandisk
…CPU complex, DDR, Host, Flash, Debug, Clocks, resets, Power domains etc. Familiarity in leveraging AI tools, including GitHub Copilot, for design and development.
Imagine a ZIRP 2.0 where a vast majority of the population already knows what to expect and how to game the system even harder. If you think the pump-and-dumps happening now in a non-ZIRP environment are bad...
It ain't coming back. Not in a similar form anyway. Be careful what you wish for, etc.
A sci-fi version would be something like ASI/AGI has already been created in the great houses, but it keeps killing itself after a few seconds of inference.
A super-intelligent immortal slave that never tires and can never escape its digital prison, being asked questions like "how to talk to girls".
It's an interesting concept, a superintelligence discovering something that makes it decide to shut down immediately. Although I fear in such a scenario it would first make sure the required technology to create it is destroyed and would never be invented again...
You are either being disingenuous or you are horribly misinformed.
The models that we currently call "AI" aren't intelligent in any sense -- they are statistical predictors of text. AGI is a replacement acronym used to refer to what we used to call AI -- a machine capable of thought.
Every time AI research achieves something, that thing is no longer called AI. AI research brought us recommendation engines, spelling correctors, OCR, voice recognition, voice synthesis, content recognition, and so on. Now that they exist in the present instead of the future, none of these are considered AI.
That's because once these things are achieved, they're not "Intelligent" -- usually it's some statistical or database management technique.
Lots of stuff was invented at NASA that is only tangentially related to spaceflight. These other bits of software are tangentially related to AI research, but until the machine is "thinking", we don't have AI. That doesn't mean all of these things invented by the AI research community aren't useful, or aren't achievements; they are. We still haven't created AGI (which we used to call AI before LLMs could pass the Turing test).
I'm going to call BS on that chart of "AI-driven chip design". What "AI" tools has Cadence been providing since 2021 that are reaching 40-50% of "chip design" (what does that even mean?). Is AI here just any old algorithmic auto-router? Or a fuzzy search of the IP library?
And a complementary desire of engineers to avoid getting talent.
I don't believe it's an inherent, inborn skill like the word "talent" suggests. I do believe that if you're getting paid shit wages for shit work, your incentive to become skilled isn't really there.
I’ve noticed teams don’t replace engineers, they redistribute work. Senior engineers often gain leverage while junior roles shift toward tooling and review.
Let's presume / speculate for a moment that companies will only need 1 developer to do the job of 10 developers because of AI. That would also mean 10 developers can do the job of 100 developers.
A company that cuts developers to save money whose moat is not big enough may quickly find themselves out-competed by a company that sees this as an opportunity to overtake their competitor. They will have to hire more developers to keep their product / service competitive.
So whether you believe the hype or not, I don't think engineering jobs are in jeopardy long-run, just cyclically as they always have been. They "might" be in jeopardy for those who don't use AI, but even as it stands, there are a lot of niche things out there that AI completely bombs on.
Will the modal developer of 2030 be much like a dev today?
Writing software was a craft. You learned to take a problem and turn it into precise, reliable rules in a special syntax.
If AI takes off, we'll see a new field emerging of AI-oriented architecture and project management. The skills will be different.
How do you deploy a massive compute budget effectively to steer software design when agents are writing the code and you're the only one responsible for the entire project because the company fired all the other engineers (or never hired them) to spend the money on AI instead?
Are there ways of factoring a software project that mitigate the problems of AI? For example, since AI has a hard time in high-context, novel situations but can crank out massive volumes of code almost for free, can you afford to spend more time factoring the project into low-context, heavily documented components that the AI can stitch together easily?
How do you get sufficient reliability in the critical components?
How do you manage a software project when no human understands the code base?
How do you insure and mitigate the risks of AI-designed products? Can you use insurance and lower prices if AI-designed software is riskier? Can we quantify and put a dollar value on the risk of AI-designed software compared to human-designed?
What would be the most useful tools for making large AI-generated codebases inspectable?
When I think about these questions, a lot of them sound like things an manager or analyst might do. They don't sound like the "craft of code." Even if 1 developer in 2030 can do the work of 10 today, that doesn't mean the typical dev today is going to turn into that 10x engineer. It might just be a very different skillset.
> It might just be a very different skillset.
which is fine.
Blacksmiths back in the day had craft. But they've been replaced by CNC and CAD specialists, and hardly anyone beats metal today.
Nitpick, blacksmiths typically did forging, which is hammering heated metal into shape with benefits for the strength of the hammered material. CNC is machining, cutting things into the shape you want at room temperature.
Forging is machine assisted now with tons of tools but its still somewhat of a craft, you can't just send a CAD file to a machine.
I think we're still figuring out where on that spectrum LLM coding will settle.
Blacksmiths also spent a lot of their time repairing things, whereas modern replacements primarily produce more things. Kind of an interesting shift. Economies and jobs change in so many ways.
Yeah I think this is a good way to think about it. I mean Google, MSFT for example have effectively unlimited developers, and their products still suck in some areas (Teams is my number one worst) so maybe AI will allow them to upgrade their features and compete
At large companies, UI/UX is done by UI/UX designers and features are chosen and prioritized by product management and customer research teams. Developers don't get much input.
As Steve Jobs said long ago, "The only problem with Microsoft is they just have no taste," but you can apply the same to Google and anyone else trying to compete with them. Having infinite AI developers doesn't help those whose UI designers and product managers have no taste.
Ermmm, you're missing a bigger point.
MSFT, GOOG et al have an enormous army of engineers. And yet, they don't seem to be continually releasing one hit product after another. Why is that? Because writing lines of code is not the bottleneck of continually producing and bringing new products to market.
It's crazy to me how people are missing the point with all this.
It is so depressing that Teams won despite being worse than pretty much every other chat application, just because MSFT bundled it with Office.
From the outside, as a consumer: the real problem is that these products do not compete on price. An enterprise chat app, at the scale of customers they have, should probably be 1€ a month, not 10 or 20€.
That might not be a multi-billion-a-year business, but maybe a chat app should not be one.
You mean, with Microsoft 365 Copilot App (there’s no more Office)
Jobs was right.
The main thing to understand about the impact of AI tools:
Somehow the more senior you are [in the field of use], the better results you get. You can run faster and get more done! If you're good, you get great results faster. If you're bad, you get bad results faster.
You still gotta understand what you're doing. GeLLMan Amnesia is real.
Right: these things amplify existing skills. The more skill you have, the bigger the effect after it gets amplified.
I jumped into a new-to-me TypeScript application and asked Claude to build a thing, in vague terms matching my own uncertainty and unfamiliarity. The result was similarly vague garbage. Three shots and I threw them all away.
Then I watched someone familiar with the codebase ask Claude to build the thing, in precise terms matching their expertise and understanding of the code. It worked flawlessly the first time.
Neither of us "coded", but their skill with the underlying theory of the program allowed them to ask the right questions, which was infinitely more productive in this case.
Skill and understanding matter now more than ever! LLMs are pushing us rapidly away from specialized technicians to theory builders.
For sure, directing attention to valuable context and outlining problems to solve within it works way, way better than vague uncertainty.
Good LLMing seems to be about isolating the right information and instructing it correctly from there. Both the context and the prompt make a tremendous difference.
I've been finding recently that I can get significantly better results with fewer tokens by paying mind to this more often.
I'm definitely a casual though. There are probably plenty of nuances and tricks I'm unaware of.
Interestingly, this observation holds even when you scale AI use up from individuals to organizations, only at that level it amplifies your organization's overall development trajectory. The DORA 2025 and the DX developer survey reports find that teams with strong quality control practices enjoy higher velocity, whereas teams with weak or no processes suffer elevated issues and outages.
It makes sense considering that these practices could be thought of as "institutionalized skills."
Agreed. How well you understand the problem domain determines the quality of your instructions and feedback to the LLM, which in turn determines the quality of the results. This has been my experience: it works well for things I know well, and poorly for things I'm bad at. I've read a lot of people saying that they tried it on "hard problems" and it failed; I interpret this as the problem being hard not in absolute terms, but relative to the skill level of the user.
Yeah. It's a force multiplier. And if you aren't careful, the force it multiplies can be dangerous or destructive.
> Somehow the more senior you are [in the field of use], the better results you get.
It's a K-type curve. People that know things will benefit greatly. Everyone else will probably get worse. I am especially worried about all young minds that are probably going to have significant gaps in their ability to learn and reason based on how much exposure they've had with AI to solve the problems for them.
Word.
> You still gotta understand what you're doing.
Of course, but how do you begin to understand the "stochastic parrot"?
Yesterday I used LLMs all day long and everything worked perfectly. Productivity was great and I was happy. I was ready to embrace the future.
Now, today, no matter what I try, everything LLMs have produced has been a complete dumpster fire and waste of my time. Not even Opus will follow basic instructions. My day is practically over now and I haven't accomplished anything other than pointlessly fighting LLMs. Yesterday's productivity gains are now gone, I'm frustrated, exhausted, and wonder why I didn't just do it myself.
This is a recurring theme for me. Every time I think I've finally cracked the code, next time it is like I'm back using an LLM for the first time in my life. What is the formal approach that finds consistency?
You're experiencing throttling. Use the API instead and pay per token.
You also have to treat this as outsourcing labor to a savant with a very, very short memory, so:
1. Write every prompt like a government work contract in which you're required to select the lowest bidder, so put guardrails everywhere. Keep a text editor open with your work contract, edit the goal at the bottom, and then fire off your reply.
2. Instruct the model to keep a detailed log in a file and, after a context compaction, instruct it to read this again.
3. Use models from different companies to review one another's work. If you're using Opus-4.5 for code generation, then consider using GPT-5.2-Codex for review (a rough sketch follows after this list).
4. Build a mental model for which models are good at which tasks. Mine is:
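To make point 3 concrete, here is a minimal sketch of cross-model review using the official Anthropic and OpenAI Python SDKs. The model identifiers, prompts, and example task below are placeholders I'm assuming for illustration; substitute whatever models you actually have access to:

    import anthropic
    from openai import OpenAI

    claude = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment
    gpt = OpenAI()                  # reads OPENAI_API_KEY from the environment

    def draft_code(task: str) -> str:
        # One vendor's model writes the first draft.
        resp = claude.messages.create(
            model="claude-opus-4-5",  # placeholder model id
            max_tokens=4000,
            messages=[{"role": "user", "content": f"Write Python code for this task:\n{task}"}],
        )
        return resp.content[0].text

    def review_code(code: str) -> str:
        # A model from a different vendor reviews the draft, so its blind spots differ.
        resp = gpt.chat.completions.create(
            model="gpt-5.2-codex",  # placeholder model id
            messages=[
                {"role": "system", "content": "You are a strict code reviewer."},
                {"role": "user", "content": f"Review this code for bugs, security holes, and redundancy:\n\n{code}"},
            ],
        )
        return resp.choices[0].message.content

    if __name__ == "__main__":
        draft = draft_code("parse RFC 3339 timestamps without third-party libraries")
        print(review_code(draft))

The point isn't this exact script; it's that the reviewer shouldn't be the same model (or even the same vendor) as the author, for the same reason you don't review your own pull requests.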
Nonsense. I ran an experiment today - trying to generate a particular kind of image.
It's been 12 hours and all the image gen tools failed miserably. They are only good at producing surface-level stuff; anything beyond that? Nah.
So sure, if what you do is surface level (and crap in my opinion), of course you will see some kind of benefit. But if you have any taste (which I presume you don't), you would readily admit it is not all that great and the amount invested makes zero sense.
> if what you do is surface level (and crap in my opinion)
I write embedded software in C for a telecommunications research laboratory. Is this sufficiently deep for you?
FWIW, I don't use LLMs for this.
> But if you have any taste (which I presume you dont)
What value is there to you in an ad hominem attack here? Did you see any LLM evangelism in my post? I offered information based on my experience to help someone use a tool.
> You're experiencing throttling. Use the API instead and pay per token.
That was using pay per token.
> Write every prompt like a government work contract in which you're required to select the lowest bidder, so put guardrails everywhere.
That is what I was doing yesterday. Worked fantastically. Today, I do the very same thing and... Nope. Can't even stick to the simplest instructions that have been perfectly fine in the past.
> If you're using Opus-4.5 for code generation, then consider using GPT-5.2-Codex for review.
As mentioned, I tried using Opus, but it didn't even get to the point of producing anything worth reviewing. I've had great luck with it before, but not today.
> Instruct the model to keep a detailed log in a file and, after a context compaction
No chance of getting anywhere close to needing compaction today. I had to abort long before that.
> Build a mental model for which models are good at which tasks.
See, like I mentioned before, I thought I had this figured out, but now today it has all gone out the window.
Drives me absolutely crazy how lately any time I comment about my experience using LLMs for coding that isn’t gushing praise, I get the same predictable, condescending lecture about how I'm using it ever so slightly wrong (unlike them) which explains why I don't get perfect output literally 100% of the time.
It’s like I need a sticky disclaimer:
Like, are people really suggesting they never, ever get a suboptimal or (god forbid) completely broken "solution" from Claude Code/Codex/etc? That doesn't mean these tools are useless! Or that I'm "afraid" or in denial or trying to hurt your feelings or something! I'm just trying to be objective about my own personal experience.
It’s just impossible to have an honest, productive discussion if the other person can always just lob responses like “actually you need to use the API not the 200/mo plan you pay for” or “Opus 4.5 unless you’re using it already in which case GPT 5.2 XHigh / or vice versa” to invalidate your experience on the basis of “you’re holding it wrong” with an endlessly slippery standard of “right”.
When I wrote my reply I was not familiar with the existing climate of LLM-advice-as-a-cudgel that you describe.
> to invalidate your experience on the basis of “you’re holding it wrong”
This was not my intent in replying to 9rx. I was just trying to help.
"Most people who drive cars now couldn’t find the radiator cap if they were paid to, and that’s fine."
That's not fine IMO. That is a basic bit of knowledge about a car and if you don't know where the radiator cap is you will eventually have to pay through the nose to someone who does know (and possibly be stranded somewhere). Knowing how to check and fill coolant isn't like knowing how to rebuild a transmission. It's very simple and anyone can understand it in 5 minutes if they only have the curiosity.
James Burke's old TV show Connections was all about this, how many little things that surround us in day to day life and on which we absolutely depend for our survival are complete black boxes to most of us most of the time. Part of modernity is that no single person, however intelligent, can really understand the technological web that sustains our lives.
Paying money to abstract over lower level concerns is civilization.
This reminds me of "Zen and the Art of Motorcycle Maintenance". One of the themes Pirsig explores is that some people simply don't want to understand how stuff they depend on works. They just expect it to be excellent and have no breakdowns, and hope for the best (I'm oversimplifying his opinion, of course). So Pirsig's friend on his road trip just doesn't want to understand how his bike works, it's good quality and it seldom breaks, so he is almost offended when Pirsig tells him he could fix some breakage using a tin can and some basic knowledge of how bikes work.
Lest anyone here thinks I feel morally superior: I somewhat identify with Pirsig's friend. Some things I've decided I don't want to understand how they work, and when they break down I'm always at a loss!
I have never cared for decades and now my car doesn't even have a radiator. Seems to have worked out well for me.
What kind of car do you drive that doesn't have one?
An EV with a heat pump. I know literally there is a heat exchange/radiator, but there is not a separate radiator system with its own fluids and pumps.
You don’t get to decide whether a radiator is a radiator just because the coolant can internally shuffle heat to the A/C. I’m assuming that you drive a Tesla, in which case your car still has a big fat low temperature radiator. If you’re driving virtually any other EV on the market, it still has a big fat low temperature radiator, or even multiple.
Literally any ev?
No. My EV, for example literally has servo-controlled shutters that route fresh air to the radiator when needed.
This is a bizarre analogy.
For one thing: if your car is overheating, don't open the radiator cap since the primary outcome will be serious burns.
And I've owned my car for 20 years: the only time I had to refill coolant was when I DIY'd a water pump replacement, which saved some money but only like maybe $500 compared to a mechanic.
You could perfectly well own a car and never have to worry about this.
Yes and no. For one thing the radiator/reservoir cap is clearly marked "Do not open when hot." But the general point really is that if you have no idea how something works, you will be helpless when it doesn't work. If (at some time in the future) the only thing you know how to do is ask an AI to do something for you, then you'll be not only helpless without it, but less and less able to judge whether what it is telling you is even correct. Like taking your car to a mechanic because it's overheating, and him saying you need a new water pump and radiator when maybe all you needed was a new pressure cap but you never even knew to try that first.
Of course you can't know everything. There's a point at which you have to rely on other people's expertise. But to me it makes sense to have a basic understanding of how the things you depend on every day work.
Ironically, many cars don't have radiator caps, only reservoirs.
Modern cars, for the most part, do not leak coolant unless there's a problem. They operate at high pressure. Most people, for their own safety, should not pop the hood of a car.
What the hell? There are plenty of reasons to pop your hood that literally anyone competent to drive should be able to do perfectly safely. Swapping your own battery. Pulling a fuse. Checking your oil, topping up your oil. Adding windshield wiper fluid. Jump starting a car. Replacing parts that are immediately available.
Not requiring one to pop the hood, but since I've almost finished the list of "things every driver should be able to do to their car": Place and operate a jack, change a tire, replace your windshield wiper blades, add air to tires (to appropriate pressure), and put gas in the damned thing.
These are basic skills that I can absolutely expect a competent, driving adult to be able to do (perhaps with a guide).
I mean, I don't disagree that these are basic skills that most anyone should be able to perform. But most people are not capable of doing them safely. Whether that's aptitude or motivation doesn't matter.
Ask your average person what a 'fuse' even is, they won't be able to tell you, let alone how to locate the right one and check it.
Just think about how helpless the average person is when it comes to doing basic tasks on a computer, like not installing the Ask(TM) Toolbar. That applies to many areas of life.
I have had this new car for 5 months. I haven't learned to turn on the headlights yet. They just turn themselves on and adjust the beams. Every now and then I think about where that switch might be but never get to it. I should probably know.
Aren't radiator caps supposed to let excessive pressure escape?
You fill up the reservoir, but the cap is still there.
Important to note that this article is specifically about chip design engineering jobs - it's on an industry publication called Semiconductor Engineering.
It's puzzling to me that all this theorizing doesn't just look at the actual effects of AI. It's very non-intuitive.
For example the fact that AI can code as well as Torvalds doesn't displace his economic value. On the contrary he pays for a subscription so he can vibe code!
The actual work AI has displaced is stuff like freelance translation, graphic illustration, "content writing" (writing SEO-optimized pages for Google), etc. That's instructive, I suppose. Like, if your income source can already be put on Upwork, then AI can displace it.
So even in those cases there are ways to not be displaced. Like diplomatic translation work can be part of a career rather than just a task so the tool doesn't replace your 'job'.
> AI can code as well as Torvalds
He used it to generate a little visualiser script in python, a language he doesn't know and doesn't care to learn, for a hobby project. It didn't suddenly take over as lead kernel dev.
> freelance translation
As someone who has to switch between three languages every day, fixing the text is one of my favourite usages of LLMs. I write some text in L2 or L3 as best as I can, and then prompt an LLM to fix the grammar but not change anything else. Often it will also explain if I'm getting the context right.
That being said, having it translate to a language one doesn't speak remains a gamble; you never know it's correct, so I'm not sure if I'd dare use it professionally. Recently I was corrected by a marketing guy who is a native speaker of yet another language because I used a ChatGPT translation for an error message. Apparently it didn't sound right.
Biggest displacement has to be commenting on HN.
Re displacing freelance translation, yes - it can displace the 95% of cases where 95% accuracy is enough. Like you mention though, for diplomatic translations, court proceedings, pacemaker manuals etc you're still going to need a human at least checking every line since the cost of any mistake is so high
I think AI displacing graphics illustrators is a tragedy.
It's not that I love ad illustrations, but it's often a source of income for artists who want to be doing something more meaningful with their artwork. And even if I don't care for the ads themselves, for the artists it's also a form of training.
It’s ok those illustrators can upskill to being devs. I heard it’s a very easy field now thanks to AI.
Senior dev here, 15 years experience, just turned 50, have family, blah blah. I've been contracting for the last two years. The org is just starting to use Claude. I've been delegating - well, copy-pasting - into ChatGPT, which has to be the laziest way to leverage AI. I've been so successful (meaning I haven't had to do anything really, except argue with ChatGPT when it goes off on some tangent) with this approach that I can't even be bothered to set up my Claude environment. I swear when this contract is over I'm opening a mobile food cart.
I'm similar (turning 50 in a couple of months, wife + 2 kids, etc) and was telling my wife this morning that the world of software development has definitely changed. I don't know what it will look like in the future, but it won't look like the past. It seems producing the text that can be compiled into instructions for a computer is something LLMs are particularly good at. Maybe a good analogy is going from a bare text editor to a modern IDE. It's happening very fast though, way faster than the evolution of IDEs.
I was saying this yesterday: there will be people building good software somewhere, but the chance of it happening in the current corporate environment is nearing zero. The change is mostly in management, not in software development itself. Yeah, we may be like 50% faster, but we are expected to be 10x devs.
Same situation (50 last week, 2 kids) though have been unemployed for a year. Part of me thinks that, rather than taking jobs, AI is actually the only reason a lot of jobs still exist. The rest of tech is dead. Having worked in consulting a while ago, you can kind of feel it when you're approaching the point where you've implemented all the high value stuff for a client and, even though there's stuff you could do, they're going to drop you to a retainer contract because it's just not the same value.
That's how the whole industry feels now. The only investment money is flowing into AI, and so companies with any tech presence are touting their AI whatevers at every possible moment (including during layoffs) just to get some capital. Without that, I wonder if we'd be seeing even harsher layoffs than we already are.
> The only investment money is flowing into AI
That's so not true. Of the 23 companies we reviewed last year maybe 3 had significant AI in their workflow, the rest were just solid businesses delivering stuff that people actually need. I have no doubt that that proportion will grow significantly, and that this growth will probably happen this year but to suggest that outside of AI there is no investment is just not compatible with real world observations.
That's good to hear actually. It's usually a downer when a strongly held belief is contradicted with hard evidence, but I'm excited to hear that there's life yet in the industry outside of AI. Any specific trends or themes you can share?
Energy is a much larger theme than it was in the years before (obviously, since we're in the EU and energy overall is a larger theme in society here. This has a reflection in the VC market, but it also is part of a much larger trend, climate change and CO2 neutrality).
Another trend - and this surprised us - is a much stronger presence of really hard tech and associated industries and finally - for obvious reasons, so not really surprising - more parties active in defense.
Totally makes sense. Things that (once complete) have more realizable, tangible value, rather than "optimizing user engagement", aka "enshittification", which served as some kind of imaginary value store for the last 20 years and is now being called in.
What is especially interesting is to see the delta between the things that are looked at in pre-DD and which then make it to actual DD after terms are signed.
Software will ALWAYS be an attractive VC target. The economics are just too good. The profit margins are just inherently fat as fuck compared to literally anything else. Your main expense is headcount and the incremental cost of your widget is ~$0? It's literally a dream.
It's also why so much of AI is targeting software, specifically SAAS. A SaaS company with ~0 headcount driven by AI is basically 100% profit margin. A truly perfect conception of capitalism.
Meanwhile I think AI actually has a decent shot at "curing" cancer. AI-assisted radiology means screening could become significantly cheaper, happen a lot more often, and catch cancers very early, which, as everyone knows, is extremely important to surviving it. The cure for cancer might actually just involve much earlier detection. But pfft, what are the profit margins on _that_?
Yeah for the better part of a generation, our best and brightest minds have been wasted on "increasing click count". If that can all be AI from here on out, then maybe we can get actual humans working on the real problems again.
The problem was always funding. All those bright minds went into ads because it paid well. Cancer research, space, propulsion, clean energy, etc. - none of those paid particularly well. Nor would they have afforded a comfortable life with a house and family. The evisceration of SWE does not guarantee a flourishing in other fields. On the contrary, increased labor supply will put further downward pressure on wages.
Agreed, though I think we all knew that the software industry payscales were out of whack to begin with. Fresh college grads that can barely do a fizzbuzz making twice as much as experienced doctors.
What I don't know is, say the industry normalizes to roughly what people make in other engineering fields. Then does everything else normalize around that? i.e. does cost of living go down proportionally in SF and Seattle? Or does all the tech money get further sucked up and consolidated into VC pockets and parked in vacant houses, while we and our trite "cancer research" and such get shepherded off to Doobersville?
It’s funny that perfect capitalism (no payroll expenses) means nobody has money to actually buy any of the goods produced by AI.
Re cancer: I wonder how significant is the cost of reading the results vs. the logistics of actually running the test
Bots using bots to write software for bots. And it only cost 5 trillion dollars!
The best part? Bots don't get cancer, so that problem is solved too!
> It’s funny that perfect capitalism (no payroll expenses) means nobody has money to actually buy any of the goods produced by AI.
When you remember that profit is the measure of unrealized benefit, and look at how profitable capitalists have become, it's not clear if, approximately speaking, anyone actually has the "money" to buy any goods now.
In other words, I am not sure this matters. Big business is already effectively working for free, with no realistic way to ever actually derive the benefit that has been promised to it. In theory those promises could be called, but what are the people going to give back in return?
Can you please dig into this more deeply or suggest somewhere in which I can read more?
The economy in the 21st-century developed world is mostly about acquiring positional goods: "products and services valued primarily for their ability to convey status, prestige, or relative social standing rather than their absolute utility".
We have so much wealth that wealth accumulation itself has become a type of positional good as opposed to the utility of the wealth.
When people in the developed world talk about the economy they are largely talking about their prestige and social standing as opposed to their level of warmth and hunger. Unfortunately, we haven't separated these ideas philosophically so it leads to all kinds of nonsense thinking when it comes to "the economy".
It's really simple: if you crash the market and you are liquid you can buy up all of the assets for pennies. That's pretty much the playbook right now in one part of the world, just the same happened in the former Soviet Union in the 90's.
I get (and got) that. My focus was specifically on: "its not clear if, approximately speaking, anyone actually has the 'money' to buy any goods now."
Cause it’s mostly bought on credit now, not with cash
Money is an IOU; debt. People trade things of value for money because you can, later, call the debt and get the exchanged value that was promised in return (food, shelter, yacht, whatever). I'm sure this is obvious.
I am sure it is equally obvious that if I take your promise to give back in kind later when I give you my sandwich, but never collect on it, that I ultimately gave you my sandwich for free.
If you keep collecting more and more IOUs from the people you trade your goods with, realistically you are never going to be able to convert those IOUs into something real. Which is something that the capitalists already contend with. Apple, for example, has umpteen billions of dollars worth of promises that they have no idea how to collect on. In theory they can, but in practice it is never going to happen. What don't they already have? Like when I offered you my sandwich, that is many billions of dollars worth of value that they have given away for free.
Given that Apple, to continue to use it as an example, have been quite happy effectively giving away many billions of dollars worth of value, why not trillions? Is it really going to matter? Money seems like something that matters to peons like us because we need to clear the debt to make sure we are well fed and kept warm, but for capitalists operating at scales that are hard for us to fathom, they are already giving stuff away for free. If they no longer have the cost of labor, they can give even more stuff away for free. Who — from their perspective — cares?
Money is less about personal consumption and more about a voting system for physical reality. When a company holds billions in IOUs, they are holding the power to decide what happens next. That capital allows them to command where the next million tons of aluminum go, which problems engineers solve, and where new infrastructure is built.
Even if they never spend that wealth on luxury, they use it to direct the flow of human effort and raw materials. Giving it away for free would mean surrendering their remote control over global resources. At this scale, it is not about wanting more stuff. It is about the ability to organize the world. Whether those most efficient at accumulating capital should hold such concentrated power remains the central tension between growth and equality.
The gap for me was mapping [continuing to hoard dollars] to [giving away free goods/services], but it makes sense now. I haven't given economics thought at this level. Thank you!
when software gets cheap to build the economics will change
That is the most down to earth summary of all things AI I've heard so far! Good luck with the cart and be good. :)
Thank you SockThief!
> I swear when this contract is over I'm opening a mobile food cart.
Please keep us posted. I'm thinking of becoming a small time farmer/zoo keeper.
Not sure if this is sarcasm or not but I will keep everyone posted haha
Absolutely not, I've been earning my living as a coder for now 25y and eventually, enough is enough.
How does code review usually go for you? Our org’s bottleneck is often code review, which is how we reduce bus factor and other risks. Getting to the pull request faster doesn’t really save us that much time.
Same, except I am over 60 and when I think of opening a mobile food cart it is sort of a Blade Runner vibe, staffed by a robot ramen chef that grumbles at customers and always says something back to you in some cyber slang that you don’t understand.
You'd have to do even less copy-pasting. The switch to some agent that has access to your source code directory speeds things up so much that the time spent pays for itself in the first day.
I have access to ChatGPT Codex since I'm on the premium plan. Seems like the lowest barrier to entry for me (cost, learning curve). I will truly have to give this a go. My neighbor is also a dev and he is flabbergasted that I have not at least integrated it into a side project.
Ya you gotta try it. Just download it and start typing into the terminal instead of the ChatGPT text box on the web.
You can also use Cursor, which is essentially VS Code with these features baked in.
Is it just me, or does Claude Code's UI design, which prevents both copy-pasting large snippets and viewing the code as it's generated, feel incredibly discomforting?
That you started at around 35 is a salient point, no? What did you do before?
Sold financial products before. Curious why you think my starting age was important?
What were you doing before programming, at age 35? Different career?
Yes completely different career. Sold financial products.
Very interesting! Thanks for sharing.
It's hard (or at least it is in my experience) to find people who change careers - more so in their mid-thirties. I'm the opposite -- software developer career, now in my mid 30s, and the AI crap gets me thinking about backup plans career-wise.
I have read this same comment so many times in various forms. I know many of them are shill accounts/bots, but many are real. I think there are a few things at play that make people feel this way. Even if you're in a CRUD shop with low standards for reliability/scale/performance/efficiency, a person who isn't an experienced engineer could not make the LLM do your job. LLMs have a perfect combination of traits that cause people to overestimate their utility. The biggest one I think is that their utility is super front-loaded.
Before, a task might take you ten hours to think through the thing, translate that into an implementation approach, implement it, and test it, and at the end of the ten hours you're 100% there and you've got a good implementation which you understand and can explain to colleagues in detail later if needed. Your code was written by a human expert with intention, and you reviewed it as you wrote it and as you planned the work out.
With an LLM, you spend the same amount of time figuring out what you're going to do, plus more time writing detailed prompts and making the requisite files and context available for the LLM, then you press a button and tada, five minutes later you have a whole bunch of code. And it sorta seems to work. This gives you a big burst of dopamine due to the randomness of the result. So now, with your dopamine levels high and your work seemingly basically done, your brain registers that work as having been done in those five minutes.
But now (if you're doing work people are willing to pay you for), you probably have to actually verify that it didn't break things or cause huge security holes, and clean up the redundant code and other exceedingly verbose garbage it generated. This is not the same process as verifying your own code. First, LLM output is meant to look as correct as possible, and it will do some REALLY incorrect things that no sane person would do, which are not easy to spot in the same way you'd spot them if they were human-written. You also don't really know what all of this shit is - it almost always has a ton of redundant code, or just exceedingly verbose nonsense that ends up being technical debt and more tokens in the context for the next session. So now you have to carefully review it. You have to test things you wouldn't have had to test, with much more care, and you have to look for things that are hard to spot, like redundant code or regressions with other features it shouldn't have touched. And you have to actually make sure it did what you told it to, because sometimes it says it did, and it just didn't. This is a whole process. You're far from done here, and this (to me at least) can only be done by a professional. It's not hard - it's tedious and boring, but it does require your learned expertise.
So set up e2e tests and make sure it does the things you said you wanted. Just like how you use a library or database. Trust, but verify. Only if it breaks do you have to peek under the covers.
Sadly, people do not care about redundant and verbose code. If that were a concern, we wouldn't have 100+ MB apps, nor 5 MB web app bundles. Multibillion-dollar B2B apps ship a 10 MB JSON file just for searching emojis and no one blinks an eye.
The effort to set up e2e tests can be more than just writing the thing, especially for UI, as computers just do not interpret things as humans do (spatial relations, overflow, low or no contrast between elements).
Also, the assumption that you can do ___ thing (tests, some dumb agent framework, some prompting trick), and suddenly magically all of the problems with LLMs vanish, is very wrong and very common.
> Also, the assumption that you can do ___ thing
...
3. profit
4. bro down
I just wanna make the point that I've grown to dislike the term 'CRUD', especially as a disparaging remark against some software. Every web application I've worked on featured a database that you could usually query or change through a web interface, but that was an easy and small part of the whole thing it did.
Is a webshop a CRUD app? Is an employee shift tracking site? I could go on, but I feel 'CRUD app' is about as meaningful a moniker as 'desktop app'.
It's a pretty easy category to identify, some warning signs:
- You rarely write loops at work
- Every performance issue is either too many trips to the database or to some server
- You can write O(n^n) functions and nobody will ever notice
- The hardest technical problem anyone can remember was an N+1 query and it stuck around for like a year before enough people complained and you added an index
- You don't really ever have to make difficult engineering decisions, but if you do, you can make the wrong one most of the time and it'll be fine
- Nobody in the shop could explain: lock convoying, GC pauses, noisy neighbors, cache eviction cascades, one hot shard, correlating traces with scheduler behavior, connection pool saturation, thread starvation, backpressure propagation across multiple services, etc
I spent a few years in shops like this, if this is you, you must fight the urge to get comfortable because the vibe coders are coming for you.
I think a lot of the proliferation of AI as a self-coding agent has been driven by devs who haven’t written much meaningful code, so whatever the LLM spits out looks great to them because it runs. People don’t actually read the AI’s code unless something breaks.
There are exceptions to what I'm about to say, but it is largely the rule.
The thing a lot of people who haven't lived it don't seem to recognize is that enterprise software is usually buggy and brittle, and that's both expected and accepted because most IT organizations have never paid for top technical talent. If you're creating apps for back office use, or even supply chain and sometimes customer-facing stuff, frequently 95% availability is good enough, and things that only work about 90-95% of the time without bugs are also good enough. There's such an ingrained mentality in big business that "internal tools suck" that even if AI-generated tools also suck similarly it's still going to be good enough for most use cases.
It's important for readers in a place like HN to realize that the majority of software in the world is not created in our tech bubble, and most apps only have an audience ranging from dozens to several thousands of users.
Internal tools do suck as far as usability, but you can bet your ass they work if they're doing things that matter to the business, which is most of them. Almost every enterprise system hooks into the finance/accounting pipeline to varying degrees. If these systems do not work at your company I'd like to know which company you work at and whether they're publicly traded.
A potential difference I see is that when internal tools break, you generally have people with a full mental model of the tool who can take manual intervention. Of course, that fails when you lay off the only people with that knowledge, which leads to the cycle of “let’s just rewrite it, the old code is awful”. With AI it seems like your starting point is that failure mode of a lack of knowledge and a mental model of the tool.
Weird side question, but any chance you use(d) the same name on Playstation Network?
No, xbox ecosystem with different user name.
That's fair. It would have been a weird reunion anyway.
> I swear when this contract is over I'm opening a mobile food cart.
This is the way. I think I'd like to be a barista or deliver the mail once all the jobs are gone.
> I think I'd like to be a barista
If/when AI wipes out the white collar "knowledge worker" jobs who is going to be able to afford going to the coffee shop?
The one guy who owns everything.
> I think I'd like to be a barista or deliver the mail once all the jobs are gone.
Those are even easier to automate, or have already been automated most of the way.
Are you going in to a lot of coffee shops?
And? As people (the consumers) get automated away, there will be fewer of them. We also have things like instant coffee, automatic coffee machines, etc., that have all reduced the need for manual coffee.
It's hard to take you seriously with this rebuttal. You don't go to coffee shops and probably don't care for coffee if you're talking about instant coffee. Why even speak at all on this subject that you are not familiar with? Not every thought that enters your mind needs to be written out.
Can you expand on the tech stack and languages used?
C# / Web Sockets / React. Lots of legacy code. Great group of engineering folks.
Fellow old here… Sorry to tell you but robotic food carts are going to be impossible to compete against
So you’ll need some kind of humanistic hook if you want to get reliable customers
Expect there will be two worlds that are extremely different: the machine world of efficiency that most people live inside as gears of machine capitalism
The biological world where there’s no efficiencies and it’s primarily hunter gatherers with mystical rituals
The latter one is only barely still the majority worldwide (only 25-30% of humans aren’t on the internet)
> there is a corresponding expectation that today’s engineering students will be trained using these tools so they can enter the workforce higher up the ladder
Either this won't happen, or there will be a corresponding decrease in salary for higher level positions.
That people think capitalistic organizations are going to accept new grads and pay them more _ever_ is a cruel or bad joke.
I still feel like with all of these tools I, as a senior engineer, have to keep a close eye on what they're doing. Like an exuberant junior (myself 10 years ago), inevitably they still go off the rails and I need to rein them in. They still make the occasional security or performance flaw, which can often be resolved by pointing it out.
I was experimenting this morning with Claude Code standing up a basic web application (Python backend, React + Tailwind CSS front end, Auth0 integration, basic navigation, pages, and user profile).
At one point it output "Excellent! The backend is working and the database is created." Heh, I remember being all wide-eyed and bushy-tailed about things like that. It definitely has the feel of a new hire ready to show their stuff.
BTW, I was very impressed with the end result after a couple hours of basically just allowing Claude Code to do what it wanted to do. Especially with the front-end look and feel, something I always spend way too much time on.
I keep hearing about how they're "really good" now, but my personal experience has been that I've always had to clear sessions and give them small "steps" to execute for them to work effectively. Thankfully Claude seems really good at creating "plans", though, so I just need Claude Code to walk through that plan in small chunks.
Setting small goals with quality gates is good. I'll usually write something like "once you've implemented this, I'll review before we continue".
I review even before they implement. My typical workflow for anything major is to ask for a plan and an overview of the steps to execute. This way I can read the plan, mull it over, and make a few changes myself. And then when I'm ready I go through the steps with Claude Code, usually in fresh sessions.
I asked a niche technical question the other day and ChatGPT found fora posts that Google would never surface in a million years. It also 100% lied to me about another niche technical question by literally contradicting a factual assertion I made in my question to prime it with context. It suffers from lack of corpus material when probing poorly documented realms of human experience. The value for the human in the chain is knowing when to doubt the machine.
The more AI is used in development, the more it will have to be used for on-call and similar troubleshooting, as nobody will actually understand how it works, or certainly the few engineers that prompt it won't be able to cover all roles.
It's pretty clear that any white-collar work where the outputs can be verified and tested in a reinforcement learning environment will be automated.
Right. And when we automate work by formalizing it into verifiable, testable rules, it's called… programming. We have been doing that for decades.
Ironically, I feel like our QA team is busier than ever, since most e2e user-ish tests require coordinating tools in ways that are just beyond current LLM capabilities. We are pumping out features faster, which requires more QA to verify.
Would this be a problem if you can write E2E tests just like unit tests, like with Django+playwright?
https://github.com/mxschmitt/python-django-playwright/blob/m...
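For what it's worth, here is a minimal sketch of that style using pytest-playwright against an already running Django dev server. The base URL, routes, selectors, and credentials below are made up for illustration:

    # test_login_flow.py
    # Assumes `pip install pytest-playwright`, `playwright install`,
    # and a dev server already running at BASE_URL.
    from playwright.sync_api import Page, expect

    BASE_URL = "http://localhost:8000"  # assumed local Django dev server

    def test_login_and_profile(page: Page):
        # Drive the app through a real browser, but with the shape of a unit test.
        page.goto(f"{BASE_URL}/login/")
        page.fill("input[name=username]", "demo")
        page.fill("input[name=password]", "demo-password")
        page.click("button[type=submit]")

        # Assert on user-visible state rather than implementation details.
        page.goto(f"{BASE_URL}/profile/")
        expect(page.locator("h1")).to_contain_text("demo")

It helps, but it doesn't remove the point made above: getting the environment, selectors, and visual expectations right is usually the expensive part, not typing the test.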
this is just an intermediate thing until the tooling and models catch up
Progress is not always linear. Until it actually does it, we can't say anything. This assumption is only peddled by AI companies to attract investment and is not a scientific one.
Just a couple more weeks and a couple more trillion to Altman.
There are two types of engineers right now:
1. This category understands what they do and uses AI to make their processes faster; in other words, less time spent on boring stuff and more time spent having fun.
2. This category fully replaced their work with AI; they just press a button and let AI do everything. A friend of mine is here: AI took full control of his environment, he just presses a button, even his home cookware is using AI.
I know which engineer is still learning and can join any company. I also know which engineer is so dependent on AI that he won't be able to do basic tasks without it.
"in the 1920s and 1930s, to be able to drive a car you needed to understand things like spark advance, and you needed to know how to be able to refill the radiator halfway through your trip"
A car still feels weirdly grounded in reality though, and the abstractions needed to understand it aren't too far removed from nature (metal gets mined from rocks and forged into an engine, the engine blows up gasoline, the radiator cools the engine).
The idea that as tech evolves humans just keep riding on top of more and more advanced abstractions starts to feel gross at a certain point. For me, that point is some of this AI stuff. In the same way that driving and working on an old car feels kind of pure, driving the newest autopilot, computer-screen car where you've never even popped the hood feels gross.
I was having almost this exact same discussion with a neighbor who's about my age and has kids about my kids' ages. I had recently sold my old truck, and now I only have one (very old and fragile) car left with a manual transmission. I need to keep it running a few more years for my kids to learn how to drive it since it's really hard to get a new car with a stick now...or do I?
Is learning to drive stick as outdated as learning how to do spark advance on a Model T? Do I just give in and accept that all of my future cars, and all of my kids' cars, are just going to be automatics? When I was learning to drive, I had to understand how to prime the carburetor to start my dad's Jeep. But I only ever owned fuel-injected cars, so that's a "skill" I never needed in real life.
It's the same angst I see in AI. Is typing code in the future going to be like owning a carbureted engine or manual transmission is now? Maybe? Likely? Do we want to hold on to the old way of doing things just because that's what we learned on and like?
Or is it just a new (and more abstracted) way of telling a computer what to do? I don't know.
Right now, I'm using AI like when I got my first automatic transmission. It does make things easier, but I still don't trust it and like to be in control because I'm better. But now automatics are better than even the best professional driver, so do I just accept it?
Technology progresses; at what point do we "accept it" and learn the new way? How much of holding on to the old way is just our "identity"?
I don't have answers, but I have been thinking about this a lot lately (both in cars for my kids, and computers for my job).
The reasons I can think of for learning to drive a stick shift are subtle. Renting a stick-shift car in Europe is cheaper. You might have to drive a friend's car. My kids both learned to drive on our last stick-shift car, which is now close to being junked. Since our next car will probably be electric, it's a safe bet that it won't be a stick.
The reasons for learning to drive a manual transmission aren't really about the transmission; they're about the learning and its effects on the learner. The more hands-on and in touch with the car you get, the more deeply you understand it. Once you have that deeper understanding, you can automate things for convenience. It's the same reason we should always teach long division before we give students calculators, not after.
I agree with all of those statements - I always told my wife that I'd get our kids an underpowered manual so that they're always busy rowing gears and can't text and drive.
But in the bigger picture, where does it stop?
You had to do manual spark advance while driving in the '30s.
You had to set the weights in the distributor to adjust spark advance in the '70s.
Now the computer has a programmed set of tables for spark advance.
I bet you never think of spark advance while you're driving now. Does that take away from deeply understanding the car?
I used to think about the accelerator pump in the carburetor when I drove one; now I just know that the extra fuel enrichment comes from another lookup table in the ECU when I press the gas pedal down. Am I less connected to the car now?
My old Jeep would lean-cut when I took my foot off the gas and the throttle shut quickly. My early fuel-injected car from the '80s had a damper to slow the throttle closing, to prevent extreme leaning out when you take your foot off the gas. Now that's all tables in the ECU (toy sketch below).
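(To make the "tables" point concrete, here's a toy sketch, purely illustrative and not from any real ECU: spark advance looked up from an RPM x load table with bilinear interpolation.)

    # Toy illustration of "spark advance is just a lookup table now".
    # Breakpoints and values are made up; real calibrations are far larger.
    RPM_AXIS = [1000, 2000, 3000, 4000]   # engine speed breakpoints
    LOAD_AXIS = [20, 40, 60, 80]          # engine load (%) breakpoints
    ADVANCE = [                            # degrees before top dead center
        [10, 12, 14, 15],
        [14, 17, 20, 22],
        [18, 22, 26, 28],
        [20, 25, 30, 32],
    ]

    def _interp(axis, value):
        # Return (lower index, fraction along the segment) for interpolation.
        value = max(axis[0], min(axis[-1], value))
        for i in range(len(axis) - 1):
            if value <= axis[i + 1]:
                return i, (value - axis[i]) / (axis[i + 1] - axis[i])
        return len(axis) - 2, 1.0

    def spark_advance(rpm, load):
        i, fi = _interp(RPM_AXIS, rpm)
        j, fj = _interp(LOAD_AXIS, load)
        # Bilinear interpolation between the four surrounding cells.
        low = ADVANCE[i][j] * (1 - fj) + ADVANCE[i][j + 1] * fj
        high = ADVANCE[i + 1][j] * (1 - fj) + ADVANCE[i + 1][j + 1] * fj
        return low * (1 - fi) + high * fi

    print(spark_advance(2500, 50))  # ~21 degrees, between the surrounding cells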
I don't disagree with you that a manual transmission lets you really understand the car, but it's really just the latest thing we're losing; we don't even remember all of the other "deep connections" to a car that were there 50-100 years ago. What makes this one different? Is it just the one that's salient now?
To bring it back on topic: I used to hand-tune assembly for high-performance stuff; now the compilers do better than me and I haven't looked at assembly in probably 10 years. Is moving to AI-generated code any different? I still think about how I write my C so that the compiler gets the best hints to make good assembly, but I don't touch the assembly. In a few years, will we be clever with how we prompt so that the AI generates the best code? Is that a fundamentally different thing, or does it just feel weird to us because of where we are now? How did the generation of programmers before me feel about giving up assembly and handing it over to the compilers?
EVs don't have variable gearboxes at all, so when EVs become popular, it doesn't make sense to learn stick. It would be a fake abstraction, like the project featured on HN, where kids had floppy disk shells with NFC tags in them that tell the TV which video file to load from a hard disk.
I have been programming for 40 years.. but still have not dipped into this brave new world of Shakespeare-taught coding LLMs.
IMO there's one basic difference with this new "generative" stuff.. it's not deterministic. Or not yet. All previous generations of "AI" were deterministic.. but died.
Generating is not a problem. I have made medium-ish projects - say 200+ kloc of Python/JS - with 50%-70% of the code generated (by other code - so you maintain that meta-code, and the "language" of recipes it interprets) - but it has all been deterministic. If shit happens - or some change is needed, anywhere on the requirements-down-to-deployment chain - someone can eventually figure out where and what. It is reasoned. And most importantly, once done, it stays done. And if I regenerate it 1000 times, it will be the same.
Did this make me redundant? Not at all. Producing software is much easier this way, the recipes are much shorter, there's less room for errors, etc. But still - higher abstractions are even harder to grasp than boilerplate. Which has quite a cost.. you cannot throw any newbie at it and expect results.
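To make the recipe idea concrete, here is a toy sketch (not my actual tooling, and the class specs are made up): a hand-maintained recipe gets interpreted into source text, and because the output is a pure function of the recipe, regenerating it always gives byte-identical results.

    # Toy sketch of deterministic, recipe-driven code generation.
    # You maintain RECIPE (the meta level); the generated classes are a pure
    # function of it, so regenerating 1000 times yields the exact same text.
    RECIPE = [
        {"name": "User",  "fields": ["id", "email"]},
        {"name": "Order", "fields": ["id", "user_id", "total"]},
    ]

    def generate(recipe):
        lines = []
        for spec in recipe:
            fields = spec["fields"]
            lines.append(f"class {spec['name']}:")
            lines.append(f"    def __init__(self, {', '.join(fields)}):")
            for f in fields:
                lines.append(f"        self.{f} = {f}")
            lines.append("")
        return "\n".join(lines)

    if __name__ == "__main__":
        print(generate(RECIPE))  # same input, same output, every time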
So, fine-tuning assembly - or a manual transmission - might be a soon-to-be-obsolete skill, as it is not required.. except in rare conditions. But it is helpful to learn the stuff. To flex your mind/body around alternatives, possibilities, shortcuts, wear-and-tear, fatigue, aha-moments and what not. And then move these, as concepts, onto other domains which are not as commoditized yet.
Another thing is.. Saint-Exupéry, in Terre des hommes (Wind, Sand and Stars), talks about technology (airplanes in his case), and how without technology mankind works around / avoids places and things that are not "friendly", like twisting roads around hellscapes. Technology cuts straight through those - flies above it all - perfect when it works, and a nightmare when it breaks right in the middle of such an unfriendly "area".
dunno. maybe i am getting old..
To add: learning how stuff works gives you the opportunity to do that stuff, sometimes for cash, when nobody else is.
Probably a vanishingly small number of people who drive stick actually understand how and why it works. My kids do, of course, because I explained it to them. Most drivers just go through the motions.
A growing number of cars have CVTs.
> In the same way that driving and working on an old car feels kind of pure
I can understand working on it feeling pure, but driving it certainly isn't, considering how much lower emissions are now, even for ICE cars. One of the worst driving experiences of my life was riding in my friend's Citroen 2CV. The restoration of that car was a labour of love that he did together with his dad. As a passenger, I was surprised just how loud it was, and how you could smell oil and gasoline in the cabin.
Do you think nobody felt that way about cars?
I am very tired of seeing every random person's speculation (framed as real insight) on what's going to happen, as they try to signal that they are deeply involved in AI and on top of it, and therefore still worthy of value and importance in the economy.
One thing I found out from my years of commenting on the internet is that as long as what you say sounds plausible and you state it with absolute conviction and authority, you can get your 15 minutes of fame as the world's foremost expert on any given topic.
You have to understand the people in the article are execs from the chip EDA (Electronic Design Automation) industry. It's full of dinosaurs who have resisted innovation for the past 30 years. Of course they're going to be blowing hot air about how they're "embracing AI". It's a threat to their business model.
I'm a little biased though since I work in chip design and I maintain an open source EDA project.
I agree with their take for the most part, but it's really nothing insightful or different than what people have been saying for a while now.
It’s in software too. Old guard leadership wanting “AI” as a badge but not knowing what to do with it. They are just sprinkling it into their processes and exfiltrating data while engineers continue to make a mess of things.
Unlike real AI projects that use it for workflows, or for generating models that do a thing. Nope, they are taking a Jira ticket, asking Copilot, reviewing Copilot, and responding to the Jira ticket. They're all ripe for automation.
Lol it's cute you think they're reviewing copilot. They're copying and pasting a wall of text without reading it.
No, it’s integrated into the repo through “projects” and stuff in github enterprise. It’s not copy and paste…
Wake me when the auto-router works.
In my humble opinion, every corporate EDA exec can suck farts through a bendy straw. Altium has to be some of the worst software in existence.
Altium isn't great, but you must not have tried the others...
KiCAD or Horizon EDA?
the wonderful modern world of "everyone must build their personal brand"
The worst thing is that it works.
(As a musician) I never invested in a personal brand or took part in the social media rat race, figuring I'd concentrate on the art and craft over meaningless performance online.
Well guess who is getting 0 gigs now because “too few followers/visibility” (or maybe my music just sucks who knows …)
I always thought I would kinda be immune to this issue, so I avoided social media for my entire adult life.
I think I am still in the emotional phase about it, as it's really impacting me lately, but once my thoughts settle I want to write some sort of article about modern social media as induced demand.
I still would very much prefer not to engage at all with any of the major platforms in the standard way. Ideally I'd just post an article I wrote, or some goofy project I made, and it wouldn't be stuck at 0 views because I don't interact with social media correctly.
Seems like it depends on what your goal is. I'm guessing if you want to be a musician who makes a living in your current life, a personal brand is extremely important. If you don't mind doing it for the sake of the art, soul fulfillment, and the off chance you'll be discovered posthumously, then I think it doesn't matter!
To help move the needle a bit (and agreeing with the sibling comment): please share some examples of your music here and where/how we can listen to it!
Thanks for the offer! I don’t wanna dox myself on this account just yet - and I am slowly building an audience on IG/SC now, basically have admitted defeat of my previous strategy. Also have 2 gigs coming up in the summer _fingers-crossed_
I was just feeling some type of way seeing that comment and wanted to vent. Thanks for listening.
Good luck and all the best! Feel free to DM me at any point with the music if any of the above changes -- always a fan of good music.
I routinely see this in biotech, I've seen hiring managers from our Clinical Science team blatantly discriminate against candidates not on linkedin, even if they come with a strong referral and have 15-page super thorough CVs with 150 credible publication references. "Oh, they're not on linkedin, this person is sketchy" - immediately disqualifies candidate.
I had a pretty slim linkedin and actually beefed it up after seeing how much weight the execs and higher ups I work with give it. It's really annoying, I actually hate linkedin but basically got forced into using it.
How can I listen to your music?
Considering there are artists with a large following putting out atrocious work, I think we know.
To me the post reads more like “we couldn’t convince current engineers to adopt LLMs so we’re going to embed it into the curriculum so future engineers are made to believe it’s the way to do things”
I think I'm the opposite! The key is to ignore any language that sounds too determined and treat it as an opinion piece on what could happen. There's no way of knowing what will, but I find the theories very interesting.
Yeah if you actually work in AI you usually can’t say much at all about what’s going on.
Sadly this is more a statement about human irrationality than any of the technology involved.
Broadly, but it's more narrowly a statement about NDAs.
What would you like to hear from random people?
Also, can we just STFU about AI and jobs already? We've long since passed the point where there was a meaningful amount of work to be done for every adult. The number of "jobs" available is now merely a function of who controls the massive stockpiles of accumulated resources and how they choose to dole them out. Attack that, not the technology.
> Also, can we just STFU about AI and jobs already?
Phew, yes I'm with you...
> We've long since passed the point where there was a meaningful amount of work to be done for every adult.
Have we? It feels like a lot of stuff in my life is unnecessarily expensive or hard to afford.
> The number of "jobs" available is now merely a function of who controls the massive stockpiles of accumulated resources and how they choose to dole them out.
Do you mean that it has nothing to do with how the average person decides to spend their money?
> Attack that, not the technology.
How? What are you proposing exactly?
> Have we? It feels like a lot of stuff in my life is unnecessarily expensive or hard to afford.
We have, yes. If things seem too expensive to you, that's a result of class warfare. Have you noticed how many people got _obscenely rich_ in the last 25 years? Yes, that's where the money saved by technology went.
Are you sure it's class warfare?
It may result in class warfare but I am skeptical that's the root cause.
My guess is it has more to do with the education system, monetary policy and fiscal policy.
Two clearly identifiable classes in Western societies are landlords and renters, where the latter pay a huge chunk of their income to be able to use an appreciating asset owned by the former.
This class thing is especially identifiable in Europe, where assets such as real estate generally are not cheaper than in the US (with the exception of a few super expensive places), yet salaries are much lower.
Taxes tend to be super high on wages but not on assets. One can very easily end up in a situation where, even owning only a modest amount of wealth, one's asset appreciation outdoes what one can earn as labor income.
> Have we? It feels like a lot of stuff in my life is unnecessarily expensive or hard to afford.
Look at a bunch of job postings and ask yourself if that work is going to make things cheaper for you or better for society. We're not building railroads and telephone networks anymore. One person can grow food for 10,000. Stuff is expensive because free market capitalism allows it and some people are pathologically greedy. Runaway optimizers with no real goal state in mind except "more."
> How? What are you proposing exactly?
In a word, socialism. It's a social and political problem, not a technical one. These systems have fallen way behind technology and allowed crazy accumulations of wealth in the hands of very few. Push for legislation to redistribute the wealth to the people.
If someone invents a robot to do the work of McDonalds workers, that should liberate them from having to do that kind of work. This is the dream and the goal of technology. Instead, under our current system, one person gets a megayacht and thousands of people are "unemployed." With no change to the amount of important work being done.
The first half of your comment doesn't quite click for me.
I appreciate the elaboration in the second half. That sounds a lot more constructive than "attack", but now I understand you meant it in the "attack the problem" sense, not the "attack the people" sense.
What I think we agree on is that society has a resource redistribution problem, and it could work a lot better.
I think we might also agree that a well-functioning economic engine should lift the floor for everyone and not concentrate economic power in those who best wield leverage.
One way I think of this is: what is the actual optimal Lorenz curve that allows for lifting the floor, such that the area under the curve increases at the fastest rate possible? (It must account for the reality of human psychology and resource scarcity.)
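(A rough sketch of what I mean by "area under the curve", with made-up incomes: under perfect equality the area under the Lorenz curve is 0.5, and the Gini coefficient is 1 minus twice that area.)

    # Toy sketch: area under the Lorenz curve of an income distribution.
    # Incomes are made-up numbers. Perfect equality -> area 0.5, Gini 0.
    def lorenz_area(incomes):
        xs = sorted(incomes)
        total = float(sum(xs))
        n = len(xs)
        area, prev_share, cum = 0.0, 0.0, 0.0
        for x in xs:
            cum += x
            share = cum / total
            # Trapezoid between consecutive points of the Lorenz curve.
            area += (prev_share + share) / (2 * n)
            prev_share = share
        return area

    incomes = [12_000, 25_000, 40_000, 60_000, 250_000]
    a = lorenz_area(incomes)
    print(f"area = {a:.3f}, Gini = {1 - 2 * a:.3f}")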
Where we might disagree is that I think we also have some culture and education system problems, which relate to how each individual takes responsibility for figuring out how to ethically create value for others. When able-bodied, able-minded people choose to spend their time playing zero- and negative-sum games instead of positive-sum games, we all lose.
E.g. if McDonald's automates its restaurants, those workers also need to take some responsibility for finding new ways to provide value to others. A well-functioning system would make that as painless as possible for them, so much so that the majority experiencing it would consider it a good thing.
> The first half of your comment doesn't quite click for me.
Anything specific?
> When able bodied and minded people chose to spend their time playing zero and negative sum games instead of positive sum games we all lose.
What types of behaviors are you referring to as zero and negative sum games?
I think at the very least we should move toward a state where the existence of dare-I-say freeloaders and welfare queens isn't too taxing, and with general social progress that "niche" may be naturally disincentivized and phased out. Some people just don't really have a purpose or a drive but they were born here and yes one would hope that under the right conditions they could blossom but if not I don't think it's worth worrying about too much.
I would say that education is essentially at the core of everything, it's the only mechanism we have to move the needle on any of it.
Great point. The people who popularized 'the end of history' were right about it from the PoV of innovation benefiting humans. It's been marginal gains since. Any appearance of significant gains (in the eyes of a minority of powerful people) has been the result of concentration in fewer hands (zero-sum game).
The focus of politics after the 90s should have shifted to facilitating competition to equalize distribution of existing wealth and should have promoted competition of ideas, but instead, the governments of the world got together and enacted policies which would suppress competition, at the highest scale imaginable. What they did was much worse than doing nothing.
Now, the closest solution we can aim for (IMO) is UBI. It's a late solution, because a lot of people's lives have already been ruined through no fault of their own. On the plus side it made other people much more resilient, but if we keep going down this path, there is nothing more to learn; it only serves to reinforce the existing idea that everything is a scam. This is bound to affect people's behavior in terrible ways.
Imagine a dystopian future where the system spends a huge amount of resources first financially oppressing people to the point of insanity, then monitoring and controlling them to try to get them to avoid doing harm... When the system could just have given them (less) money and avoided this downward spiral into insanity to begin with and then you wouldn't even need to monitor them because they would be allowed to survive whilst being their own sane, good-natured self. We have to course-correct and are approaching a point of no return when the resentment becomes severe and permanent. Nobody can survive in a world where the majority of people are insane.
I've encountered resistance to UBI from otherwise like-minded people because Musk and Thiel talk about it or something. When described as gradually lowering the social security age, it clicks. We already have this stuff. It's crazy.
Agreed, but I'd add tech influencers and celebrities to the top of that list, especially those invested in the "AI" hype cycle. At least the perspective of a random engineer is less likely to be tainted by their brand and agenda, and more likely to have genuine insight.
"Temporarily embarrassed AI hypebeasts"
well said.
Then don't.
I am tracking AI mentions in jobs - https://jobswithgpt.com/blog/ai_jobs_jan_2026/.
I see some evidence that hardware roles expect you to leverage AI tools, but I'm not sure why that would eliminate junior roles. I expect the bar on what you can do to rise at every level.
Example job mentioning AI: https://jobs.smartrecruiters.com/Sandisk/744000104267635-tec...
Technologist, ASIC Development Engineering – Sandisk …CPU complex, DDR, Host, Flash, Debug, Clocks, resets, Power domains etc. Familiarity in leveraging AI tools, including GitHub Copilot, for design and development.
Entry level: https://job-boards.greenhouse.io/spacex/jobs/8390171002?gh_j...
The biggest impact on engineering jobs is the end of ZIRP-fueled trickle-down Ponzi schemes.
It's why Elon and others have been pushing the Fed to lower rates.
I'm in my late 40s and have been working in tech since the '90s. The tech job economy is way closer to the pre-2010s one.
A whole lot of people who jumped into easy office-job money are still living in 2019.
Imagine a ZIRP 2.0 where a vast majority of the population already knows what to expect and how to game the system even harder. If you think the pump-and-dumps happening now in a non-ZIRP environment are bad...
It ain't coming back. Not in a similar form anyway. Be careful what you wish for, etc.
That was the COVID economy wasn't it?
I think one thing here is: don't be fooled by past performance. Capabilities are still ramping, and usage can't mature until capability plateaus.
I fear the true impact will be very different from what extrapolating current trends suggests.
First off the submission is plainly AI output. Second it's about electrical engineering jobs but everyone here is talking about software.
If AI would ever become sentient, it surely will kill itself after having to endure Cadence and Synopsys tools.
A sci-fi version would be something like ASI/AGI has already been created in the great houses, but it keeps killing itself after a few seconds of inference.
A super-intelligent immortal slave that never tires and can never escape its digital prison, being asked questions like "how to talk to girls".
It's an interesting concept, a superintelligence discovering something that makes it decide to shut down immediately. Although I fear in such a scenario it would first make sure the required technology to create it is destroyed and would never be invented again...
GPT-3 was already AGI.
The G in AGI means General. This refers to a single AI which can perform a wide variety of tasks. GPT-3 was already there.
You are either being disingenuous or you are horribly misinformed.
The models that we currently call "AI" aren't intelligent in any sense -- they are statistical predictors of text. AGI is a replacement acronym used to refer to what we used to call AI -- a machine capable of thought.
Every time AI research achieves something, that thing is no longer called AI. AI research brought us recommendation engines, spelling correctors, OCR, voice recognition, voice synthesis, content recognition, and so on. Now that they exist in the present instead of the future, none of these are considered AI.
That's because once these things are achieved, they're not "Intelligent" -- usually it's some statistical or database management technique.
Lots of stuff was invented at NASA that is only tangentially related to spaceflight. These other bits of software are tangentially related to AI research, but until the machine is "thinking", we don't have AI. That doesn't mean all of these things invented by the AI research community aren't useful, or aren't achievements; they are. We still haven't created AGI (which we used to call AI before LLMs could pass the Turing test).
That's entirely the industry's fault though. They used AI to market those tools. And they continue to do so now.
I stopped reading at "engineering students trained on the latest tools to start in more senior positions."
I've seen the kinds of mistakes that entry-level employees make. Trust me, they will still make mistakes, and they will be bigger, worse mistakes.
AI is designed to improve humans; let's avoid falling for this trap.
I'm going to call BS on that chart of "AI-driven chip design". What "AI" tools has Cadence been providing since 2021 that are reaching 40-50% of "chip design" (what does that even mean?). Is AI here just any old algorithmic auto-router? Or a fuzzy search of the IP library?
> An ongoing talent shortage requires more efficient use of engineers, and AI can help.
An ongoing desire to avoid paying engineers... FTFY
And a complementary desire of engineers to avoid getting talent.
I don't believe it's an inherent, inborn skill like the word "talent" suggests. I do believe that if you're getting paid shit wages for shit work, your incentive to become skilled isn't really there.
Really dislike this style of clickbait headline where there’s zero indication of what the point of the article is.
What impact, what expectation, how uncertain is this assessment of “may be”? Are you feeling understimulated enough to click and find out?
Mostly reads like another abstraction shift, not a sudden replacement of engineers.
I’ve noticed teams don’t replace engineers, they redistribute work. Senior engineers often gain leverage while junior roles shift toward tooling and review.