There are two aspects to this. The desire to learn and the utility of learning. These are two very different things. Arguably the best programmers I have known have been explorers and hopped around a lot. Their primary skills have been flexibility and curiosity. The point here was their curiosity, not what they were curious about. Curiosity enabled them to attack new problems quickly and find solutions when others couldn't. Very often those solutions had nothing to do with skip lists or bubble sort. Studying algorithms is useful for general problem solving and hey, as a bonus, it helps sometimes when you are solving a real world problem, but staying curious is what really matters.
We have seen so many massive changes to software engineering in the last 30 years that it is hard to argue the clear utility of any specific topic or tool. When I first started it really mattered that you understood bubble sort vs quicksort because you probably had to code it. Now very few people think twice about how sort happens in Python or how hashing mechanisms are implemented. It does, on occasion, help to know that, but not like it used to.
So that brings it back to what I think is a fundamental question: if CS topics are less interesting now, are you shifting that curiosity to something else? If so, then I wouldn't worry too much. If not, then that is something to be concerned about. So you don't care about red-black trees anymore, but you are getting into auto-generating Zork-like games with an LLM in your free time? You are probably on a good path if that is the case. If not, then find a new curiosity outlet and don't beat yourself up about not studying the limits of a single-stack automaton.
If there's a single trait that divides the best developers in the world from the rest, it's what you described there: curiosity and flexibility. No academic course could bring you on par with those people.
The best software engineers I know can go from ambiguous customer requirements to solutions: solving XY problems, managing organizational and code complexity, dealing with team dynamics, etc.
Do you feel yourself losing interest, curiosity, "spark"? If so, then maybe worrying is right.
If you're just (hyper?)focused on something else, then, congrats! Our amazing new tools are letting us focus on even more things -- I, for one, am loving it.
Because AI still hallucinates. Since you mentioned algorithms, today for fun I decided to ask Claude a pretty difficult algorithm problem. Claude confidently told me a greedy solution is enough, before I told Claude a counterexample that made Claude use dynamic programming instead.
If you haven't learned the fundamentals, you are not in a position to judge whether AI is correct or not. And this isn't limited to AI; you also can't judge whether a human colleague writing code manually has written the right code.
The point is that if you know the algorithm will produce X as the output when the input is Y, give that to Claude as a tool.
And if you know that the previous algorithm completes in Z milliseconds, tell Claude that too and give it a tool (a command it can run) to benchmark its implementation.
This way you don't need to tell it what it did wrong, it'll check itself.
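Concretely, the suggestion above can be a tiny script the agent is allowed to run; the function and case names here are purely illustrative, a minimal sketch:

```python
# Minimal "check yourself" harness: known input/output pairs plus a timing
# budget, runnable by the agent after every change it makes.
import time

def check(solve, cases, budget_ms):
    """cases: list of (input, expected_output); budget_ms: known baseline."""
    for inp, expected in cases:
        got = solve(inp)
        assert got == expected, f"solve({inp!r}) = {got!r}, expected {expected!r}"
    start = time.perf_counter()
    for inp, _ in cases:
        solve(inp)
    elapsed_ms = (time.perf_counter() - start) * 1000
    assert elapsed_ms <= budget_ms, f"{elapsed_ms:.1f}ms exceeds {budget_ms}ms budget"
    return "all checks passed"
```

Point the agent at this as a command it can run, and it catches its own wrong outputs and regressions without you having to explain them.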
It was the other way around. Claude gave me an algorithm. I found it fishy. So I specifically constructed a counterexample in response to Claude’s algorithm.
Of course when I gave that to Claude, Claude changed the algorithm. But if I didn’t have enough experience and CS fundamentals to find it fishy in the first place, why would I construct a counterexample?
That is correct. But for how long? How long would it take for AI to learn all of this too? AI sure does learn faster than humans, and even though that will never degrade the relevance of fundamentals, don't you think the bar for someone beginning to learn the fundamentals will just keep increasing exponentially?
AI takes existing data and distills it into a baseline, and that baseline is very good at many tasks, but it never created anything new.
It cannot really create anything new and never before seen, though, to be fair, most people never will either.
So if we push even more onto AI, I am afraid MANY (not all) of those who would previously have gone down the discovery path won't stumble onto their next innovation: they simply prompted a good baseline for the task, because we are lazy.
Even if AI knows everything and is basically sentient, we still need to understand these things to work with it. How can we prompt it reliably without understanding the subject matter for which we are prompting?
If anything I consider fundamentals in STEM (such as Math/CS) to be even more valuable moving forward.
It’s a variant of a knapsack problem. But neither Claude nor I initially realized it was a knapsack problem: it became clear only after the solution was found and proved.
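For readers curious what that kind of counterexample looks like: the textbook 0/1 knapsack (not the commenter's actual problem, which isn't given) is the classic place where a plausible greedy fails and DP succeeds. A minimal sketch:

```python
def knapsack_greedy(capacity, items):
    # Greedy by value density: looks reasonable, is wrong for 0/1 knapsack.
    total = 0
    for w, v in sorted(items, key=lambda it: it[1] / it[0], reverse=True):
        if w <= capacity:
            capacity -= w
            total += v
    return total

def knapsack_dp(capacity, items):
    # Classic O(n * capacity) dynamic program over remaining capacity.
    best = [0] * (capacity + 1)
    for w, v in items:
        for c in range(capacity, w - 1, -1):
            best[c] = max(best[c], best[c - w] + v)
    return best[capacity]

# Counterexample: greedy grabs the densest item (10, 60) and ends at 160,
# while the true optimum skips it and takes (20, 100) + (30, 120) = 220.
items = [(10, 60), (20, 100), (30, 120)]
```

This is exactly the shape of check described above: you only think to construct the instance if you already suspect why greedy can be wrong.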
AI is great at giving you an answer, but fundamentals tell you if it's the right answer. Without the basics, you're not a pilot; you're a passenger in a self-driving car that doesn't know what a red light is. Stay strong in the fundamentals so you can be the one holding the steering wheel when the AI hits a hallucination at 70mph.
As a non-member of the exalted many who get to hack for a living:
I agree. The nature of the machine is to crush the artisanry and joy from the task. However, you can't beat it, so…
I use the miserable things as "research accelerators." I have neither the time nor the capacity to sustain the BAC necessary to parse all of the sources and documentation of the various systems in which I'm liable to take interest. I very rarely ask them to "do ${task} for me," but rather:
"What is the modern approach to ${task}? And, how do I avoid that and do ${task} in the spirit of Unix?"
"Has anyone already done ${task} well?"
"Are there any examples of people attempting ${task} and failing spectacularly?"
If you treat it like your boss, it'll act like your boss. If you treat it like your assistant, it'll act like your assistant.
These tools actually make me more interested in CS fundamentals. Having strong conceptual understanding is as relevant as ever for making good judgement calls and staying connected with your work.
Then look at how Anthropic basically acqui-hired the entire Bun team. If CS fundamentals didn't matter, why would they?
Even Anthropic needs people that understand CS fundamentals, even though pretty much their entire team now writes code using AI.
And since then, Jarred Sumner has been relentlessly shaving performance bottlenecks from Claude Code. I have watched startup times come way down in the past couple of months.
Sumner might be using CC all day too. But an understanding of those fundamentals (more a way of thinking than specific algorithms) still matters.
I work in a subfield of CS that requires those fundamentals pretty regularly, and I also make regular use of AI tools. You definitely need those fundamentals because AI tools can’t always be trusted to make good decisions when it comes to them. Knowing the fundamentals yourself is critical to keep the AI assistants in check, both to know how to guide them AND to know to recognize when they made a bad decision.
A recent example for me: I had a challenging problem in a medium-sized codebase (tens of thousands of lines) that boiled down to performing some updates to a complex data structure, where the updates needed to be constrained by some properties of the overall structure to maintain invariants. Maintaining the invariants while the data structure was being updated is tricky, since naive approaches would require repeated traversals of the whole structure. That would be really inefficient, and a smarter approach would try to localize the work during the updates. The latest Claude and GPT assistants recognized this, but their solutions were exceptionally complex and brittle. I eventually solved it myself with a significantly simpler and more robust method (both AIs even gleefully agreed that my solution was slick after I did it).
Had I let my CS fundamentals go to waste I wouldn’t have been able to solve it myself, nor would I have been able to recognize that the solutions posed by the models were needlessly complex.
Just because an AI can generate a solution that passes tests quickly doesn't mean what it generated is a good long-term solution. Your skills in the fundamentals are key to recognizing when it does a good job and when it doesn't, and to guiding it in the right direction.
To borrow a concept from Simon Willison: you need to "hoard things you know how to do”. You need to know what is possible; you need to be able to articulate what you want. AI is a fast car, but it’s empty and still needs a driver. As long as humans are still in the loop, the quality of the driver matters.
Terminology matters, if you use the right words, the AI will work better.
Just saying "use red/green TDD" is a shortcut to a very specific way of fixing bugs.
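For anyone who hasn't seen the shorthand: red/green means writing a failing test first (red), then just enough code to pass it (green). A toy sketch, with a made-up `slugify` function:

```python
# Red: write the test first, against a stub that is guaranteed to fail.
def slugify(title):
    raise NotImplementedError  # stub: running test_slugify now goes red

def test_slugify():
    assert slugify("Hello World") == "hello-world"

# Green: write just enough code to make the test pass, then re-run the test.
def slugify(title):
    return title.strip().lower().replace(" ", "-")
```

Saying "red/green TDD" in a prompt pulls in this whole fix-the-bug discipline: reproduce the failure as a test first, then change the code.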
Or when you use a multi-modal model to transcribe video saying "timecode" instead of "timestamp" will improve the results (AV production people say timecode, programmers say timestamp, it hits different parts of the training material)
Fundamentals are the only thing left to learn in our field.
Either the AI doesn’t understand them, and you need to walk it down the correct path, or it does understand them, and you have to be able to have an intelligent conversation with it.
AI emphatically doesn't know when to reach for A vs B among the options on the table. At least understanding some of the characteristic trade-offs will go a long way, especially if you are inclined to favor simplicity over unnecessary complexity. AI can easily over-complicate things and produce solutions that become a crazy, complex mess.
The vast majority of line-of-business apps can be built with a relatively simple CRUD UI, a simple API server, and a SQL-based RDBMS. But even then, you will hit limits and experience bottlenecks in practice. If you need to do any kind of scaling, you need to know where the low-hanging fruit and the complexities lie.
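To make "low-hanging fruit" concrete: the most common first bottleneck in such an app is a missing index. A sketch with Python's built-in sqlite3 (the schema is invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, total REAL)")
conn.executemany(
    "INSERT INTO orders (customer_id, total) VALUES (?, ?)",
    [(i % 1000, i * 1.5) for i in range(10_000)],
)

query = "SELECT * FROM orders WHERE customer_id = 42"

# Without an index, the plan's detail column reports a full table scan.
plan_before = conn.execute("EXPLAIN QUERY PLAN " + query).fetchone()

# One line of DDL later, the same query becomes an index lookup.
conn.execute("CREATE INDEX idx_orders_customer ON orders (customer_id)")
plan_after = conn.execute("EXPLAIN QUERY PLAN " + query).fetchone()

print(plan_before[-1], "->", plan_after[-1])
```

Knowing to reach for `EXPLAIN` before reaching for a cache or a second database is exactly the kind of fundamentals knowledge being discussed.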
I think that AI, particularly LLMs, can be quite effective for learning, especially if you maintain a sense of curiosity. CS fundamentals, in particular, are well-suited for learning through LLMs because models have been trained on extensive CS material. You can explore different paradigms in various ways, ask questions, and dissect both questions and answers to deepen your understanding or develop better mental models. If you're interested in theory, you can focus on theoretical questions, but if you're more hands-on you can take a practical approach and ask for code examples, etc. If you finish a session and feel there's something there that you want to retain, ask for flash cards.
There are two types of CS fundamentals: the ones that help in making useful software, and the rest of them.
AI tools still don't care about the former most of the time (e.g. maybe we shouldn't do a loop inside of loop every time we need to find a matching record, maybe we should just build a hashmap once).
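A minimal sketch of that exact point, with invented record shapes: the first version rescans the list for every lookup, the second builds the hashmap once:

```python
# What an assistant often emits: O(n*m), rescanning users for every order.
def attach_users_quadratic(orders, users):
    return [(o, next(u for u in users if u["id"] == o["user_id"])) for o in orders]

# What the fundamentals suggest: build the hashmap once, O(n + m) overall.
def attach_users_linear(orders, users):
    by_id = {u["id"]: u for u in users}
    return [(o, by_id[o["user_id"]]) for o in orders]
```

Both return the same pairs; only the second survives contact with a table of real size.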
Even if AI was perfect (it’s not), you still need the fundamentals to properly frame what you want it to do and evaluate the results.
Think of how it was before AI. Someone without foundational knowledge of a topic would flounder. They wouldn't know what to search for or what questions to ask. Meanwhile, someone with that foundational knowledge can put abstract ideas together to ask the proper questions, and knows what to search for to get additional details.
When you don’t know what you don’t know, it’s almost impossible to be effective, even with AI.
Well, it depends. There's no right or wrong answer here.
Simon wrote an article "What is agentic engineering?" [1]
> Now that we have software that can write working code, what is there left for us humans to do?
> The answer is so much stuff.
> Writing code has never been the sole activity of a software engineer. The craft has always been figuring out what code to write. Any given software problem has dozens of potential solutions, each with their own tradeoffs. Our job is to navigate those options and find the ones that are the best fit for our unique set of circumstances and requirements
Such navigation may require various skills. For example: people/product skills (e.g. customer empathy) to determine what to build, or engineering skills (e.g. optimization). Please be open to learning and get stronger through feedback.
I was wondering about this. I do not write software to pay the mortgage, I just write the occasional python script, some SQL stuff to update various dashboards, R in my spare time when I'm getting ready for looking at baseball stats or something. AI has had pretty much the opposite effect for me. Watching it write something has made me ask questions, get answers, dig into more details about things I never had the time to google on my own or spend an hour or several looking through stackoverflow.
I'd say my ability to write code has stayed about the same, but my understanding of what's going on in the background has increased significantly.
Before someone comes in here and says "you are only getting what the LLM is interpreting from prior written documentation": sure, yeah, I understand that. But these things are writing code in production environments now, are they not?
I actually find the inverse is true. I find myself thinking more in terms of algorithm trade-offs and strategies for distributed systems, and working with the LLM to explore what the state-of-the-art options are and how to analyze which are appropriate for my current project.
I see over and over those with the deeper understanding are able to drive the AI/LLM code generation processes faster and more effectively, and build things that can be built on by others without hitting hard bottlenecks.
The less people understand CS fundamentals, the faster they hit a blockade of complexity. This is not necessarily bad code, but sloppy thinking. And CS fundamentals are the fundamentals of information and logical processing.
It is the Centaur issue. You need to provide the evaluation and framing for the AI/LLM to search out the possibilities and well-known solutions, and to code up the prototypes. Without the fundamentals you have to rediscover them slowly after you have already hit the hard problems, pausing for days or months while trying to work your way around them.
Fundamentals should have even higher weight in learning budgets now, because AI can't reliably reason about complex architectural problems. It's the surface-level APIs you don't have to learn/memorize.
Maybe you mean “AI tools are making me lose interest in learning anything”, which is… a common reaction, I suppose.
It used to be that you had to have a strong understanding of the underlying machine in order to create software that actually worked.
Things like cycle times of instructions, pipeline behavior, registers and so on. You had to, because compilers weren't good enough. Then they caught up.
You used to manage every byte of memory and utilize every piece of underlying machinery, like the different chips, DMA transfers and so on, because that's what you had to do. Now it's all abstracted away.
These fundamentals are still there, but 99.9% of developers neither care nor bother with them. They don't have to, unless they are writing a compiler or kernel, or unless it's just for fun.
I think what you're describing is also going to go away in the future. Still there, but most developers are going to move up one level of abstraction.
So far I've only been able to get coding assistants to do things I actually understand (or slightly beyond). Either something I learned long ago, or these days things I learn online... with the help of the LLM.
Either way, if you want to talk with an LLM on the same level, you're going to need to train on the same dataset.
My naïve answer to this is, one should never be interested in things because of how useful it might be, but because of the thing itself.
Otherwise, AI won't be the first thing to make one lose interest.
With interest, AI may even make it more addicting.
Another distinction to make is, when you use AI, are you taking a shortcut, or channeling it to automate the boring stuff so that you can explore things you otherwise wouldn't have time to explore?
Knowing fundamentals gives you deeper intuition about the technology, at every layer.
When compilers appeared, you no longer needed to understand assembly and registers. But knowing how assembly and registers actually work makes you better at C. When Python came along, low-level languages felt unnecessary. But understanding C's memory management is what lets you understand Python's limitations.
Now LLMs write the implementation. LLMs abstract away the code. But knowing how algorithms work, even in a high-level language like Python, is exactly how you catch LLM mistakes and inefficiencies.
Knowledge builds on knowledge. We learn basic math before advanced math for a reason. The pyramid keeps accumulating from what came before. Understanding the fundamentals still matters, I think.
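One concrete way the layers show up: Python names behave like C pointers, so someone who has managed memory by hand immediately recognizes this classic aliasing trap:

```python
# Python names are references (pointers, in C terms), not value copies.
row = [0, 0, 0]
grid = [row] * 3          # three references to the SAME list
grid[0][0] = 1
print(grid)               # [[1, 0, 0], [1, 0, 0], [1, 0, 0]]

# Knowing why suggests the fix: allocate a fresh row per slot.
grid = [[0, 0, 0] for _ in range(3)]
grid[0][0] = 1
print(grid)               # [[1, 0, 0], [0, 0, 0], [0, 0, 0]]
```

The bug is invisible if you think of variables as boxes holding values; it is obvious if you think of them as pointers.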
The author of Claude Code himself mentioned this in a recent interview. If I recall correctly, he mentioned that the best programmers he knows have an understanding of the "layer below the layer", which I think is a good way of putting it. You're a better C programmer if you understand assembly, and you're a better "vibe coder" if you can actually understand the LLM-generated code.
Many did not. It's important to understand the distinction.
I was in middle and high school when calculators became the standard, but they were still expensive enough that we kept the TI-80 calculators on a backroom shelf and checked them out when there was an overnight problem set or homework assignment. In a roundabout way, I think I ended up understanding more about the underlying maths because of this.
So, no, many did not actually learn arithmetic in school. This isn't necessarily because of the calculator, but if you don't get a student to understand what arithmetic even is, then handing them a calculator may as well be handing them a magic wand that "does numbers".
Interestingly, here in Canada, kids are no longer taught long division. When I was in high school, we were taught to use slide rules, but not very seriously. And I would imagine nobody gets taught how to use trigonometry tables anymore (at least I hope so). So, these days, you learn arithmetic very differently because calculators exist.
CS fundamentals is about framing an information problem to be solvable.
That'll always be useful.
What's less useful, and what's changed in my own behavior, is that I no longer read tool-specific books. I used to devour books from Manning, O'Reilly, etc. I haven't read a single one since LLMs took off.
I find I'm going even deeper lately. I, obviously, have to completely and _totally_ understand every line written before I will commit it, so if AI spits something out that I haven't seen before, I will generally get nerd sniped pretty good.
I think it's important to stay curious and keep learning, but there's a lot to be curious about and all sorts of different skills you could work on improving. Going deep on algorithms or distributed systems are two possible directions, but there are others.
watching the difference in a non-CS versus a CS person using an LLM is all you need to do to reaffirm belief that fundamentals are still a massive benefit, if not a requirement for the deeper software.
>> watching the difference in a non-CS versus a CS person using an LLM is all you need to do to reaffirm belief that fundamentals are still a massive benefit, if not a requirement for the deeper software.
For now? I'm a CS grad. My son wants to be a CS grad. I'd like to believe what you wrote, and I hope it is true. But at the current rate of LLM improvement, would this really be the case in 4 years? Asking seriously, as my son needs to choose a major in 4 years.
If you really love CS, there's a future in it. If AI becomes the new substrate for civilization, we'll always need people who fundamentally understand, to some degree, how these systems work.
> why do you think it’s still important to stay strong in CS fundamentals?
I don't think anyone at any level has any idea what the future is holding with this rapid pace of change. What some old timers think is going to be useful in a post-Claude world isn't really meaningful.
I think if I had limited time, I would prioritize comfort with AI tooling (e.g. getting comfortable doing 5 things shallowly in parallel) over going super deep in understanding.
I specialize in cloud + app dev leading consulting projects. I can absolutely guarantee you that AI does a horrible job of architecture for distributed systems, concurrency implementations, and data engineering and won’t do the optimal design unless you carefully guide it.
CS fundamentals? In this day and age it's only important to keep up with them if you are one of the relatively few people (even in BigTech and adjacent) who are building the fundamentals, or if you are trying to get a job where "grinding LeetCode" is important.
Before the pearl clutching starts I had to implement many of the algorithms as part of my $DayJob early in my career as a C bit twiddler across various platforms. But haven’t since 2012-2014.
Now I can do shit fast. An LLM will turn any funky idea I have into an MVP within minutes.
Just last night I was struggling browsing / organising media on my NAS, because macOS Samba + NFS suck: "What if I build a bespoke web application I can run on the NAS to do this"
One episode of SNL and two episodes of Graham Norton Show later I had a Dockerized Vite + Go application where I can mount media directories to and browse pictures + videos with previews. It's not 100% done, but close enough for me to see if it's something I need to spend time working on
...but I can also learn stuff - by asking the LLM to teach me.
While that one was building, I stole an idea I saw on the internet and started building an agent harness that uses Qwen3.5:9b as a backend to run tools locally. I specifically asked Claude to build it in parts and explain to me how it works step by step.
Now I know a lot more about that than I did yesterday.
"With powerful computers, I sometimes feel less motivated to study deep mathematical topics like differential equations and statistics. Computers can math quickly, which makes the effort of learning the fundamentals feel less urgent.
For those who have been in the industry longer, why do you think it’s still important to stay strong in mathematical fundamentals?"
Because otherwise you are training to become a button pressing cocaine monkey?
I don't find your analogy compelling. More like "calculators make me less motivated to learn how to multiply four-digit numbers in my head". There used to be jobs for people who were good with numbers. They're pretty much gone, and it's not even much of a parlor trick, so no one bothers to learn these skills anymore.
If the best argument for going into CS is that LLMs sometimes make stuff up and will need human error checkers, I can see why people are less excited about that future. The cocaine monkey option might sound more fun.
It reminds me of the situation with self-driving that expects you to keep your full attention on the road while not driving so that you can take over at any time. It's clearly unrealistic.
It's not a failing of yours or anyone else's, but the idea that people will remain intellectually disciplined when they can use a shortcut machine is just not going to work.
I don't even care about the tooling, because I get to choose whether I use it or not. Sometimes I first:
1. Summarize a page with gemini
2. Then go through it myself to see if I understand the entire page
Which can help a bit with getting up to speed.
What I'm demotivated by is all these new HN posts that are blatantly using LLMs to write, and then hiding the fact they are. Just be honest... There's nothing wrong with making a mistake, you learn from those.
I get that there's a rule against it now, but it will only filter out low-hanging fruit. I still see too many, and I don't think people will ever change in this sense.
80% of my comments lately have been about spotting these posts/comments, and I feel like it's not doing anything except getting me mad.
> For those who have been in the industry longer, why do you think it’s still important to stay strong in CS fundamentals?
Dictionaries have made me feel like studying languages is pointless. People, why do you think it’s still important to stay strong in languages when dictionaries exist?
It’s important to know what magick words to say to the demon to get what you want. Knowing the fundamentals is part of that in my view. At some point, the YouTube tutorials for complex CS mechanisms are probably fine. I like doing toy examples of it myself and even having the LLM coach me along.
Knowing the right tool for the job is even more powerful now because it will prevent you from going down a rabbit hole the LLM thinks is just fine.
In the AI era, is it still worth spending significant time reading deep CS books like Designing Data-Intensive Applications by Martin Kleppmann?
Part of my hesitation is that AI tools can generate implementations for many distributed system patterns now. At the same time, I suspect that without understanding the underlying ideas (replication, consistency, partitioning, event logs, etc.), it’s hard to judge whether the AI-generated solution is actually correct.
For those who’ve read DDIA or similar books, did the knowledge meaningfully change how you design systems in practice?
Longer answer: About 10 years ago I moved into leadership roles (VP Eng), and while I continued to write code for POCs, it hasn't been my primary role for quite some time. DDIA has been a book I pull out often when guiding leaders and members of my teams in building distributed systems. I'm writing more code these days because I can, and I still reference DDIA and have the second edition preordered.
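One of DDIA's recurring lessons (partition rebalancing) is easy to demonstrate in a few lines: naive hash-mod-N partitioning forces most keys to move when the cluster grows, which is why schemes like consistent hashing exist. A sketch with made-up keys:

```python
import hashlib

def partition(key, n_nodes):
    # Naive placement: hash the key, take it mod the node count.
    h = int(hashlib.sha256(key.encode()).hexdigest(), 16)
    return h % n_nodes

keys = [f"user:{i}" for i in range(10_000)]

# Grow the cluster from 4 to 5 nodes and count reassigned keys.
moved = sum(partition(k, 4) != partition(k, 5) for k in keys)
print(f"{moved / len(keys):.0%} of keys moved")  # roughly 80% in expectation
```

An AI assistant will happily generate the mod-N version; knowing the book-level idea is what tells you it's a resharding time bomb.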
AI tools are used more effectively when you work with, and from, CS fundamentals. Knowing how to ask and what to avoid is critical. You can power through or past without them, but the incorrect areas in the context can compound and multiply.
I view this a bit like asking "why bother getting a job when I could just get rich at the slot machines"
Knowledge is still power, even in the AI age. Arguably even more so now than ever. Even if the AI can build impressive stuff, it's your job to understand the stuff it builds. Also, it's your job to know what to ask the AI to build.
So yes. Don't stop learning for yourself just because AI is around
Be selective with what you learn, be deliberate in your choices, but you can never really go wrong with building strong fundamentals
Edit: What I can tell you almost for certain is that offloading all of your knowledge and thinking to LLMs is not going to work out very well in your favor
Those are great answers to the question you did ask, but I'd also like to answer a question you didn't ask: whether AI can improve your learning, rather than diminish it, and the answer is absolutely a resounding yes. You have a world-class expert that you can ask to explain a difficult concept to you in a million different ways with a million different diagrams; you have a tool that will draft a syllabus for you; you have a partner you can have a conversation with to probe the depth of your understanding on a topic you think you know, help you find the edges of your own knowledge, can tell you what lies beyond those edges, can tell you what books to go check out at your library to study those advanced topics, and so much more.
AI might feel like it makes learning irrelevant, but I'd argue it actually makes learning more engaging, more effective, more impactful, more detailed, more personalized, and more in-depth than anyone's ever had access to in human history.
For now, yeah, because you still need to direct the AI correctly. Either with planning, or you need to fix its mistakes and identify when it did something correctly but not optimally.
Exactly this. Couldn't have said it better.
> The desire to learn and the utility of learning.
See also Profession by Isaac Asimov for a fictional story about the distinction between the desire to learn and the utility of learning: https://www.inf.ufpr.br/renato/profession.html
And "The Feeling of Power", also by Asimov, for a satirical take on what happens when no one learns the stuff the computer can do for them.
I'd take another view here and suggest you not learn all this until you need it.
The day you need it, you'll be more motivated to learn it. That's pretty much how I learnt most things.
Did you give Claude a way to test/verify/benchmark said algorithm compared to other solutions?
If not, how can it not hallucinate when you didn't give it any constraints?
You can just tell it that it's doing it wrong (and why). Of course, you have to know that it did it wrong.
The point is that if you know the algorithm will produce X as the output if the input is Y, give that as a tool to Claude
And if you know that the previous algorithm completes in Z milliseconds, tell Claude that too and give it a tool (a command it can run) to benchmark its implementation.
This way you don't need to tell it what it did wrong, it'll check itself.
It was the other way around. Claude gave me an algorithm. I found it fishy. So I specifically constructed a counterexample in response to Claude’s algorithm.
Of course when I gave that to Claude, Claude changed the algorithm. But if I didn’t have enough experience and CS fundamentals to find it fishy in the first place, why would I construct a counterexample?
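The thread doesn't say what the actual problem was, but a classic, minimal illustration of a greedy algorithm that looks right and fails (where DP succeeds) is coin change with denominations {1, 3, 4} — a sketch of the kind of counterexample being described:

```python
def greedy_coins(coins, target):
    """Always take the largest coin that fits -- plausible, but wrong here."""
    count = 0
    for c in sorted(coins, reverse=True):
        count += target // c
        target %= c
    return count

def dp_coins(coins, target):
    """Classic DP: best[i] = fewest coins summing to i."""
    best = [0] + [float("inf")] * target
    for i in range(1, target + 1):
        for c in coins:
            if c <= i:
                best[i] = min(best[i], best[i - c] + 1)
    return best[target]

print(greedy_coins([1, 3, 4], 6))  # 3  (4 + 1 + 1)
print(dp_coins([1, 3, 4], 6))      # 2  (3 + 3)
```

Without the fundamentals to suspect the greedy answer, nothing about `greedy_coins` looks broken — it runs, it returns a number, and it's wrong.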
That is correct. But for how long? How long would it take for AI to learn all of this too? AI learns faster than humans, and even though that will never erase the relevance of the fundamentals, don't you think the bar for someone just beginning to learn them will keep rising?
AI takes existing data and distills it into a baseline, which for many tasks is very good; it never created anything new.
It cannot really create anything new and never seen before — though, to be fair, most people never will either.
So if we push even more onto AI, I'm afraid many (not all) of the people who would previously have gone down the discovery path won't stumble onto their next innovation, because they simply prompted a good baseline for the task instead. We are lazy.
Even if AI knows everything and is basically sentient, we still need to understand these things to work with it. How can we prompt it reliably without understanding the subject matter for which we are prompting?
If anything I consider fundamentals in STEM (such as Math/CS) to be even more valuable moving forward.
I'm curious, what was the algorithm problem?
It’s a variant of a knapsack problem. But neither Claude nor I initially realized it was a knapsack problem: it became clear only after the solution was found and proved.
AI is great at giving you an answer, but fundamentals tell you if it's the right answer. Without the basics, you're not a pilot; you're a passenger in a self-driving car that doesn't know what a red light is. Stay strong in the fundamentals so you can be the one holding the steering wheel when the AI hits a hallucination at 70mph.
As a non-member of the exalted-many who get to hack for a living-
I agree. The nature of the machine is to crush the artisanry and joy from the task. However, you can't beat it, so…
I use the miserable things as "research accelerators." I have neither the time, nor the capacity to sustain the BAC necessary, to parse all of the sources and documentation of the various systems in which I'm liable to take interest. I very rarely ask them to "do ${task} for me," but rather: "What is the modern approach to ${task}? And, how do I avoid that and do ${task} in the spirit of Unix?" "Has anyone already done ${task} well?" "Are there any examples of people attempting ${task} and failing spectacularly?"
If you treat it like your boss, it'll act like your boss. If you treat it like your assistant, it'll act like your assistant.
Edit: derp.
These tools actually make me more interested in CS fundamentals. Having strong conceptual understanding is as relevant as ever for making good judgement calls and staying connected with your work.
This is the right answer. AI writing code for you? Then spend that time understanding what it is writing and the fundamentals behind it.
Does it work? How does it work? If you can't answer those questions, you should think carefully about what value you bring.
We're in this greenfield period where everybody's pet ideas can be brought to life. In other words...
Now anyone can make something nobody gives a shit about.
> Now anyone can make something nobody gives a shit about.
As a corollary, I can build shit that's perfect for me and I don't really care if it's any good for anyone else =)
Before I had to find someone else's shit and deal with their shit, trying to make it do the shit I need it to do and nothing else.
"Now anyone can make something nobody gives a shit about."
lol nice one
Read this article from the Bun people about how they used CS fundamentals (and that way of thinking) to improve Bun install's performance.
https://bun.com/blog/behind-the-scenes-of-bun-install
Then look at how Anthropic basically Acquihired the entire Bun team. If the CS fundamentals didn't matter, why would they?
Even Anthropic needs people that understand CS fundamentals, even though pretty much their entire team now writes code using AI.
And since then, Jared Sumner has been relentlessly shaving performance bottlenecks from claude code. I have watched startup times come way down in the past couple months.
Sumner might be using CC all day too. But an understanding of those fundamentals (more a way of thinking rather than specific algorithms) still matter.
I work in a subfield of CS that requires those fundamentals pretty regularly, and I also make regular use of AI tools. You definitely need those fundamentals because AI tools can’t always be trusted to make good decisions when it comes to them. Knowing the fundamentals yourself is critical to keep the AI assistants in check, both to know how to guide them AND to know to recognize when they made a bad decision.
A recent example for me: I had a challenging problem in a medium sized codebase (tens of thousands of lines) that boiled down to performing some updates to a complex data structure where the updates needed to be constrained by some properties of the overall structure to maintain invariants. Maintaining the invariants while the data structure was being updated is tricky since naive approaches would required repeated traversals of the whole structure. That would be really inefficient, and a smarter approach would try to localize the work during the updates. The latest Claude and GPT assistants recognized this, but their solutions were exceptionally complex and brittle. I eventually solved it myself with a significantly simpler and more robust method (both AIs even gleefully agreed that my solution was slick after I did it).
Had I let my CS fundamentals go to waste I wouldn’t have been able to solve it myself, nor would I have been able to recognize that the solutions posed by the models were needlessly complex.
Just because an AI can generate a solution that passes tests quickly doesn't mean what it generated is a long term good solution. Your skills in fundamentals is key to recognizing when it does a good job and when it doesn’t, and being able to guide it in the right direction.
To borrow a concept from Simon Willison: you need to "hoard things you know how to do”. You need to know what is possible; you need to be able to articulate what you want. AI is a fast car, but it’s empty and still needs a driver. As long as humans are still in the loop, the quality of the driver matters.
Terminology matters, if you use the right words, the AI will work better.
Just saying "use red/green TDD" is a shortcut to a very specific way of fixing bugs.
Or when you use a multi-modal model to transcribe video saying "timecode" instead of "timestamp" will improve the results (AV production people say timecode, programmers say timestamp, it hits different parts of the training material)
Fundamentals are the only thing left to learn in our field.
Either the AI doesn’t understand them, and you need to walk it down the correct path, or it does understand them, and you have to be able to have an intelligent conversation with it.
AI emphatically doesn't know when to reach for A vs. B among the options on the table. At least understanding some of the characteristic trade-offs will go a long way, especially if you are inclined to favor simplicity over unnecessary complexity. AI can easily over-complicate things and make solutions that become a crazy, complex mess.
The vast majority of line-of-business apps can be solved with a relatively simple CRUD UI and a simple API server backed by a SQL RDBMS. But even then, you will hit limits and experience bottlenecks in practice. If you need to do any kind of scaling, you need to know where the low-hanging fruit and the complexities lie.
I think that AI, particularly LLMs, can be quite effective for learning, especially if you maintain a sense of curiosity. CS fundamentals, in particular, are well-suited for learning through LLMs because models have been trained on extensive CS material. You can explore different paradigms in various ways, ask questions, and dissect both questions and answers to deepen your understanding or develop better mental models. If you're interested in theory, you can focus on theoretical questions but if you're more hands-on you can take a practical approach, ask for code examples etc. If you have a session and feel that there's something there that you want to retain ask for flash cards.
There are two types of CS fundamentals: the ones that help in making useful software, and the rest of them.
AI tools still don't care about the former most of the time (e.g. maybe we shouldn't do a loop inside of loop every time we need to find a matching record, maybe we should just build a hashmap once).
And I don't care if they care about the latter.
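The loop-in-a-loop point above, sketched in Python (the record shapes and names are just illustrative):

```python
# O(n * m): rescan the whole customer list for every order
def find_matches_nested(orders, customers):
    result = []
    for order in orders:
        for customer in customers:
            if customer["id"] == order["customer_id"]:
                result.append((order["id"], customer["name"]))
    return result

# O(n + m): build a hashmap once, then each lookup is O(1) on average
def find_matches_indexed(orders, customers):
    by_id = {c["id"]: c for c in customers}
    return [(o["id"], by_id[o["customer_id"]]["name"])
            for o in orders if o["customer_id"] in by_id]
```

Same output, but the second version stays fast as the data grows — exactly the kind of change AI assistants often won't make unless asked.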
Even if AI was perfect (it’s not), you still need the fundamentals to properly frame what you want it to do and evaluate the results.
Think of how it was before AI. Someone without foundational knowledge of a topic would flounder: they wouldn't know what to search for or what questions to ask. Meanwhile, someone with that foundational knowledge can put abstract ideas together to ask the proper questions, and knows what to search for to get additional details.
When you don’t know what you don’t know, it’s almost impossible to be effective, even with AI.
Well, it depends. There's no right or wrong answer here.
Simon wrote an article "What is agentic engineering?" [1]
> Now that we have software that can write working code, what is there left for us humans to do?
> The answer is so much stuff.
> Writing code has never been the sole activity of a software engineer. The craft has always been figuring out what code to write. Any given software problem has dozens of potential solutions, each with their own tradeoffs. Our job is to navigate those options and find the ones that are the best fit for our unique set of circumstances and requirements.
Such navigation may require various skills. For example: people/product skills (e.g., customer empathy) to determine what to build, or engineering skills (e.g., optimization). Stay open to learning and grow stronger through feedback.
[1]. https://simonwillison.net/guides/agentic-engineering-pattern...
The problem is when we get into many Isaac Asimov books regarding loss of knowledge across the civilisation.
AI still needs some lucky wizards with CS skills that will keep it going, at least until Skynet gets turned on.
I was wondering about this. I do not write software to pay the mortgage, I just write the occasional python script, some SQL stuff to update various dashboards, R in my spare time when I'm getting ready for looking at baseball stats or something. AI has had pretty much the opposite effect for me. Watching it write something has made me ask questions, get answers, dig into more details about things I never had the time to google on my own or spend an hour or several looking through stackoverflow.
I'd say my ability to write code has stayed about the same, but my understanding of what's going on in the background has increased significantly.
Before someone comes in here and says "you are only getting what the LLM is interpreting from prior written documentation", sure, yeah, I understand that. But these things are writing code in production environments now are they not?
I actually find the inverse is true. I find myself thinking more in terms of algorithm trade-offs and strategies for distributed systems, and working with the LLM to explore what the state-of-the-art options are and how to analyze which are appropriate for my current project.
I see over and over those with the deeper understanding are able to drive the AI/LLM code generation processes faster and more effectively, and build things that can be built on by others without hitting hard bottlenecks.
The less people understand CS fundamentals, the faster they hit a blockade of complexity. That's not necessarily bad code, but it is sloppy thinking — and CS fundamentals are the fundamentals of information and logical processing.
It is the centaur issue. You need to provide the evaluation and framing for the AI/LLM to search out the possibilities and well-known solutions, and to code up the prototypes. Without the fundamentals you have to rediscover them slowly after you've already hit the hard problems, pausing for days or months while trying to work your way around them.
Fundamentals should carry even more weight in learning budgets now, because AI can't reliably reason about complex architectural problems. It's the surface-level APIs you no longer have to learn/memorize.
Maybe you mean “AI tools are making me lose interest in learning anything”, which is… a common reaction, I suppose.
It used to be that you had to have a strong understanding of the underlying machine in order to create software that actually worked.
Things like cycle times of instructions, pipeline behavior, registers and so on. You had to, because compilers weren't good enough. Then they caught up.
You used to manage every byte of memory and utilize every piece of underlying machinery — the different chips, DMA transfers and so on — because that's what you had to do. Now it's all abstracted away.
These fundamentals are still there, but 99.9% of developers neither care nor bother with them. They don't have to, unless they are writing a compiler or a kernel, or just because it's fun.
I think what you're describing is also going to go away in the future. Still there, but most developers are going to move up one level of abstraction.
Natural language is not a higher abstraction than code, it’s just more ambiguous.
But you can describe more abstract things with natural language than you can with code.
You can't "move up one level of abstraction" from computational complexity.
So far I've only been able to get coding assistants to do things I actually understand (or slightly beyond). Either something I learned long ago, or these days things I learn online .... with the help of the LLM.
Either way, if you want to talk with an LLM on the same level, you're going to need to train on the same dataset.
My naïve answer to this is: one should never be interested in things because of how useful they might be, but because of the thing itself. Otherwise, AI won't be the first thing to make one lose interest. With genuine interest, AI may even make it more addicting. Another distinction to make: when you use AI, are you taking a shortcut, or channeling it to automate the boring stuff so that you can explore things you otherwise wouldn't have time to explore?
Knowing fundamentals gives you deeper intuition about the technology, at every layer. When compilers appeared, you no longer needed to understand assembly and registers. But knowing how assembly and registers actually work makes you better at C. When Python came along, low-level languages felt unnecessary. But understanding C's memory management is what lets you understand Python's limitations. Now LLMs write the implementation. LLMs abstract away the code. But knowing how algorithms work, even in a high-level language like Python, is exactly how you catch LLM mistakes and inefficiencies.
Knowledge builds on knowledge. We learn basic math before advanced math for a reason. The pyramid keeps accumulating from what came before. Understanding the fundamentals still matters, I think.
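A tiny, classic example of that "layer below" paying off (not from the comment above, just an illustration): Python's reference semantics are baffling on their own, but obvious once you've managed pointers in C — list repetition copies references, not rows:

```python
# [[0] * 3] * 3 repeats the *reference* to one row, not three copies of it
grid = [[0] * 3] * 3
grid[0][0] = 1
print(grid)  # all three "rows" change: [[1, 0, 0], [1, 0, 0], [1, 0, 0]]

# Knowing why, the fix is to build three distinct row objects
grid = [[0] * 3 for _ in range(3)]
grid[0][0] = 1
print(grid)  # only the first row changes: [[1, 0, 0], [0, 0, 0], [0, 0, 0]]
```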
The author of Claude Code himself mentioned this in a recent interview. If I recall correctly, he mentioned that the best programmers he knows have an understanding of the "layer below the layer", which I think it's a good way of putting it. You're a better C programmer if you understand assembly, and you're a better "vibe coder" if you can actually understand the LLM generated code.
Did you learn arithmetic in school even though calculators exist?
Many did not. It's important to understand the distinction.
I was in middle and high school when calculators became the standard, but they were still expensive enough that we kept the TI-80 calculators on a backroom shelf and checked them out when there was an overnight problem set or homework assignment. In a roundabout way, I think I ended up understanding more about the underlying maths because of this.
So, no, many did not actually learn arithmetic in school. This isn't necessarily because of the calculator, but if you don't get a student to understand what arithmetic even is then handing them a calculator may as well be like handing them a magic wand that "does numbers".
Interestingly, here in Canada, kids are no longer taught long division. When I was in highschool, we were taught to use slide rules, but not very seriously. And I would imagine, nobody gets taught how to use trigonometry tables anymore (at least I hope so). So, these days, you learn arithmetic very differently because calculators exist.
CS fundamentals is about framing an information problem to be solvable.
That'll always be useful.
What's less useful, and what's changed in my own behavior, is that I no longer read tool specific books. I used to devour books from Manning, O'reilly etc. I haven't read a single one since LLMs took off.
I find I'm going even deeper lately. I, obviously, have to completely and _totally_ understand every line written before I will commit it, so if AI spits something out that I haven't seen before, I will generally get nerd sniped pretty good.
When I utilize LLMs for coding, it makes me think that I have no idea how I'd use it if I was not an expert already.
I actually become more interested. When I need to spend less time typing I have more time to spend understanding and applying CS concepts.
Ultimately humans are the judge of reality, not LLMs.
How can you be a good judge? You must have very strong foundations and fundamental understanding.
I think it's important to stay curious and keep learning, but there's a lot to be curious about and all sorts of different skills you could work on improving. Going deep on algorithms or distributed systems are two possible directions, but there are others.
I see things the opposite, LLMs are automating slow, easy, and tedious work so I have more time to spend on actually interesting problems
I will keep learning fundamentals.
I studied Physics fundamentals even though I had a microwave or could buy an airplane ticket. And I deeply enjoyed it. I still do.
I will keep doing it with CS fundamentals. Simply because I enjoy it too much.
watching the difference in a non-CS versus a CS person using an LLM is all you need to do to reaffirm belief that fundamentals are still a massive benefit, if not a requirement for the deeper software.
>> watching the difference in a non-CS versus a CS person using an LLM is all you need to do to reaffirm belief that fundamentals are still a massive benefit, if not a requirement for the deeper software.
For now? I'm a CS grad. My son wants to be a CS grad. I'd like to believe what you wrote, and hope it is true. But at the current rate of LLM improvement, would this really be the case in 4yrs? Asking seriously as my son needs to choose a major in 4yrs.
if you really love CS - there's a future in it. If AI becomes the new substrate for civilization, we'll always need people who fundamentally understand how these systems work to some degree.
> why do you think it’s still important to stay strong in CS fundamentals?
I don't think anyone at any level has any idea what the future is holding with this rapid pace of change. What some old timers think is going to be useful in a post-Claude world isn't really meaningful.
I think if I had limited time to prioritize learnings at the moment it would be prioritizing AI tooling comfort (e.g. getting comfortable doing 5 things shallowly in parallel) versus going super deep in understanding.
I specialize in cloud + app dev leading consulting projects. I can absolutely guarantee you that AI does a horrible job of architecture for distributed systems, concurrency implementations, and data engineering and won’t do the optimal design unless you carefully guide it.
CS fundamentals? In this day and age it’s only important to keep up with them if you are one of the relatively few people (even in BigTech and adjacent) who are building the fundamentals or trying to get a job in one where “grinding leetCode” is important.
Before the pearl clutching starts I had to implement many of the algorithms as part of my $DayJob early in my career as a C bit twiddler across various platforms. But haven’t since 2012-2014.
Now I can do shit fast. An LLM will turn any funky idea I have into an MVP within minutes.
Just last night I was struggling browsing / organising media on my NAS, because macOS Samba + NFS suck: "What if I build a bespoke web application I can run on the NAS to do this"
One episode of SNL and two episodes of Graham Norton Show later I had a Dockerized Vite + Go application where I can mount media directories to and browse pictures + videos with previews. It's not 100% done, but close enough for me to see if it's something I need to spend time working on
...but I can also learn stuff - by asking the LLM to teach me.
While that one was building, I stole an idea I saw on the internet and started building an agent harness that uses Qwen3.5:9b as a backend to run tools locally. I specifically asked Claude to build it in parts and explain to me how it works step by step.
Now I know a lot more about that than I did yesterday.
"With powerful computers, I sometimes feel less motivated to study deep mathematical topics like differential equations and statistics. Computers can do the math quickly, which makes the effort of learning the fundamentals feel less urgent. For those who have been in the industry longer, why do you think it’s still important to stay strong in mathematical fundamentals?"
Because otherwise you are training to become a button pressing cocaine monkey?
I don't find your analogy compelling. More like "calculators make me less motivated to learn how to multiply four-digit numbers in my head". There used to be jobs for people who were good with numbers. They're pretty much gone, and it's not even much of a parlor trick, so no one bothers to learn these skills anymore.
If the best argument for going into CS is that LLMs sometimes make stuff up and will need human error checkers, I can see why people are less excited about that future. The cocaine monkey option might sound more fun.
It reminds me of the situation with self-driving that expects you to keep your full attention on the road while not driving so that you can take over at any time. It's clearly unrealistic.
It's not a failing of yours or anyone else's, but the idea that people will remain intellectually disciplined when they can use a shortcut machine is just not going to work.
I don't even care about the tooling, because I get to choose whether I use it or not. Sometimes I first:
1. Summarize a page with Gemini
2. Then go through it myself to see if I understand the entire page
Which can help a bit with getting up to speed.
What I'm demotivated by is all these new HN posts that are blatantly using LLMs to write, and then hiding the fact they are. Just be honest... There's nothing wrong with making a mistake, you learn from those.
I get that there's a rule against it now, but it will only filter out low-hanging fruit. I still see too many, and I don't think people will ever change in this sense.
80% of my comments lately have been about spotting these posts/comments, and I feel like it's not doing anything except getting me mad.
Then don't use them?
> For those who have been in the industry longer, why do you think it’s still important to stay strong in CS fundamentals?
Dictionaries have made me feel like studying languages is pointless. People, why do you think it’s still important to stay strong in languages when dictionaries exist?
It’s important to know what magick words to say to the demon to get what you want. Knowing the fundamentals is part of that in my view. At some point, the YouTube tutorials for complex CS mechanisms are probably fine. I like doing toy examples of it myself and even having the LLM coach me along.
Knowing the right tool for the job is even more powerful now because it will prevent you from going down a rabbit hole the LLM thinks is just fine.
One follow-up question I’ve been thinking about:
In the AI era, is it still worth spending significant time reading deep CS books like Designing Data-Intensive Applications by Martin Kleppmann?
Part of my hesitation is that AI tools can generate implementations for many distributed system patterns now. At the same time, I suspect that without understanding the underlying ideas (replication, consistency, partitioning, event logs, etc.), it’s hard to judge whether the AI-generated solution is actually correct.
For those who’ve read DDIA or similar books, did the knowledge meaningfully change how you design systems in practice?
Short answer: yes.
Longer answer: About 10 years ago I moved into leadership roles (VP Eng), and while I continued to write code for POCs, it hasn't been my primary role for quite some time. DDIA has been a book I pull out often when guiding leaders and members of my teams when it comes to building distributed systems. I'm writing more code these days because I can, and I still reference DDIA and have the second edition preordered.
So you know a bad idea when you see one, and can ask it to do things the right way?
AI tools are used most effectively when you align them with CS fundamentals. Knowing how to ask, and what to avoid, is critical. You can power through or past mistakes, but incorrect areas in the context can compound and multiply.
I view this a bit like asking "why bother getting a job when I could just get rich at the slot machines"
Knowledge is still power, even in the AI age. Arguably even moreso now than ever. Even if the AI can build impressive stuff it's your job to understand the stuff it builds. Also, it's your job to know what to ask the AI to build
So yes. Don't stop learning for yourself just because AI is around
Be selective with what you learn, be deliberate in your choices, but you can never really go wrong with building strong fundamentals
Edit: What I can tell you almost for certain is that offloading all of your knowledge and thinking to LLMs is not going to work out very well in your favor
How can you possibly make any informed statement about the solutions AI generates for you if you don't understand them?
But without CS fundamentals, how do you intend to debug AI slop?
babas03 put it best IMO - https://news.ycombinator.com/item?id=47394432
I'd also second bluefirebrand's point that "it's your job to know what to ask the AI to build" - https://news.ycombinator.com/item?id=47394349
Those are great answers to the question you did ask, but I'd also like to answer a question you didn't ask: whether AI can improve your learning, rather than diminish it, and the answer is absolutely a resounding yes. You have a world-class expert that you can ask to explain a difficult concept to you in a million different ways with a million different diagrams; you have a tool that will draft a syllabus for you; you have a partner you can have a conversation with to probe the depth of your understanding on a topic you think you know, help you find the edges of your own knowledge, can tell you what lies beyond those edges, can tell you what books to go check out at your library to study those advanced topics, and so much more.
AI might feel like it makes learning irrelevant, but I'd argue it actually makes learning more engaging, more effective, more impactful, more detailed, more personalized, and more in-depth than anyone's ever had access to in human history.
For now, yeah, because you still need to direct the AI correctly. Either with planning, or you need to fix its mistakes and identify when it did something correctly but not optimally.