A big loss for the Emacs community! emacs-aio is great!
I see the author is spring cleaning:
> I've turned over a new leaf (no more Openbox, Tridactyl, Xorg, xterm), and so some of these things I no longer use. On Linux I now use KDE on Wayland with a minimally-configured browser. I miss the power user features, but I do not miss the friction and constant maintenance.
LLMs have inspired a similar change in me: with a big change in how I work, I feel I can and should be more flexible about adopting new tech, which involves freeing myself of previous choices.
FWIW, the age of LLMs made me build a deeper, more intimate relationship with Emacs, because it's a Lisp REPL loop with a built-in editor, not the other way around. When you give an LLM a closed loop system where it can evaluate code in a live REPL and observe the results, it stops guessing and starts reasoning empirically.
The LLM that I run inside Emacs can fully control the active Emacs instance. I can make it change virtually any aspect of it. To load-test things, I even made it play Tetris in Emacs. And not just run it, but actually play it without losing. It was insane.
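For anyone curious how that closed loop gets wired up: the usual trick is registering an eval tool with your LLM client. A sketch using gptel's tool API (the keyword details vary by gptel version, so treat this as illustrative, not gospel):

```elisp
;; Register a tool that lets the model evaluate elisp in the live session
;; and read back the printed result. Check gptel's README for the exact
;; :args format in your version.
(gptel-make-tool
 :name "eval_elisp"
 :description "Evaluate an Emacs Lisp form in the running Emacs and return the result."
 :args (list '(:name "form" :type "string" :description "Elisp source to evaluate"))
 :function (lambda (form)
             (format "%S" (eval (car (read-from-string form))))))
```

With something like this in place, the model can observe the actual state of the editor instead of guessing at it.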
Also, Emacs is all about plain text - you can easily extract text from anything - from the browser, terminal, CLI apps, Slack, Jira, etc., and you can do that on your own terms - context can appear in a buffer, in your clipboard, become a file or series of API requests. That is really hard to beat.
Absolutely. It doesn't have to be an either-or. I use gptel and org mode when I want to be really hands-on driving the development. It's a very different mode of interacting with models, and the way newer models are trained to play nice with harnesses makes them very obedient.
In case anyone else wondered about using gptel to edit thinking (e.g. via Qwen3.6's `preserve thinking`), [1] explains:
> In a multi-turn request, from the time you run `gptel-send`, everything the LLM sends is passed back to it [...during tool calls...] includes multiple reasoning blocks. [...But...] subsequent gptel-send calls read their input from the buffer contents (or active region, etc), so the reasoning blocks in the buffer will not [] be sent as "reasoning_content".
But in org mode, those are apparently `#+begin_reasoning` blocks (`gptel-include-reasoning`?), so editable thinking might be an easy addition?
A caution, fwiw: any LLMs that respond with interleaved content and reasoning blocks currently only work when not streaming, and fixing that is non-trivial. [also 1]
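For reference, the knob mentioned above is a plain variable; per gptel's documentation it looks something like this (the allowed values may differ across versions, so verify against your install):

```elisp
;; Keep reasoning blocks in the response buffer (t), drop them (nil),
;; or route them elsewhere - see gptel's docs for the full set of values.
(setq gptel-include-reasoning t)
```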
Same here. Emacs has been the stable editor for all kinds of language changes, tool changes, and IDE changes. Emacs is great with LLM, as LLM is mostly text related and Emacs is great in capturing and dealing with text.
Couldn't agree more. Lisp was discovered/invented for the purpose of AI research. Of course, modern neural nets and transformers are a big departure from McCarthy's vision of AI - logical, interpretable, symbolic. However, if the current wave of AI hits a wall - and many serious researchers think it will, or already has at the margins - there's growing interest in neurosymbolic approaches that combine neural nets with symbolic reasoning. That's closer to McCarthy's original vision, and Lisps are genuinely well-suited for it.
Let's be honest: Lisp probably won't ever get bigger than Python, unless Python for whatever reason starts dying on its own. But if AI ever gets serious about interpretability, formal reasoning, program synthesis - all the stuff Lisp was built for - it just might quietly become relevant again in research contexts, without ever reclaiming mainstream status.
Scicloj has been building out a serious ML stack in Clojure - noj, metamorph.ml, scicloj.ml.tribuo, libpython-clj for Python interop. Besides that, people have been making the case that 'code is data' is exactly what makes it a better target for LLMs, and Clojure benchmarks as one of the most token-efficient languages. There are some interesting recent Clojure projects along these lines:
Well, this is because "normal" programming languages are one step above the AST. So an LLM has to work with program text, which is much easier than regular human text, since it's constrained to a well-defined set of keywords and a grammar, but it's still quite variable. Lisp is just the AST, so it's one level lower. I guess at some point LLMs will stop writing human-readable code, since it's an extra obstacle; they'll work directly with binaries or virtual machine bytecode (as in Java), because that will be easier and use fewer tokens.
I've tried different AI packages and currently gptel and ECA remain the main ingredients. This is a quickly changing landscape, and things may change, but for now it feels very good.
I like gptel because it's enormously extendable and exploitable - it allows me to send LLM requests from just about anywhere - I could be typing a message (like this very one) and suddenly in need of ideas for how to phrase something better, or explain simply, or fact-check my assumptions, whatever. Quick & dirty interaction that gets discarded in the same buffer. For longer investigations and research I would use a dedicated gptel buffer. Those get automatically saved.
I don't use gptel as a coding assistant - even though you can do that, it's not really optimized for that kind of work. I use ECA. It works much better for me than every other alternative I tried, and I tried more than a few. What's crazy is that I sometimes type a prompt in ECA, then ask gptel (with a different model) to make it more "AI-friendly", changing the prompt in place, and then send it.
All my MCPs are coded in Clojure (mostly babashka)¹ - because (like I said) giving an AI a Lisp REPL makes much more sense (maybe even more than using a statically typed language). I had to employ a few tricks so all the tools, skills and instructions can be shared between gptel, eca-emacs, ECA Desktop, Claude Code CLI, Claude Desktop App, and Copilot CLI. Even though I mostly use gptel and ECA, it's good to keep other options around, just in case.
All the AI-related Emacs settings are in my config².
Is this helpful, or do you want some more concrete examples?
Big same. I have been doing a lot of clojure development, and hooking up my app to a live REPL has given me an absolutely fantastic feedback loop for the LLM. I don't think a lot of people understand what they're missing.
> I don't think a lot of people understand what they're missing
Very true. There's an enormous tacit knowledge gap. Check this out:
I have to use Mac for work. My WM is Yabai, which is controlled via Hammerspoon (great tool on its own), which means I can use Fennel, which means I can have a Lisp REPL. MCP connected to that REPL can query and inspect every single window I have on my screen. It can move them around, it can resize them, it can extract some properties of them. It's figuring out stuff like: "pick a selected Slack thread from the app and send it into an Emacs buffer", or "make my app windows work like Emacs buffers" - pick from the list and swap it in place. Or "find the HN thread about retiring from Emacs among my browser tabs and summarize the content"...
Never in my life have I been more grateful to my younger self for grokking the philosophy of Lisp. Recent months have only reinforced my firm belief that this 70-year-old tech is truly everlasting. Thank you, John McCarthy, for the great gift to humanity, even though so weirdly underappreciated.
I am really loving working on a fun Elisp project with pi, a minimal and very extensible agent. I have the agent use emacsclient to control my session, showing me code, running magit ediff for me, testing, formatting, reloading -- it's all working great.
I'm still exploring all the ways the agent and I can collaborate using Emacs as a shared medium, but at the moment am super optimistic about it.
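For anyone wanting to try a similar setup: the Emacs side is just the server plus whatever entry points you choose to expose. A sketch - the helper name here is made up, not part of pi itself:

```elisp
;; Start the Emacs server so an external agent can reach this session.
(server-start)

;; A hypothetical entry point the agent could invoke with:
;;   emacsclient --eval '(my/agent-show-diff "old.el" "new.el")'
(defun my/agent-show-diff (file-a file-b)
  "Open an ediff session comparing FILE-A and FILE-B."
  (ediff-files file-a file-b))
```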
Do you have credentials anywhere within reach of that session? Can you open your bank account in a browser ... within reach of that session? Are your contacts available within reach of that session? What about personal notes/emails/goals or other sensitive information? That people think these can't be added together in one very socially/monetarily destructive fell swoop is ... telling.
Ignoring obvious bad-actor concerns from just giving root to your whole life to an LLM running on someone else's server, LLMs themselves can act in ways that are extremely counterproductive to their organization/host/etc.
A quote/warning I learned in the late 90s is just as relevant today, "Computers make very fast, very accurate mistakes."
Anything an LLM does on your computer should happen in its own account. No sudo config, of course - or at most one that is strictly limited to what you want to allow it to do (there's risk here, as many programs have non-obvious paths to general command execution).
It should have zero access to your private home directory or your system configs. You can have access to its files of course. That's the beauty of separate accounts and permissions.
So? My terminal has the same full system access. If I didn't use Emacs, I'd be using Claude code in it. It's contained locally on my computer, I don't see any problem here. I use Emacs like my OS-layer. Why would I complain that my OS has access to something? It would be weird and annoying if it's the opposite.
I don't think it's very reasonable to use Claude Code on a computer that has credentials without some kind of sandboxing or validating every command it runs, at which point I'd rather do things manually.
Yeah, that's incredibly unsafe. You made a footgun machine and you're firing it with no shoes on. Don't run that on any machine with credentials you care about.
At the very least, run it in Docker. It's not a security tool, but it's at least some kind of guardrail against data loss and exfiltration.
Ah come on, guys, let's talk pragmatically. "Malleable editor as an OS layer" has benefits beyond subjective reasoning. Emacs has had M-x shell-command and arbitrary elisp eval forever. A metacircular MCP isn't some new capability class. Even if I didn't use Emacs - my shell, my editor, my browser extensions, my npm install, my VSCode plugins, my curl | bash from yesterday - they all have the same access. Singling out the LLM in this context is like selection bias.
Of course, reasonable mitigations are a must - just like for any other tool: narrowing MCP scope, tool routing rules, read-only git defaults, etc. But "Docker or nothing" is a lazy answer - Docker-for-everything has real costs: friction, broken integrations, worse ergonomics.
Practical security is all about staying in the goldilocks zone. You shouldn't get relaxed about the basics - sandboxing, 2FA, password managers are worth doing - but you can get paranoid about endless things, and against a targeted, well-resourced attacker your sandboxing posture is mostly irrelevant anyway. The interesting attacks bypass the threat model entirely. Read up on Ben Nassi's team's research¹ - a pretty cool example. There are multitudes of other ways in, and your Docker container won't stop them. Defend against the boring 99%, and accept that the 1% is someone else's problem (or a much bigger problem than your dev environment).
TLDR LLM Summary: Researchers showed that a device's power LED subtly flickers in brightness and color while the CPU performs cryptographic work, and these flickers leak information about the secret key. By pointing an ordinary video camera (an iPhone or an internet-connected security camera) at the LED and exploiting the camera's rolling shutter, they boosted the effective sampling rate from 60 to 60,000 measurements per second, enough to do cryptanalysis. Using only this video footage, they recovered full ECDSA and SIKE keys from a smartcard reader and a Samsung Galaxy S8, with no malware on the target devices.
It's your computer and you can do whatever yolo nonsense you want, my dude, but put those goalposts back where they were.
"Don't run that shit on a credentialed box with data you care about" is addressing real threats, not some goofy nation state thing or abstract security research.
If you let the footgun machine constantly generate new code and run it on your computer, you're just asking for data loss and bad shit to happen.
Docker isn't a great solution but it at least doesn't let yolo code delete files or access env vars or read the contents of .ssh/
> my browser extensions, my npm install, my VSCode plugins, my curl | bash
Yeah, and you shouldn't yolo those, either lol. If they didn't come from a trusted source, you need to read through them. If you don't want to, don't use them. That's not paranoia, that's, like, normal.
> If you let the footgun machine constantly generate new code
Are you talking about autonomous LLM projects that automatically write code? Yeah, no shit, I wouldn't run anything like that directly on any machine without sandboxing. My typical LLM use inside my editor is never in self-driving mode, there's not even cruise-control - I tell it exactly when to write, where to write and how to do it. Automated scripts never get run by LLM and don't get to run at all without prior precise and meticulous inspection. I'm not moving goalposts - at worst we're in disagreement on the level of pragmatics vs. paranoia, that's all.
I don't even get why people are so crazy about LLMs generating code - on both sides. LLMs for me personally are such a great tool for investigating things, for finding things, for bridging the gaps - the stuff that happens 10K feet above code writing. By the time I'm done gathering the details, code generation becomes an almost insignificant touch of the whole endeavor.
One example: it disables the default Ctrl-F search function, but its own search function is subpar (no match counts/hlsearch, e.g.) and often clashes with a website's built-in search (on GitHub, e.g.).
It doesn't work on the default newtab either, and changing the default newtab somehow makes opening a new tab slower (that's FF's fault, I guess)…
You can type /phrase and then press ctrl-F for the full search bar. A more annoying problem is that some websites capture / presses, making it harder to initiate a page search. Then you have to shift-esc ctrl-f to search.
Cool to see you in the wild. For me it does work out of the box; however, some sites will break or have navigation that's too complex, especially with iframes, and I'll have to swap to a mouse, which is a bummer. I understand that's an inherent limitation of the tech, since the web today isn't built for it.
To be honest I find the use of a separate browser at work a good way of forcing separation - all "work stuff" is done in one browser, and all "personal stuff" is done in a different one.
This time around I'm using Chromium for personal stuff, and Firefox for work-stuff. I do more work-related browsing, so having the vertical tabs in firefox meant that was the better browser to use for official stuff.
(In my previous job I used safari for work, and firefox for personal.)
I used Firefox for 20 years, loved it, defended it. But they just kept removing features that I was used to, and I ran into some bugs with popular websites and decided to hang it up. Currently on Brave and fully convinced it's the new Firefox.
I am running Ubuntu as my desktop operating system. I would never do this without an LLM to do the work of keeping it functional for me. Today, Rise of Nations wouldn't launch. Never had that problem before. Seems the driver for 32-bit games and my Nvidia GPU weren't getting along after an update. Codex was called in and solved the problem for me in about 5 minutes. I just copied and pasted the Steam log and let it tell me what to do. Tadah.
I'm actually excited about the potential for a future where local agents help improve the operating system experience as I go by making changes based on my use case. All local, of course. I do not want to trust a cloud provider with my use cases/behavior on my computer so they can sell me more ads...
Does anyone else not understand what people mean when they refer to the "friction" supposedly inherent to these power user tools? Almost none of the configs/scripts/etc I use for my heavily-customized and terminal-heavy setup get changed for years at a time.
If you are frequently having to use other computers, a heavily customized setup has much more friction: either you set up each machine the way you want, or you remember how to do things without all the customization (if you can't customize, or it isn't worth the time).
When I graduated college I used Dvorak and Emacs on Linux. Six months of having to use shared Windows lab computers extensively beat me down to surrender all of those points - my brain just couldn't handle switching, so I conformed my desktop to match. Then later I switched jobs to a group that was all Unix, but of many varieties most of which only had vi, not Emacs. And so I learned vi. Sometimes minimizing friction means going with the flow.
Arguably NixOS is the most config-heavy platform, but it solves the pain point of having to reconfigure on different systems. Especially in the LLM era, where I can configure Emacs and my OS declaratively.
How do you nixify your Emacs configuration? I've looked into it but at the time the advice was to specify dependencies both in Nix and in .emacs.d, which seemed redundant to me. Is there something like callCabal2Nix for Emacs?
Edit: Or do you mean "declaratively" in the sense of using something like straight.el?
The previous post alludes to Evil being the long term plan. That seems sensible: it ought to be easier to use an implementation of Vim in Emacs rather than port much of Emacs to independent applications.
Yet the author ended up doing the latter and it's not really made clear why. Why?
I used Emacs full-time for many, many years. Then I switched to Vim or other editors with Vim modes, also for many years. I have to be honest, I don't see a particularly clear winner between them. Modal editing is a bit unusual in many ways. There are some things that it certainly makes easier, but I personally found that the overall process of editing and writing code in real time was more efficient for me in a single-mode Emacs.
> I don’t see a particularly clear winner between them
Because deep down they are incomparable categorically. Separate the tools from the foundational ideas and you see the very different value. Vim-model of text navigation is fantastic, practical, brilliant idea. Once you grok it - you can take it anywhere. You can use it in your editor, browser, terminal, WM. Emacs is rooted in another, even more brilliant idea of practical notation for lambda calculus. These ideas have no overlap. But understanding the philosophy of each (ideally both) could open so many different possibilities.
Evil is not just great. It's the only "true" vim layer outside of vim/nvim worth commending. Gary Bernhardt once said "there's no such thing as vim mode", in the sense that every single attempt to emulate vim outside of vim/nvim is a pale imitation. None of them - not a single VSCode vim plugin, not Sublime's, not IdeaVim in IntelliJ, not the browser extensions - is without some glaring omission. Evil plus the evil plugins in Emacs, meanwhile, are not just "close" - they are better than the source of inspiration. Gary probably just didn't know that.
My vim muscle memory has paid off more for me than my emacs muscle memory. Emacs was the better editor, though. Anything that doesn't have Vimscript is an automatic winner IMO.
I use ^a to go to the beginning of a line and ^e to go to the end nearly everywhere. Many Emacs keystrokes are so pervasive that they're not often thought of as Emacs keystrokes.
My usage of Emacs is so vim-like that I've tried switching a few times. Vim is definitely faster, and overlays and cursor placement are much simpler and more intuitive. But there were still feature gaps and configuration issues that prevented full adoption.
You may get good bang for your buck out of neovim. With only a very minimal set of plugins, it has replaced all other IDEs for me. (They're also making good progress Sherlocking their core plugins, so the future is bright for those of us who dislike plugins for core functionality.)
can you elaborate? Heavy vim user here, have considered using emacs in vim mode to quell a decades long nagging curiosity. Just need a compelling nudge.
I don't know how much this applies to everyone else, but the ability to display images inline is really nice for notetaking. I cannot write properly, so org-mode (a notetaking tool that can export to a variety of formats) with embedded rendered latex equations makes it really easy to take notes and write things up in a plaintext format without needing to export every 30 seconds to view equations. The ability to embed code that can actually run is also very nice.
Emacs is primarily a platform for developing Lisp applications. Lisp applications are immensely hackable, meaning an Emacs configuration can be tailored in detail to specific desires.
There is also an ecosystem of applications for Emacs that are really good. They don't require you to use Emacs as your editor (you can run, say, Magit as a standalone instance) but if you do, they integrate really well with each other.
I've been retired from Emacs for several years now, but I'm still looking for a Magit replacement that is independent of my editor. VSCode's magit extension is really good, but I split my time between IntelliJ and VSCode.
#!/bin/sh
# open a magit-only Emacs in the terminal when inside a git repo
if [ "$(git rev-parse --is-inside-work-tree 2>/dev/null)" = "true" ]; then
  exec emacs -nw -q --no-splash -l "/path/to/magit-init.el"
fi
It worked well for me because I can reuse all my keybindings (evil + leader keys with `general`) and my workflow is fully in the terminal. (I have since moved on to Jujutsu, and `jjui` is filling this gap for me right now, but it's not quite a magit-for-jj).
Honestly, magit is just a masterclass in UI design. It makes most everything incredibly easy to do while still giving you the ability to tweak things if you need to.
The author is the developer of the RSS reader Elfeed, which a lot of Emacs users use several times a day. Though the article talks about a vibe-coded wxWidgets-based GUI application called Elfeed2 that he wrote as a replacement, Emacs aficionados would be loath to leave their Emacs environment and switch to that. Hopefully Emacs elfeed finds a new maintainer.
I tried Elfeed2 immediately after the announcement; well, it's nowhere near the experience of elfeed in Emacs. Elfeed2 doesn't load content for most of my feeds; elfeed does. I also integrated elfeed-tube, which shows previews of videos and their transcripts, making it a no-brainer to get a summary without watching the whole video.
My understanding of the context is the author is no longer using Emacs, and is very excited about the productivity from AI.
My experience with LLM technologies is that they do make generating the code a really quick part. It may be reasonable to take much more time to specify things up front (rather than emergently, as you would by hand). -- I mean, if you've got a well-crafted description of what you want, you'll be able to get a working program MUCH quicker with an LLM, today, compared to writing it out by hand.
Would it really be surprising/shocking if an LLM was able to rewrite (most) features from an existing software, to a new software?
It seems like the reality today is, we've gone from a maintained software in a niche ecosystem with happy users, to a more fragmented one where everyone has an LLM write their own half-baked one.
Probably because it's closer to a reimplementation than anything else, and in Emacs you can use libraries with much less friction than in self-contained languages.
Good. People here are blind to the CADT model. They aren't even aware that with Elfeed, for instance, you can set a hook on a feed that calls lingva.el functions to translate feeds written in, say, Spanish or German into your native language on the spot.
Try doing that with Elfeed2.
Vi/Nvi2 users can almost do the same with Unix pipes and apertium/translate-shell/some lingva CLI translation tools over the whole document/a regex selection/lines, a la Emacs. So can sfeed users, who, depending on the feed, can pipe the plumber's output (or just hack the scripts) to any other translation tool:
git://codemadness.org/sfeed
Heck, a few years ago I could reuse Telega.el's (a Telegram client) translation functions on non-Telega buffers, translating a text guide on the spot. So, did the blogger actually win something?
I have long struggled to learn Emacs and use it effectively. Just for the fun of it, if I were to use Claude as my teacher, how could I ask it to teach me to use Emacs? I don't like to ask questions and then go back and try things out. I want it to be a driver that assists me with the usage. Has anyone tried such an approach to learn Emacs?
There's a nice built-in tutorial for actually editing text with it. Press control-h then t to launch it. But that's just for using the editor. For actually configuring it, I've found that Opus 4.6 (inside Droid) is exceptionally good at tweaking my init.el.
Yesterday I typed "Set the default YAML indentation to 2 spaces." It came up with
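The snippet isn't preserved here, but for YAML indentation it would presumably be along these lines - `yaml-indent-offset` is yaml-mode's usual knob (illustrative, not necessarily Opus's exact output):

```elisp
;; Default YAML indentation: 2 spaces (yaml-mode)
(with-eval-after-load 'yaml-mode
  (setq yaml-indent-offset 2))
```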
Droid is my employer's alternative to Claude Code, which I personally prefer. But the general point is that LLMs are really good at Emacs Lisp these days.
I've started using Droid inside Emacs via the agent-shell package I learned about here a few days ago (https://news.ycombinator.com/item?id=45561672). It handles quite a few other agents, too.
Don't try to "learn Emacs". Grok the foundational layer - Lisp. Emacs is not an editor - it's first and foremost a Lisp interpreter with a built-in editor. You need to get two things: the REPL (evaluating Lisp expressions in place) and structural editing (moving, expanding, transposing expressions).
You can start with vanilla Emacs with zero config and Claude/Copilot/Codex/etc, running separately. Your first goal is to have the LLM running inside Emacs - ask the LLM how. It probably will recommend gptel - as one of the most popular and robust choices, go with it.
Once you get LLM tools to modify Emacs state from within, you can just go crazy. You can tell it to change colors and fonts, ask any stupid questions, whatever. It will do it without missing a beat - no restarts, no waiting, no copy-pasting - just flow.
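If you do land on gptel, a minimal starting config looks roughly like this - the backend constructor and model name follow gptel's README, but treat the specifics as placeholders for your own provider and key:

```elisp
;; Minimal gptel setup sketch; swap in your provider of choice.
(use-package gptel
  :config
  (setq gptel-backend (gptel-make-anthropic "Claude"
                        :stream t
                        :key (getenv "ANTHROPIC_API_KEY"))
        gptel-model 'claude-sonnet-4-5))
```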
My advice is to use a base, vanilla Emacs for a little while to learn where its boundaries go, before installing a bunch of modes. That makes it easier to troubleshoot problems later.
If the author is on here, I'm curious why he chose wxWidgets instead of Qt; I'd be surprised if it is that much lighter weight than Qt. (I even wrote my own cross-platform toolkit with "more lightweight" as one of the reasons, and if you use all the features, it weighs in at about the same size as Qt, I think.) Also, the last time I used wxWidgets, many years ago, it had a clunky MFC style to it and limited features, along with a rather Windowsy look and feel. Have those things changed?
My experience with wxWidgets based apps is that they tend to not handle DPI scaling well. Audacity is a good example, IIRC that's one of the reasons they're moving to Qt.
I was wondering how people feel about this trend. LLMs allow you to free yourself from foundations (frameworks, programmable programs) and just generate any support layer you want from old or new libs. This is all very understandable, yet I find it a loss: in the Lisp world, having a core model and semantics shared by all the upper layers means ease of reuse (for instance, people leverage Emacs calc classes in other places); LLMs allow for easier fragmentation.
I also suspect it allows easier consolidation. Moving from a deprecated lib to a new (and better) one for example.
Implementations will likely homogenize a bit as well, but on the other hand boy am I glad not to see an increasing amount of bizarre naïve hand-rolled implementations for some things.
Why? What makes Spacemacs so different/special that it requires some kind of distinct opinion that would be extremely valuable? Spacemacs is the same old Emacs with some out-of-the-box customizations on top - there's nothing fundamentally different about it.
Spacemacs is not a "batteries-included version of Emacs". Say that and people may get confused. It's not a "different version" of Emacs - it's not Emacs at all - it's an Emacs config you can configure, a meta-config. It is more like a collection of recipes you can run on Emacs. That is an important distinction.
Hence my question: what could Wellons (who's a seasoned Emacs veteran) ever say about Spacemacs (or Doom, which in this context makes no difference)? What kind of views would one be interested to hear? Using the Space key as the leader key, or something about the local-leader key; or vim navigation/Evil in general; or the modules/layers architecture of the config? He said in that post you shared that he believed he'd eventually end up using Evil - he doesn't need Spacemacs for that.
Spacemacs is great for beginners, for people who don't want to deal with learning Emacs's native bindings - they are legitimately confusing. For someone like Chris, it makes little sense; they'd probably just add modal-editing packages to their existing config. Even so, Spacemacs and Doom are still valuable - one can find many interesting gems there.
Also, these projects may give you good discipline for structuring your keys mnemonically - everything file-related would be under "SPC f", search stuff under "SPC s", etc.
"The" future of software engineering is a silly thing to predict. I might predict one substantial change is that we get our house a little more in order about universities and the private sector distinguishing between computer science, software engineering, and software development. Obviously they are not cleanly separated[1], but LLMs will affect each subfield very differently.
- The impact on computer science seems almost entirely negative so far: mostly the burden of academic wordslop, with the additional negative impact of AI sucking all the air out of the room. What's worse is how little interesting computer science has come out of the biggest technological development in computing in many years; in fact, there has been a terrible and very sudden regression in scientific methodology and integrity, with people rationalizing unscientific thinking and unprofessional behavior by pointing to economic success. I think it'll take decades to undo the damage; it's ideological.
- The impact on software development actually does seem a bit positive. I am not really a software developer at all. It always felt too frustrating :) However the easing of frustration might be offset by widespread devastation of new FOSS projects. I don't want to put my code online, even though I'm not monetizing it. I'm certainly not alone. That makes me really sad. But I watched ChatGPT copy-paste about 200 lines of F# straight from my own GitHub, without attribution. I'm not letting OpenAI steal my code again.
- Software engineering... it does not seem like any of these systems are actually capable of real software engineering, but we are also being adversely affected by an epidemic of unscientific thinking. Speaking of: I would like to see Mythos autonomously attempt a task as complex and serious as a C compiler. Opus 4.6 totally failed (even if popular coverage didn't portray it as such):
> The resulting compiler has nearly reached the limits of Opus's abilities. I tried (hard!) to fix several of the above limitations but wasn't fully successful. New features and bugfixes frequently broke existing functionality.
"Future of software engineering" folks should stuff like this in mind. What model is going to undo Mythos's mess? What if that mess is your company's product? Hope you know some very patient humans!
[1] They should have different educational tracks. There is no reason why a big fancy school like MIT can't have computer scientists do something like SICP and software engineers do the applied Python class. Forcing every computer professional into "computer science" is just silly; half the students gripe about how useless this theory is, the other half gripe about how grubby the practice is. What really sucks here is that I think Big Tech would support the idea, we're just stuck in a weird social rut.
I feel like LLMs[1] are going to cause a kind of "divorce" between those who love making software and those who love selling software. It was difficult for these two groups to communicate and coordinate before, and now it is _excruciating_. What little mutual tolerance and slack there was, is practically gone.
Open source was always[2] a fragile arrangement based on the kind of trust that involves looking at things through one's fingers (turning a blind eye may be more idiomatic in English), and we are at the point where you just have to either shut your eyes, or otherwise stop pretending that the situation can be salvaged at all.
Just a thought I had: some people think that LLM-shaming is declasse, and maybe it is, but I think that perhaps we _should_ LLM-shame, until the AI-companies train their LLMs to actually give attribution, if nothing else (I mean if it can memorize entire blocks of code, why can't it memorize where it saw that code? Would this not, potentially, _improve_ the attribution-situation, to levels better than even the pre-LLM era? Oh right, because plagiarism might actually be the product).
[1]: Not blaming the tech itself, but rather the people who choose to use it recklessly, and an industry that is based almost entirely on getting mega-corporations to buy startups that, against the odds, have acquired a decent number of happy-ish customers, that can now be relentlessly locked-in and up-sold to.
Toss them, because the level of damage they have done is astounding. Tons of companies are still fixing the losses from vibe coding.
What we need is better code analyzers, lexers and the like. And LLMs are practically the opposite, because by design they can never, ever give a concise answer. Worse, they rot over time.
> Tons of companies are still fixing the losses from vibe coding.
Well, you have to separate "future of" from "ensuing damage". This is similar to the fishing industry. Fishermen in the past used spears, rods, small nets, nowadays annual national catch statistics are reported in kilotonnes. They are destroying the ocean floor, causing massive extinction of species, causing irreversible damage. Yet, you can't argue looking 100-150 years back that industrial fishing was not "the future of the fishing industry". That is also why programmers won't ever disappear because of AI progress. Just like we still need fishermen, we'd need programmers. The sad truth about this is that soon we truly may have no need for fishermen, because there's no fish left in the ocean.
Hmm... it's hard to imagine that fishing with dynamite ever caused species extinction; trawling industry definitely did. I don't think it's a fitting analogy, but I get what you're trying to say. I'm not arguing about the damage. The damage this human invention will cause is guaranteed. Just like plastics have. The answer to that is not "ban plastics completely" - kinda late for that, innit? The answer is "put resources into plastic research, make safe plastic possible". Maybe if we make safe, better AI, it will help with the plastic? If there's anything I've learned about humans - first, we probably cause a lot of damage.
> With my newly-acquired superpowers I could knock out the last two pieces in a few days’ work
From the linked post:[0]
> I left an employer that is years behind adopting AI to one actively supporting and encouraging it. As of March, in my professional capacity I no longer write code myself. My current situation was unimaginable to me only a year ago. Like it or not, this is the future of software engineering. Turns out I like it, and having tasted the future I don’t want to go back to the old ways.
It's deeply distressing to watch people fall into AI psychosis. Being smart, accomplished, or experienced is no defence.
After the bubble pops and the industry realises the damage these tools can do to people, folks like the author will have to confront that they were taken in by a lie. Many won't be able to confront that.
Now I may be old, but whenever we put a lot of faith in unaccountable megacorps it sure seems to have backfired a lot (remember when Amazon removed 1984 from people's libraries?). As long as a model running locally on a regular laptop bought from the supermarket isn't good enough I'm gonna remain sceptical about current AI.
There's also ethical and environmental considerations, but let's see if we can walk before we try to run.
It's not AI psychosis, you're interpreting what he said to the extreme.
Anyone who has actual corporate team lead or management experience understands AI as effectively a junior dev who doesn't have great persistent memory. These devs using AI are reviewing, guiding, and validating the work given to them by AI just as they would from a junior dev.
The inverse of your statement is more apt; it's distressing to see people so angsty about AI usage. There are going to be skilless vibecoders and then there are going to be experienced devs (like OP) who figured out their AI workflow to multiply their productivity 2-5x.
What the future holds for AI model pricing-- that is a valid concern. But I don't think that's what you intended.
> There are going to be skilless vibecoders and then there are going to be experienced devs (like OP) who figured out their AI workflow to multiply their productivity 2-5x.
Are you sure OP belongs in the second group? He explicitly said he doesn't read all the code generated by his AI:
> I have not read most of the code, and instead focused on results, so you might say this was “vibe-coded.”
this is just like being promoted from developer to manager. some people like it, some don't. with AI there is another dimension: some people like managing machines instead of people, some don't.
it's not for me. i don't want to stop writing code. i don't mind managing people, but i don't want to manage machines (at least not with as imprecise an interface/outcome as AI provides). consequently AI may be fine for this person, but it is not for me.
> It's deeply distressing to watch people fall into AI psychosis.
It's unclear what you're saying here... Yes, AI-induced psychosis is a real problem and the frontier labs' mitigations are ineffective, to put it mildly. But using AI as a coding tool doesn't have anything to do with psychosis.
AI psychosis is having a toxic relationship with a chatbot as though it were a real person. It has nothing to do with engineering. You're muddying your own point by conflating all LLM use with some kind of delusion. There is a lot of nuance in this space and you're not doing yourself any favors by ignoring it if you're an engineer. There is no bubble pop, other than a straight up apocalypse, that is going to put this genie back in the bottle. Models are trained. Tools are built. There isn't a single industry that cares about artistry more than efficiency. It's here to stay, it's getting better, and if you don't know how to use it, you're going to have trouble finding work.
> Being smart, accomplished, or experienced is no defence.
Perhaps you're confusing "not using AI" with "not being dependent on AI", those are very different things.
The edge isn't from avoidance, it's from using AI as leverage on top of real skill. A strong developer + AI beats a strong developer alone, and massively beats a weak developer + AI. The edge doesn't come from avoiding a tool - it comes from being the kind of person who doesn't need it but uses it anyway. That's leverage. Refusing to use it is just leaving leverage on the table to make a philosophical point.
> After the bubble pops
People like Chris (who is an enormously capable engineer) would just move onto different tools, different techniques and paradigms. That is the essence of being a software developer - many of us choose this path specifically because it forces you to learn something new, every single day. That is (I suspect) also another reason why Wellons decided to migrate away from Emacs - he just learned it so deeply, perhaps it's no longer giving him the satisfaction of learning. Which to be honest is hard to believe - Emacs is a boundless playground, there's always something new to learn there.
I just wonder how jobs like that won't replace their employees. Seems too good to last. In a few years OpenAI will just sell $1,000 per month Human-free Agent Coding for businesses.
Saying they have psychosis is a rude exaggeration.
Not writing code isn't the same as vibe-coding. You can stay on top of AI, make it rewrite the things that look bad, make it refactor until you're happy with how things look, etc....
Maybe a lot of people who are doing that aren't admitting that they've stopped writing code, but when all you're ever doing is manually fixing a few lines, or moving blocks of code to more sensible places, fixing jumbled parameters in a call and such, you're not really writing code anymore. You're now a chef in a kitchen yelling at assistants and just touching things when dealing with communicating a correction to one of those dimwits is more frustrating than just doing it yourself.
You still have to be a cook to be a chef, though. But the reason I say that AI is dumb is because I tell it to do things, it does them in a dumb way, and I complain at it and tell it to write it in a sensible way. It screws that up, and I tell it to do it again, and not to screw it up. I'm still not coding. If it goes into a loop of nonsense, I touch things with the intention of doing just enough to knock it out of that loop (or rather keep the new context from falling into it.)
"After the bubble pops" we might see that a lot of new chefs can't actually afford assistants. But just as likely, the overbuilt (government-subsidized directly and through policy) capacity might end up getting written off, and at the cost of electricity and maintenance costs could stay reasonably good. Or algos improve. Or training methods improve.
> the reason I say that AI is dumb is because I tell it to do things, it does them in a dumb way, and I complain at it and tell it to write it in a sensible way. It screws that up, and I tell it to do it again, and not to screw it up. I'm still not coding. If it goes into a loop of nonsense, I touch things with the intention of doing just enough to knock it out of that loop (or rather keep the new context from falling into it.)
it is inconceivable to me how anyone could ever enjoy working like that. but whatever floats their boat.
No. AI is a must for software development. It's non-negotiable. The productivity gains are too great. The era of 100% human-written code is over. People will still do it as an idle curiosity, for personal projects only they intend to use. But even those open source projects with significant user bases that forbid the use of AI (like, afaik, NetBSD) will be eclipsed by those that support it in terms of features, capability, and security. And the commercial world? Forget it. You cannot keep pace with your employer's expectations unless you learn to use these tools well. This is not up for debate. It's reality.
Plenty of accomplished devs are getting good results and accomplishing tasks with unheard-of speed using AI, so if you're still not, that's a PEBKAC. You are not using the tools correctly. Figure it out before you complain.
> "No. AI is a must for software development. It's non-negotiable."
Absolutist rubbish.
> "But even those open source projects with significant user bases that forbid the use of AI [...] will be eclipsed by those that support it in terms of features, capability, and security."
As is this. Whether a language model is relevant to a project, open source or otherwise, is of course heavily dependent on its nature (ethics, use case, deployment, working environment/culture, et cetera).
LLMs may be a must for programming, but not for engineering. Writing code is the easy part once you figure out what actually needs to be built in the first place.
Indeed. But figuring out what actually needs to be built is the systems analyst's job, not the programmer's. It takes people skills and holistic thought, something programmers are generally poor at (and AI certainly is no good at, at least not today).
I know how to do things by hand, man. But the writing is on the wall: that skill is going the way of writing programs on punchcards. And there's little we can do about it because the economics in favor of LLMs are like laws of physics.
Yes, model collapse is gonna suck. But LLMs are not just left to self-train, they are guided by human researchers who are going to find ways to groom and direct the models to avoid collapse. They can make billions by shipping better models, so why wouldn't they invest a lot of effort in that?
You still don't get where I'm coming from. The AI takeover of programming is inevitable, and I hate it. But my feelings don't make the brutal economics go away. A skilled developer can now accomplish in days what used to take weeks or months with proper use of these tools. Period. I know this because of the absurd number of skilled developers here, on X, Mastodon, and elsewhere—including OP's author—saying "with AI I'm accomplishing in days what used to take me weeks or months". And if you have the opportunity to make use of the tools, you have to be stupid not to, or you're cutting off your nose to spite your face.
You should’ve started with this. Take a really deep breath, take your phone, find closest park, go slowly there (don’t prompt LLM on the way), find a green patch on the ground (it’s called grass) and touch it.
Contrary to you, I've been playing with the AI HOWTO stuff from TLDP forever - from Markov chain based chatbots to genetic algos and neural networks - and I know the limits of LLMs and how they rot from retraining on their own data. They can't extrapolate. Period. In every cycle they get dumber by design unless there's new human-curated content. Go try to explain that to corporations having their copyrighted code stolen away, be it GPL or proprietary.
I find all those arguments unconvincing. The right 10,000 lines of code can be worth a billion dollars. The idea that it would be somehow uneconomical for me to take the time to get it right feels like utter nonsense. I don't have to have much of an edge over an LLM to come out on top once you start to distribute the resulting product. Three months of my time costs $25,000 or so (hey, I'm in Europe, adjust as you see fit), if I can make something just a little bit better than AI Albert who can whip something together for a tenth of the price, my time will pay for itself once you have modest amounts of revenue from it.
And I'm fully convinced that what I do will not just be a little bit better than what AI Al makes. It will trounce it in all quality criteria. But of course, coincidentally with the rise of AI assistance, software quality has completely disappeared from the conversation. I wonder why.
I know this is the joke, and I know Evil is the jokey reply, but ... both sides of the joke carry a grain of truth, as good jokes do.
I know a lot of people become comfortable with the default editing tools in Emacs, and many of them are good, but on the whole, vanilla Emacs does not ship with a great editor.
The Vim family makes up amazingly well designed editors.
Evil is a Vim implementation in Emacs. It is the best of both worlds, and not just on paper. It actually works.
A big loss for the Emacs community! emacs-aio is great!
I see the author is spring cleaning:
> I've turned over a new leaf (no more Openbox, Tridactyl, Xorg, xterm), and so some of these things I no longer use. On Linux I now use KDE on Wayland with a minimally-configured browser. I miss the power user features, but I do not miss the friction and constant maintenance.
https://github.com/skeeto/dotfiles/commit/df275005769b654618...
> I am no longer using Mutt nor running my own mail server. In general less terminal stuff for me.
https://github.com/skeeto/dotfiles/commit/e331e367c75f66aaa9...
LLMs have inspired a similar change in me: with a big change in how I work, I feel I can and should be more flexible with adopting new tech, which involves freeing myself of previous choices.
> LLMs have inspired a similar change in me
FWIW, the age of LLMs made me build a deeper, more intimate relationship with Emacs, because it's a Lisp REPL loop with a built-in editor, not the other way around. When you give an LLM a closed loop system where it can evaluate code in a live REPL and observe the results, it stops guessing and starts reasoning empirically.
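The closed loop described above can be sketched in miniature. This is a hypothetical Python toy (the actual setup being discussed is elisp, and `eval_tool` is a made-up name): a single tool the model can call to evaluate an expression for real and observe the result, instead of guessing what the code would do.

```python
# Toy sketch of a closed eval loop: the model proposes an expression,
# the tool evaluates it for real, and the observed result goes back
# into the conversation as ground truth.
def eval_tool(expr: str) -> str:
    """Evaluate a Python expression and return the result or the error."""
    try:
        return repr(eval(expr))  # never do this with untrusted input
    except Exception as e:
        return f"error: {type(e).__name__}: {e}"

# The model can now reason empirically instead of guessing:
print(eval_tool("sum(range(10))"))  # observed: 45
print(eval_tool("1 / 0"))           # observed: error: ZeroDivisionError: ...
```

The key property is that errors come back as observations too, so a wrong hypothesis gets corrected by the environment rather than papered over.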
LLM that I run inside Emacs can fully control the active Emacs instance. I can make it change virtually any aspect of it. To load-test things, I even made it play Tetris in Emacs. And not just simply run it, but to actually play it without losing. It was insane.
Also, Emacs is all about plain text - you can easily extract text from anything - from the browser, terminal, CLI apps, Slack, Jira, etc., and you can do that on your own terms - context can appear in a buffer, in your clipboard, become a file or series of API requests. That is really hard to beat.
Absolutely. It doesn't have to be an either-or. I use gptel and org mode when I want to be really hands-on driving the development. It's a very different mode of interacting with models, and the way newer models are trained to play nice with harnesses makes them very obedient.
https://poyo.co/note/20260202T150723/
Interesting. Tnx.
In case anyone else wondered about using gptel to edit thinking (e.g. via Qwen3.6's `preserve thinking`), [1] explains:
> In a multi-turn request, from the time you run `gptel-send`, everything the LLM sends is passed back to it [...during tool calls...] includes multiple reasoning blocks. [...But...] subsequent gptel-send calls read their input from the buffer contents (or active region, etc), so the reasoning blocks in the buffer will not [] be sent as "reasoning_content".
But in org mode, those are apparently `#+begin_reasoning` blocks (`gptel-include-reasoning`?), so editable thought might be an easy addition?
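If you want to experiment with this, the relevant knob appears to be `gptel-include-reasoning` - treat the following as a sketch to verify against your gptel version's docs, not gospel:

```elisp
;; Sketch: controls what gptel does with reasoning blocks in responses.
;; Verify the accepted values against your gptel version before relying on this.
(setq gptel-include-reasoning t)               ; keep reasoning in the response buffer
;; (setq gptel-include-reasoning nil)          ; drop reasoning entirely
;; (setq gptel-include-reasoning "*reasoning*") ; divert it to a named buffer
```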
A caution, fwiw, that any llms which respond with interleaved content and reasoning blocks, currently only work when not streaming, and fixing that is non-trivial.[also 1]
[1] https://github.com/karthink/gptel/issues/1282
Is this your site? I cannot find an RSS feed for it. I'd like to subscribe.
Same for me!
My .emacs config has improved and I wrote my own Emacs based coding agent https://github.com/mark-watson/coding-agent
Same here. Emacs has been the stable editor through all kinds of language changes, tool changes, and IDE changes. Emacs is great with LLMs, as LLM work is mostly text-related, and Emacs is great at capturing and dealing with text.
So much this. Lisp can do things other languages have a hard time with. I think a resurgence is in order.
Can't agree more. Lisp was discovered/invented for the purpose of AI research. Of course, modern neural nets and transformers are a big departure from McCarthy's vision of AI - logical, interpretable, symbolic. However, if the current wave of AI hits a wall - and many serious researchers think it will, or already has at the margins - there's growing interest in neurosymbolic approaches that combine neural nets with symbolic reasoning. That's closer to McCarthy's original vision, and Lisps are genuinely well-suited for it.
Let's be honest: Lisp probably won't ever get bigger than Python, unless Python for whatever reason starts dying on its own. But if AI ever gets serious about interpretability, formal reasoning, program synthesis - all the stuff Lisp was built for - it just might quietly become relevant again in research contexts, without ever reclaiming mainstream status.
Scicloj has been building out a serious ML stack in Clojure - noj, metamorph.ml, scicloj.ml.tribuo, libpython-clj for Python interop. Besides that, people have been proving that 'code is data' is exactly what makes it a better target for LLMs. Clojure is the most token-efficient PL - it's been proven. There are some recent interesting clj projects of relevance:
—
https://github.com/realgenekim/clj-surgeon
https://clojure.getpando.ai
https://github.com/yogthos/chiasmus
Clojure? Forget it, SBCL would be better for that task. Just look what could be done with Coalton.
Well, this is because "normal" programming languages are one step above the AST. So an LLM has to work with program text, which is much easier than regular human text, as it is constrained to a well-defined set of keywords and a grammar, but is still pretty variable. Lisp is just the AST, so it is one level lower. I guess that at some point LLMs will stop writing human-readable code, as this is an additional obstacle; they will work directly with binaries or virtual machine code (like in Java), because this will be easier and eat fewer tokens.
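The "Lisp is just the AST" point is easy to demonstrate: a complete s-expression reader fits in a few lines, because the surface syntax already *is* the tree. A toy Python sketch:

```python
# Toy s-expression reader: tokenize, then build nested lists.
# The parsed result is directly the AST - no separate grammar needed.
def read_sexp(src: str):
    tokens = src.replace("(", " ( ").replace(")", " ) ").split()

    def parse(pos):
        if tokens[pos] == "(":
            node, pos = [], pos + 1
            while tokens[pos] != ")":
                child, pos = parse(pos)
                node.append(child)
            return node, pos + 1  # skip the closing ")"
        return tokens[pos], pos + 1  # atom

    node, _ = parse(0)
    return node

# The source text parses straight into a tree a program (or LLM) can manipulate:
print(read_sexp("(+ 1 (* 2 3))"))  # ['+', '1', ['*', '2', '3']]
```

Contrast this with a conventional language, where getting from text to a manipulable tree requires a full lexer and parser.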
Can you describe your setup on how you use LLMs within Emacs?
Of course.
I've tried different AI packages and currently gptel and ECA remain the main ingredients. This is a quickly changing landscape, and things may change, but for now it feels very good.
I like gptel because it's enormously extendable and exploitable - it allows me to send LLM requests from just about anywhere - I could be typing a message (like this very one) and suddenly in need of ideas for how to phrase something better, or explain simply, or fact-check my assumptions, whatever. Quick & dirty interaction that gets discarded in the same buffer. For longer investigations and research I would use a dedicated gptel buffer. Those get automatically saved.
I don't use gptel as a coding assistant - even though you can do that, it's not really optimized for that kind of work. I use ECA. It works much better for me than every other alternative I tried, and I tried more than a few. What's crazy is that I sometimes type a prompt in ECA, then ask gptel (with a different model) to make it more "AI-friendly", changing the prompt in-place, and then send it.
All my MCPs are coded in Clojure (mostly babashka)¹ - because (like I said) giving an AI a Lisp REPL makes much more sense (maybe even more than using a statically typed language). I had to employ a few tricks so all the tools, skills and instructions can be shared between gptel, eca-emacs, ECA Desktop, Claude Code CLI, Claude Desktop App, and Copilot CLI. Even though I mostly use gptel and ECA, it's good to keep other options around, just in case. All the AI-related Emacs settings are in my config².
Is this helpful, or do you want some more concrete examples?
—
¹ https://github.com/agzam/death-contraptions
² https://github.com/agzam/.doom.d/tree/main/modules/custom/ai
Big same. I have been doing a lot of clojure development, and hooking up my app to a live REPL has given me an absolutely fantastic feedback loop for the LLM. I don't think a lot of people understand what they're missing.
> I don't think a lot of people understand what they're missing
Very true. There's an enormous tacit knowledge gap. Check this out:
I have to use Mac for work. My WM is Yabai, which is controlled via Hammerspoon (great tool on its own), which means I can use Fennel, which means I can have a Lisp REPL. MCP connected to that REPL can query and inspect every single window I have on my screen. It can move them around, it can resize them, it can extract some properties of them. It's figuring out stuff like: "pick a selected Slack thread from the app and send it into an Emacs buffer", or "make my app windows work like Emacs buffers" - pick from the list and swap it in place. Or "find the HN thread about retiring from Emacs among my browser tabs and summarize the content"...
Never in my life have I been more grateful to my younger self for grokking the philosophy of Lisp. Recent months have only reinforced my firm belief that this 70-year-old tech is truly everlasting. Thank you, John McCarthy, for the great gift to humanity, even though so weirdly underappreciated.
I am really loving working on a fun Elisp project with pi, a minimal and very extensible agent. I have the agent use emacsclient to control my session, showing me code, running magit ediff for me, testing, formatting, reloading -- it's all working great.
I'm still exploring all the ways the agent and I can collaborate using Emacs as a shared medium, but at the moment am super optimistic about it.
> LLM that I run inside Emacs can fully control the active Emacs instance [...] you can easily extract text from anything
This is what gives me the most pause.
Care to explain? Why is that? Do you think it's dangerous, or is it some other reason?
It's definitely dangerous.
Do you have credentials anywhere within reach of that session? Can you open your bank account in a browser ... within reach of that session? Are your contacts available within reach of that session? What about personal notes/emails/goals or other sensitive information? That people think these can't be added together in one very socially/monetarily destructive fell swoop is ... telling.
Ignoring obvious bad-actor concerns from just giving root to your whole life to an LLM running on someone else's server, LLMs themselves can act in ways that are extremely counterproductive to their organization/host/etc.
A quote/warning I learned in the late 90s is just as relevant today, "Computers make very fast, very accurate mistakes."
Emacs has full system access with arbitrary code execution, so full Emacs access -> full system access.
What? You run emacs as root?
Anything an LLM does on your computer should happen in its own account. No sudo config of course, or at most one that is strictly limited to what you want to allow it to do (risk here, as many programs have non-obvious paths to general command execution).
It should have zero access to your private home directory or your system configs. You can have access to its files of course. That's the beauty of separate accounts and permissions.
There are also RCE vulnerabilities, especially with community flavors of Emacs, which don't come with access control out of the box.
So? My terminal has the same full system access. If I didn't use Emacs, I'd be using Claude code in it. It's contained locally on my computer, I don't see any problem here. I use Emacs like my OS-layer. Why would I complain that my OS has access to something? It would be weird and annoying if it's the opposite.
You have to give Claude Code access to every shell command individually unless you run in yolo mode.
I don't think it's very reasonable to use Claude Code on a computer that has credentials without some kind of sandboxing, or without validating every command it runs, at which point I'd rather do things manually.
Yeah, that's incredibly unsafe. You made a footgun machine and you're firing it with no shoes on. Don't run that on any machine with credentials you care about.
At the very least, run it in Docker. It's not a security tool, but it's at least some kind of guardrail against data loss and exfiltration.
Ah come on, guys, let's talk pragmatically. "Malleable editor as an OS layer" has benefits beyond subjective reasoning. Emacs has had M-x shell-command and arbitrary elisp eval forever. A metacircular MCP isn't some new capability class. Even if I didn't use Emacs - my shell, my editor, my browser extensions, my npm install, my VSCode plugins, my curl | bash from yesterday - they all have the same access. Singling out the LLM in this context is like selection bias.
Of course, reasonable mitigations are a must - just like for any other tool. Narrowing MCP scope - tool routing rules, read-only git defaults, etc. "Docker or nothing" is a lazy answer - Docker-for-everything has real costs: friction, broken integrations, worse ergonomics.
Practical security is all about staying in the goldilocks zone. You shouldn't get relaxed about the basics - sandboxing, 2FA, password managers - they are worth doing. But you can get paranoid about so many things, and yet against a targeted, well-resourced attacker, your sandboxing posture is mostly irrelevant. The interesting attacks bypass the threat model entirely. Read about Ben Nassi's team's research¹ - a pretty cool example. There are multitudes of other ways, and your Docker container won't stop them. Defend against the boring 99%, and accept that the 1% is someone else's problem (or a much bigger problem than your dev environment).
—
¹ https://www.nassiben.com/video-based-crypta
TLDR LLM Summary: Researchers showed that a device's power LED subtly flickers in brightness and color while the CPU performs cryptographic work, and these flickers leak information about the secret key. By pointing an ordinary video camera (an iPhone or an internet-connected security camera) at the LED and exploiting the camera's rolling shutter, they boosted the effective sampling rate from 60 to 60,000 measurements per second, enough to do cryptanalysis. Using only this video footage, they recovered full ECDSA and SIKE keys from a smartcard reader and a Samsung Galaxy S8, with no malware on the target devices.
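The rolling-shutter speedup in that summary is simple arithmetic: each frame is read out row by row, so every row is a separate time sample. With an assumed row count of 1,000 (illustrative only - the real figure depends on the sensor), a 60 fps camera yields:

```python
fps = 60
rows = 1000  # assumed sensor row count, for illustration only
# Each row is exposed at a slightly different instant, so rows act as samples:
samples_per_second = fps * rows
print(samples_per_second)  # 60000 - matching the "60 to 60,000" figure above
```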
It's your computer and you can do whatever yolo nonsense you want, my dude, but put those goalposts back where they were.
"Don't run that shit on a credentialed box with data you care about" is addressing real threats, not some goofy nation state thing or abstract security research.
If you let the footgun machine constantly generate new code and run it on your computer, you're just asking for data loss and bad shit to happen.
Docker isn't a great solution but it at least doesn't let yolo code delete files or access env vars or read the contents of .ssh/
> my browser extensions, my npm install, my VSCode plugins, my curl | bash
Yeah, and you shouldn't yolo those, either lol. If they didn't come from a trusted source, you need to read through them. If you don't want to, don't use them. That's not paranoia, that's, like, normal.
> If you let the footgun machine constantly generate new code
Are you talking about autonomous LLM projects that automatically write code? Yeah, no shit, I wouldn't run anything like that directly on any machine without sandboxing. My typical LLM use inside my editor is never in self-driving mode, there's not even cruise-control - I tell it exactly when to write, where to write and how to do it. Automated scripts never get run by LLM and don't get to run at all without prior precise and meticulous inspection. I'm not moving goalposts - at worst we're in disagreement on the level of pragmatics vs. paranoia, that's all.
I don't even get why people are so crazy about LLMs generating code - on both sides. LLMs for me personally are such a great tool for investigating things, for finding things, for bridging the gaps - the stuff that happens 10K feet above code writing. By the time I'm done gathering the details, code generation becomes an almost insignificant touch of the whole endeavor.
I wonder what friction/maintenance he found with Tridactyl
For me the friction always comes when I try to use the internet without it
We're talking about https://addons.mozilla.org/en-US/firefox/addon/tridactyl-vim...?
One example: it disables the default Ctrl-F search function, but its own search function is subpar (no match counts/hlsearch, e.g.) and often clashes with websites' built-in search (on Github, e.g.).
It doesn't work on the default newtab either, and changing the default newtab somehow makes opening a new tab slower (that's FF's fault, I guess)…
You can type /phrase and then press ctrl-F for the full search bar. A more annoying problem is that some websites capture / presses, making it harder to initiate a page search. Then you have to shift-esc ctrl-f to search.
cool to see you in the wild. for me, it does work out of the box; however, some sites will break or have too complex a navigation, especially with iframes, and i'll have to swap to a mouse, which is a bummer. i understand that's an inherent limitation of the tech, since the web today is not built for that.
solid extension, big fan
I'm not the author, but I recently gave up on Firefox, sadly.
Since I needed to keep around a Chromium anyway, and I already am forced to use one for work, it became simpler to just solely use a Chromium.
In the process I dropped some extensions.
It's been great.
To be honest I find the use of a separate browser at work a good way of forcing separation - all "work stuff" is done in one browser, and all "personal stuff" is done in a different one.
This time around I'm using Chromium for personal stuff, and Firefox for work-stuff. I do more work-related browsing, so having the vertical tabs in firefox meant that was the better browser to use for official stuff.
(In my previous job I used safari for work, and firefox for personal.)
I used Firefox for 20 years, loved it, defended it. But they just kept removing features that I was used to, and I ran into some bugs with popular websites and decided to hang it up. Currently on Brave and fully convinced it's the new Firefox.
I am running Ubuntu as my desktop operating system. I would never do this without an LLM to do the work of keeping it functional for me. Today, Rise of Nations wouldn't launch. Never had that problem before. Seems the driver for 32-bit games and my Nvidia GPU weren't getting along after an update. Codex was called in and solved the problem for me in about 5 minutes. I just copied and pasted the Steam log and let it tell me what to do. Tadah.
I'm actually excited about the potential for a future where local agents help improve the operating system experience as I go by making changes based on my use case. All local, of course. I do not want to trust a cloud provider with my use cases/behavior on my computer so they can sell me more ads...
LLM discourse inspires me to do a cleaning of my browser tabs every hour.
Does anyone else not understand what people mean when they refer to the "friction" supposedly inherent to these power user tools? Almost none of the configs/scripts/etc I use for my heavily-customized and terminal-heavy setup get changed for years at a time.
If you are frequently having to use other computers, a heavily customized setup has much more friction either to setup the machine like you want, or remember how to do things without all the customization (if you can't customize or it isn't worth the time).
When I graduated college I used Dvorak and Emacs on Linux. Six months of having to use shared Windows lab computers extensively beat me down to surrender all of those points - my brain just couldn't handle switching, so I conformed my desktop to match. Then later I switched jobs to a group that was all Unix, but of many varieties most of which only had vi, not Emacs. And so I learned vi. Sometimes minimizing friction means going with the flow.
A heavily-customised setup is very comfortable.
It's so comfortable that it acts as an impediment to change, since some types of change are uncomfortable.
This can feel like friction to me.
When I remove customisation, I am more "open to experience", and often find preferable tooling.
Arguably NixOS is the most config-heavy platform, but it solves the pain point of having to reconfigure on different systems. Especially in the LLM era, where I can configure Emacs and my OS declaratively.
How do you nixify your Emacs configuration? I've looked into it but at the time the advice was to specify dependencies both in Nix and in .emacs.d, which seemed redundant to me. Is there something like callCabal2Nix for Emacs?
Edit: Or do you mean "declaratively" in the sense of using something like straight.el?
> heavily-customized and terminal-heavy setup
this exactly. most people can’t set it up that well.
The previous post alludes to Evil being the long term plan. That seems sensible: it ought to be easier to use an implementation of Vim in Emacs rather than port much of Emacs to independent applications.
Yet the author ended up doing the latter, and it's never really made clear why.
I used Emacs full-time for many, many years. Then I switched to Vim or other editors with Vim modes, also for many years. I have to be honest, I don't see a particularly clear winner between them. Modal editing is a bit unusual in many ways. There are some things it certainly makes easier, but I personally found that the overall process of editing and writing code in real time was more efficient for me in single-mode Emacs.
> I don’t see a particularly clear winner between them
Because deep down they are incomparable categorically. Separate the tools from the foundational ideas and you see the very different value. Vim-model of text navigation is fantastic, practical, brilliant idea. Once you grok it - you can take it anywhere. You can use it in your editor, browser, terminal, WM. Emacs is rooted in another, even more brilliant idea of practical notation for lambda calculus. These ideas have no overlap. But understanding the philosophy of each (ideally both) could open so many different possibilities.
Why not both? Evil is a reimplementation of Vim in Emacs and it is great.
Evil is not just great. It's the only "true" Vim layer outside of vim/nvim worth commending. Gary Bernhardt once said, "there's no such thing as vim mode," in the sense that every attempt to emulate Vim outside of vim/nvim is a pale imitation. None of them - not a single VSCode vim plugin, not Sublime's, not IdeaVim in IntelliJ, not the browser extensions - is without some glaring omissions. Evil plus the evil plugins in Emacs, though, are not just "close" - they are better than the source of inspiration. Gary probably just didn't know that.
My vim muscle memory has paid off more for me than my emacs muscle memory. Emacs was the better editor, though. Anything that doesn't have Vimscript is an automatic winner IMO.
I use ^a to go to the beginning of a line and ^e to go to the end nearly everywhere. Many Emacs keystrokes are so pervasive that they're not often thought of as Emacs keystrokes.
Aren't they actually readline keystrokes, and emacs is "readline-aware"?
I'm pretty sure the navigation shortcuts date back to when Emacs was literally just a set of TECO macros, and GNU Readline adopted them
Evil is a tried-and-true Vim implementation that doesn't use Vimscript!
My usage of emacs is so vim-like that I’ve tried switching a few times. Vim is definitely faster, and overlays and cursor placement is much simpler and more intuitive. But there were still feature gaps and configuration issues that prevented full adoption.
You may get good bang for your buck out of neovim. With only a very minimal set of plugins, it has replaced all other IDEs for me. (They're also making good progress Sherlocking their core plugins, so the future is bright for those of us who dislike plugins for core functionality.)
can you elaborate? Heavy vim user here, have considered using emacs in vim mode to quell a decades long nagging curiosity. Just need a compelling nudge.
If you haven't used it before, give it a shot. Worst case you waste a few years of your life.
Doom emacs and Spacemacs are both good starter kits to give you an idea of what you could do.
I don't know how much this applies to everyone else, but the ability to display images inline is really nice for notetaking. I cannot write properly, so org-mode (a notetaking tool that can export to a variety of formats) with embedded rendered latex equations makes it really easy to take notes and write things up in a plaintext format without needing to export every 30 seconds to view equations. The ability to embed code that can actually run is also very nice.
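For anyone curious, the behaviors described above are mostly a few variables away in a vanilla config. A minimal sketch using standard org-mode variables (verify the names against your Org version's docs):

```elisp
;; Sketch: inline images and LaTeX previews on by default in org-mode.
(setq org-startup-with-inline-images t   ; render linked images in the buffer
      org-startup-with-latex-preview t)  ; render LaTeX fragments when a file opens

;; Let embedded code blocks actually execute (C-c C-c inside a block):
(org-babel-do-load-languages
 'org-babel-load-languages
 '((python . t)
   (shell . t)))
```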
Check out Doom Emacs if you are looking for a good starting point. The defaults make sense coming from Vim.
Emacs is primarily a platform for developing Lisp applications. Lisp applications are immensely hackable, meaning an Emacs configuration can be tailored in detail to specific desires.
There is also an ecosystem of applications for Emacs that are really good. They don't require you to use Emacs as your editor (you can run, say, Magit as a standalone instance) but if you do, they integrate really well with each other.
+1
I've been retired from Emacs for several years now, but I'm still looking for a Magit replacement that is independent of my editor. VS Code's magit extension is really good, but I split my time between IntelliJ and VS Code.
Anyone know of something like this?
When I still used Git, I used to have a minimized `magit-init.el` that essentially did:
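A minimal init of that sort might look something like this - a hypothetical sketch, not the commenter's actual file:

```elisp
;; Hypothetical minimal magit-init.el that boots straight into Magit.
;; Package names are real; the file layout and details are guesses.
(require 'package)
(package-initialize)   ; make installed packages (magit, evil, general) loadable
(require 'magit)
(magit-status)         ; open the Magit status buffer for the current repo
```

(The `~/.local/bin/magit` wrapper mentioned below presumably just launched Emacs with that init, e.g. `exec emacs -nw -q -l ~/.config/magit-init.el` - again, a guess at the details.)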
And a small wrapper (`~/.local/bin/magit`). It worked well for me because I can reuse all my keybindings (evil + leader keys with `general`) and my workflow is fully in the terminal. (I have since moved on to Jujutsu, and `jjui` is filling this gap for me right now, but it's not quite a magit-for-jj.)

Lazygit is the closest thing I've seen; it's what I use on remote hosts when TRAMP-ing into Magit would be too painful.
lazygit is too slow for me.
If you are on Linux/Gnome, try out Stage:
https://flathub.org/en/apps/io.github.aganzha.Stage
I was a loyal magit user for a decade. Now I use jujutsu from the command line. It's actually really nice.
>i split my time between IntelliJ and vscode
The IntelliJ git client is my favorite by far, I am curious what do you not like about it?
It's not that i don't like it. It's that I've got 10-15 years (i think) of muscle memory with magit
Honestly, magit is just a masterclass in UI design. It makes most everything incredibly easy to do while still giving you the ability to tweak things if you need to.
https://github.com/altsem/gitu
The author is the developer of the RSS reader Elfeed, which a lot of Emacs users use several times a day. Though the article talks about a vibe-coded wxWidgets-based GUI application called Elfeed2 that he wrote as a replacement, Emacs aficionados would be loath to leave their Emacs environment and switch to it. Hopefully Emacs elfeed finds a new maintainer.
I tried Elfeed2 immediately after the announcement; well, it's nowhere near the experience of elfeed in Emacs. Elfeed2 doesn't load content for most of my feeds; elfeed does. I also integrated elfeed-tube, which shows previews of videos and their transcripts, making it a no-brainer to get a summary without watching the whole video.
I don't use elfeed, but installed elfeed-tube just for youtube-with-transcripts :)
> I tried Elfeed2 immediately after the announcement, well, it's nowhere near the experience of elfeed in Emacs.
Isn't that kinda expected with a new software release, that it doesn't have 100% feature parity?
My understanding of the context is the author is no longer using Emacs, and is very excited about the productivity from AI.
My experience with LLM technologies is that they make generating the code the really quick part. It may be reasonable to spend much more time specifying things up front (rather than emergently, as you would by hand). I mean, if you've got a well-crafted description of what you want, you'll get a working program MUCH quicker with an LLM today than by writing it out by hand.
Would it really be surprising/shocking if an LLM was able to rewrite (most) features from an existing software, to a new software?
It seems like the reality today is, we've gone from a maintained software in a niche ecosystem with happy users, to a more fragmented one where everyone has an LLM write their own half-baked one.
On the other hand, given there is prior art in Elfeed, why wouldn't it rapidly converge on feature parity?
Probably because it's closer to a reimplementation than anything else, and in Emacs you can use libraries with much less friction than in self-contained languages.
Yeah, I'm not gonna use anything vibe-coded, all those apps are total trash.
Good. People here are blind to the CADT model. They aren't even aware that with Elfeed, for instance, you can automatically set a hook on a feed that calls lingva.el functions to translate feeds written in Spanish or German into your native language on the spot.
Try doing that with Elfeed2.
Vi/Nvi2 users can do almost the same with Unix pipes and apertium/translate-shell/some lingva CLI tools, translating the whole document, a regex selection, or a range of lines, a la Emacs. So can sfeed users, who depending on the feed can pipe the plumber's output (or just hack the scripts) into any other translating tool:
git://codemadness.org/sfeed
Heck, a few years ago I could reuse Telega.el's (Telegram client) translating functions in non-Telega buffers to translate a text guide on the spot. So, did the blogger actually win anything?
On the topic of Emacs.
I have long struggled to learn Emacs and use it effectively. Just for the fun of it, if I were to use Claude as my teacher, how could I ask it to teach me to use Emacs? I don't like asking questions and then going back to try things out. I want it to be a driver that assists me as I use it. Has anyone tried such an approach to learning Emacs?
There's a nice built-in tutorial for actually editing text with it. Press control-h then t to launch it. But that's just for using the editor. For actually configuring it, I've found that Opus 4.6 (inside Droid) is exceptionally good at tweaking my init.el.
Yesterday I typed "Set the default YAML indentation to 2 spaces." It came up with
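Presumably something along these lines - a sketch assuming yaml-mode, whose `yaml-indent-offset` variable controls the indentation step:

```elisp
;; Sketch of the kind of snippet an LLM would produce for this request:
(setq yaml-indent-offset 2)  ; yaml-mode's indentation width
```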
Now I can hit tab to indent YAML by 2 spaces, and I learned a little in the process. I'm delighted with this setup.

> I've found that Opus 4.6 (inside Droid)
What does (inside Droid) mean ? Do you use any package to integrate to claude code in emacs?
Droid is my employer's alternative to Claude Code, which I personally prefer. But the general point is that LLMs are really good at Emacs Lisp these days.
I've started using Droid inside Emacs via the agent-shell package I learned about here a few days ago (https://news.ycombinator.com/item?id=45561672). It handles quite a few other agents, too.
Don't try to "learn Emacs." Grok the foundational layer: Lisp. Emacs is not an editor - it's first and foremost a Lisp interpreter with a built-in editor. You need to get two things: the REPL (evaluating Lisp expressions in place) and structural editing (moving, expanding, transposing expressions).
You can start with vanilla Emacs with zero config and Claude/Copilot/Codex/etc, running separately. Your first goal is to have the LLM running inside Emacs - ask the LLM how. It probably will recommend gptel - as one of the most popular and robust choices, go with it.
Once you get LLM tools to modify Emacs state from within, you can just go crazy. You can tell it to change colors and fonts, ask any stupid questions, whatever. It will do it without missing a beat - no restarts, no waiting, no copy-pasting - just flow.
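Getting that loop started with gptel takes only a few lines. A minimal sketch, assuming gptel's Anthropic backend - the model name below is an assumption, so check gptel's README for current ones:

```elisp
;; Minimal gptel setup (sketch). gptel-make-anthropic is gptel's
;; constructor for Claude backends; the model symbol is illustrative.
(use-package gptel
  :ensure t
  :config
  (setq gptel-backend (gptel-make-anthropic "Claude"
                        :stream t
                        :key (getenv "ANTHROPIC_API_KEY"))
        gptel-model 'claude-sonnet-4-5))
```

Then `M-x gptel` opens a chat buffer, and `gptel-send` sends the prompt at point; from there you can add tools that evaluate elisp in the running instance.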
My advice is to use a base, vanilla Emacs for a little while to learn where its boundaries go, before installing a bunch of modes. That makes it easier to troubleshoot problems later.
If the author is around, I'm curious why he chose wxWidgets instead of Qt; I'd be surprised if it is that much lighter weight than Qt. (I even wrote my own cross-platform toolkit with "more lightweight" as one of the reasons, and if you use all the features, it weighs in at about the same size as Qt, I think.) Also, the last time I used wxWidgets, many years ago, it had a clunky MFC style to it, limited features, and a rather Windowsy look and feel. Have those things changed?
My experience with wxWidgets based apps is that they tend to not handle DPI scaling well. Audacity is a good example, IIRC that's one of the reasons they're moving to Qt.
I was wondering how people feel about this trend. LLMs allow you to free yourself from foundations (frameworks, programmable programs) and just generate whatever support layer you want from old or new libs. This is all very understandable, yet I find it a loss: in the Lisp world, having a core model and semantics shared by all the upper layers means ease of reuse (for instance, people leverage Emacs calc classes in other places), while LLMs allow for easier fragmentation.
> LLMs allow for easier fragmentation
I also suspect it allows easier consolidation. Moving from a deprecated lib to a new (and better) one for example.
Implementations will likely homogenize a bit as well, but on the other hand boy am I glad not to see an increasing amount of bizarre naïve hand-rolled implementations for some things.
meanwhile I've just started learning it after being in the GUI for decades
Dude only made it 20 years with Emacs. Weak.
I've been using it since 1994.
Whoa, shit, I'm old.
Dang, your emacs config is older than me.
Lugaru (??) Emacs (Epsilon) on CP/M in the early 1980s.
https://lugaru.com/ They are still around. Most recent epsilon update was last month (but no real release since 2020?).
It was my first experience with emacs as well, but in MS-DOS, ca 1990. Did not know there was a CP/M version.
Would have liked to see author's opinion on Spacemacs, if possible.
Why? What makes Spacemacs so different or special that it requires some kind of distinct opinion that would be extremely valuable? Spacemacs is the same old Emacs with some out-of-the-box customizations on top - there's nothing fundamentally different about it.
Searched by tags and found author may try Evil [1], but unsure if they followed through.
You're right, Spacemacs is essentially a batteries-included version of Emacs.
[1] https://nullprogram.com/blog/2017/04/01/
Spacemacs is not a "batteries-included version of Emacs." Saying that may get people confused. It's not a "different version" of Emacs - it's not Emacs at all. It's an Emacs config you can configure - a meta-config, more like a collection of recipes you can run on Emacs. That is an important distinction.
Hence my question: what could Wellons (a seasoned Emacs veteran) ever say about Spacemacs (or Doom, which in this context makes no difference)? What kind of views would one be interested to hear - about using the Space key as the leader key, or the local-leader key, or vim-navigation/Evil in general, or the modules/layers architecture of an Emacs config? He said in the post you shared that he believed he'd eventually end up using Evil - he doesn't need Spacemacs for that.
Spacemacs is great for beginners, for people who don't want to deal with learning Emacs native bindings - they are legit confusing. For someone like Chris it makes little sense; he'd probably just add modal-editing packages to his existing config. Even so, Spacemacs and Doom are still valuable - one can find many interesting gems there.
Also, these projects may give you good discipline for structuring your keys mnemonically - everything file-related would be under "SPC f", search stuff under "SPC s", etc.
But ... why?
> Like it or not, this is the future of software engineering.
For you, perhaps.
What is the future of software engineering in the age of LLMs?
"The" future of software engineering is a silly thing to predict. I might predict one substantial change is that we get our house a little more in order about universities and the private sector distinguishing between computer science, software engineering, and software development. Obviously they are not cleanly separated[1], but LLMs will affect each subfield very differently.
- The impact on computer science seems almost entirely negative so far: mostly the burden of academic wordslop, though an additional negative impact is AI sucking all the air out of the room. What's worse is how little interesting computer science has come out of the biggest technological development with computers in many years: in fact there has been a terrible and very sudden regression of scientific methodology and integrity, people rationalizing unscientific thinking and unprofessional behavior by pointing to economic success. I think it'll take decades to undo the damage, it's ideological.
- The impact on software development actually does seem a bit positive. I am not really a software developer at all. It always felt too frustrating :) However the easing of frustration might be offset by widespread devastation of new FOSS projects. I don't want to put my code online, even though I'm not monetizing it. I'm certainly not alone. That makes me really sad. But I watched ChatGPT copy-paste about 200 lines of F# straight from my own GitHub, without attribution. I'm not letting OpenAI steal my code again.
- Software engineering... it does not seem like any of these systems are actually capable of real software engineering, but we are also being adversely affected by an epidemic of unscientific thinking. Speaking of: I would like to see Mythos autonomously attempt a task as complex and serious as a C compiler. Opus 4.6 totally failed (even if popular coverage didn't portray it as such):
"Future of software engineering" folks should keep stuff like this in mind. What model is going to undo Mythos's mess? What if that mess is your company's product? Hope you know some very patient humans!

[1] They should have different educational tracks. There is no reason why a big fancy school like MIT can't have computer scientists do something like SICP and software engineers do the applied Python class. Forcing every computer professional into "computer science" is just silly; half the students gripe about how useless the theory is, the other half gripe about how grubby the practice is. What really sucks here is that I think Big Tech would support the idea; we're just stuck in a weird social rut.
We should start a support group.
I feel like LLMs[1] are going to cause a kind of "divorce" between those who love making software and those who love selling software. It was difficult for these two groups to communicate and coordinate before, and now it is _excruciating_. What little mutual tolerance and slack there was, is practically gone.
Open source was always[2] a fragile arrangement based on the kind of trust that involves looking at things through one's fingers (turning a blind eye may be more idiomatic in English), and we are at the point where you just have to either shut your eyes, or otherwise stop pretending that the situation can be salvaged at all.
Just a thought I had: some people think that LLM-shaming is declasse, and maybe it is, but I think that perhaps we _should_ LLM-shame, until the AI-companies train their LLMs to actually give attribution, if nothing else (I mean if it can memorize entire blocks of code, why can't it memorize where it saw that code? Would this not, potentially, _improve_ the attribution-situation, to levels better than even the pre-LLM era? Oh right, because plagiarism might actually be the product).
[1]: Not blaming the tech itself, but rather the people who choose to use it recklessly, and an industry that is based almost entirely on getting mega-corporations to buy startups that, against the odds, have acquired a decent number of happy-ish customers, that can now be relentlessly locked-in and up-sold to.
[2]: I mentioned a specific example of good old fashioned, pre-LLM, human plagiarism here: https://news.ycombinator.com/item?id=46540608
Is this satire? I can never tell anymore.
Toss them, because the level of damage they have done is astounding. Tons of companies are still fixing the losses from vibe coding.
What we need is better code analyzers, lexers, and the like. LLMs are practically the opposite, because by design they can never give a concise answer. Worse, they rot over time.
https://smsk.dev/2026/04/26/ai-cannot-self-improve-and-math-...
> Tons of companies are still fixing the losses from vibe coding.
Well, you have to separate "future of" from "ensuing damage". This is similar to the fishing industry. Fishermen in the past used spears, rods, small nets, nowadays annual national catch statistics are reported in kilotonnes. They are destroying the ocean floor, causing massive extinction of species, causing irreversible damage. Yet, you can't argue looking 100-150 years back that industrial fishing was not "the future of the fishing industry". That is also why programmers won't ever disappear because of AI progress. Just like we still need fishermen, we'd need programmers. The sad truth about this is that soon we truly may have no need for fishermen, because there's no fish left in the ocean.
No, this is like fishing with dynamite.
Hmm... it's hard to imagine that fishing with dynamite ever caused species extinction; the trawling industry definitely did. I don't think it's a fitting analogy, but I get what you're trying to say. I'm not arguing about the damage. The damage this human invention will cause is guaranteed, just like with plastics. The answer to that is not "ban plastics completely" - kinda late for that, innit? The answer is "put resources into plastics research, make safe plastic possible." Maybe if we make safe, better AI, it will even help with the plastic? If there's anything I've learned about humans, it's that first, we cause a lot of damage.
That link doesn't support your statement. Its analysis is bad and irrelevant.
Why is the analysis bad? Burden is on you to explain that.
>> the level of damage they have done it's astounding. Tons of companies are still fixing the losses from vibe coding.
This sounds like unsubstantiated hyperbole - can we keep HN grounded in reality, please?
My alternative hypothesis - you don't like agentic coding or maybe LLMs in general. Not helpful for the group.
> With my newly-acquired superpowers I could knock out the last two pieces in a few days’ work
From the linked post:[0]
> I left an employer that is years behind adopting AI to one actively supporting and encouraging it. As of March, in my professional capacity I no longer write code myself. My current situation was unimaginable to me only a year ago. Like it or not, this is the future of software engineering. Turns out I like it, and having tasted the future I don’t want to go back to the old ways.
It's deeply distressing to watch people fall into AI psychosis. Being smart, accomplished, or experienced is no defence.
After the bubble pops and the industry realises the damage these tools can do to people, folks like the author will have to confront that they were taken in by a lie. Many won't be able to confront that.
[0]: https://nullprogram.com/blog/2026/03/29/
Also from that post:
> Models you can run yourself are toys.
Now I may be old, but whenever we put a lot of faith in unaccountable megacorps it sure seems to have backfired a lot (remember when Amazon removed 1984 from people's libraries?). As long as a model running locally on a regular laptop bought from the supermarket isn't good enough I'm gonna remain sceptical about current AI.
There's also ethical and environmental considerations, but let's see if we can walk before we try to run.
It's not AI psychosis, you're interpreting what he said to the extreme.
Anyone who has actual corporate team-lead or management experience understands AI as effectively a junior dev who doesn't have great persistent memory. These devs using AI are reviewing, guiding, and validating the work handed to them by AI just as they would from a junior dev.
The inverse of your statement is more apt; it's distressing to see people so angsty about AI usage. There are going to be skilless vibecoders and then there are going to be experienced devs (like OP) who figured out their AI workflow to multiply their productivity 2-5x.
What the future holds for AI model pricing-- that is a valid concern. But I don't think that's what you intended.
> There are going to be skilless vibecoders and then there are going to be experienced devs (like OP) who figured out their AI workflow to multiply their productivity 2-5x.
Are you sure OP belongs in the second group? He explicitly said he doesn't read all the code generated by his AI:
> I have not read most of the code, and instead focused on results, so you might say this was “vibe-coded.”
> I no longer write code myself
this is just like being promoted from developer to manager. some people like it, some don't. with AI there is another dimension: some people like managing machines instead of people, some don't.
it's not for me. i don't want to stop writing code. i don't mind managing people, but i don't want to manage machines (at least not with as imprecise an interface/outcome as AI provides). consequently, AI may be fine for this person, but it is not for me.
> It's deeply distressing to watch people fall into AI psychosis.
It's unclear what you're saying here... Yes, AI-induced psychosis is a real problem and the frontier labs' mitigations are ineffective, to put it mildly. But using AI as a coding tool doesn't have anything to do with psychosis.
AI psychosis is having a toxic relationship with a chatbot as though it were a real person. It has nothing to do with engineering. You're muddying your own point by conflating all LLM use with some kind of delusion. There is a lot of nuance in this space, and if you're an engineer, you're not doing yourself any favors by ignoring it. There is no bubble pop, short of a straight-up apocalypse, that is going to put this genie back in the bottle. Models are trained. Tools are built. There isn't a single industry that cares about artistry more than efficiency. It's here to stay, it's getting better, and if you don't know how to use it, you're going to have trouble finding work.
> Being smart, accomplished, or experienced is no defence.
Perhaps you're confusing "not using AI" with "not being dependent on AI", those are very different things.
The edge isn't from avoidance, it's from using AI as leverage on top of real skill. A strong developer + AI beats a strong developer alone, and massively beats a weak developer + AI. The edge doesn't come from avoiding a tool - it comes from being the kind of person who doesn't need it but uses it anyway. That's leverage. Refusing to use it is just leaving leverage on the table to make a philosophical point.
> After the bubble pops
People like Chris (who is an enormously capable engineer) would just move on to different tools, techniques, and paradigms. That is the essence of being a software developer - many of us choose this path specifically because it forces you to learn something new every single day. That is (I suspect) also another reason why Wellons decided to migrate away from Emacs - he has learned it so deeply that perhaps it no longer gives him the satisfaction of learning. Which, to be honest, is hard to believe - Emacs is a boundless playground; there's always something new to learn there.
I just wonder how jobs like that won't replace their employees. Seems too good to last. In a few years OpenAI will just sell $1,000 per month Human-free Agent Coding for businesses.
Saying they have psychosis is a rude exaggeration.
A line? Enjoy the papers telling you otherwise. It's not just that cognition is down - LLMs degrade themselves with every iteration.
Not writing code isn't the same as vibe-coding. You can stay on top of AI, make it rewrite the things that look bad, make it refactor until you're happy with how things look, etc....
Maybe a lot of people who are doing that aren't admitting that they've stopped writing code, but when all you're ever doing is manually fixing a few lines, or moving blocks of code to more sensible places, fixing jumbled parameters in a call and such, you're not really writing code anymore. You're now a chef in a kitchen yelling at assistants and just touching things when dealing with communicating a correction to one of those dimwits is more frustrating than just doing it yourself.
You still have to be a cook to be a chef, though. But the reason I say that AI is dumb is because I tell it to do things, it does them in a dumb way, and I complain at it and tell it to write it in a sensible way. It screws that up, and I tell it to do it again, and not to screw it up. I'm still not coding. If it goes into a loop of nonsense, I touch things with the intention of doing just enough to knock it out of that loop (or rather keep the new context from falling into it.)
"After the bubble pops" we might see that a lot of new chefs can't actually afford assistants. But just as likely, the overbuilt (government-subsidized directly and through policy) capacity might end up getting written off, and at the cost of electricity and maintenance costs could stay reasonably good. Or algos improve. Or training methods improve.
> AI is dumb is because I tell it to do things, it does them in a dumb way, and I complain at it and tell it to write it in a sensible way. It screws that up, and I tell it to do it again, and not to screw it up. I'm still not coding. If it goes into a loop of nonsense, I touch things with the intention of doing just enough to knock it out of that loop (or rather keep the new context from falling into it).
it is inconceivable to me how anyone could ever enjoy working like that. but whatever floats their boat.
No. AI is a must for software development. It's non-negotiable. The productivity gains are too great. The era of 100% human-written code is over. People will still do it as an idle curiosity, for personal projects only they intend to use. But even those open source projects with significant user bases that forbid the use of AI (like, afaik, NetBSD) will be eclipsed by those that support it in terms of features, capability, and security. And the commercial world? Forget it. You cannot keep pace with your employer's expectations unless you learn to use these tools well. This is not up for debate. It's reality.
Plenty of accomplished devs are getting good results and accomplishing tasks with unheard-of speed using AI, so if you're still not, that's a PEBKAC. You are not using the tools correctly. Figure it out before you complain.
> "No. AI is a must for software development. It's non-negotiable."
Absolutist rubbish.
> "But even those open source projects with significant user bases that forbid the use of AI [...] will be eclipsed by those that support it in terms of features, capability, and security."
As is this. Whether a language model is relevant to a project, open source or otherwise, is of course heavily dependent on its nature (ethics, use case, deployment, working environment/culture, et cetera).
> You cannot keep pace with your employer's expectations unless you learn to use these tools well. This is not up for debate. It's reality.
So the issue isn’t LLM productivity but unrealistic expectations that skyrocketed in the last years? Makes sense.
> Plenty of accomplished devs are getting good results and accomplishing tasks with unheard-of speed using AI
I don’t see any major business impact.
LLMs may be a must for programming, but not for engineering. Writing code is the easy part once you figure out what actually needs to be built in the first place.
Indeed. But figuring out what actually needs to be built is the systems analyst's job, not the programmer's. It takes people skills and holistic thought, something programmers are generally poor at (and AI certainly is no good at, at least not today).
> No. AI is a must for software development. It's non-negotiable.
???
https://smsk.dev/2026/04/26/ai-cannot-self-improve-and-math-...
>You are not using the tools correctly.
Stop being deluded, man.
When this crap collapses into itself you will be in tears back asking for the knowledge you failed to get without the fancy Clippys.
Now, stop using that fancy Megahal chatbot and learn to do things by hand.
I know how to do things by hand, man. But the writing is on the wall: that skill is going the way of writing programs on punchcards. And there's little we can do about it because the economics in favor of LLMs are like laws of physics.
Yes, model collapse is gonna suck. But LLMs are not just left to self-train, they are guided by human researchers who are going to find ways to groom and direct the models to avoid collapse. They can make billions by shipping better models, so why wouldn't they invest a lot of effort in that?
> But the writing is on the wall: that skill is going the way of writing programs on punchcards.
Strange, I don’t see any punchcards inside of my computers, but for some reason I still see code behind anything that LLM does.
One of the amusing things about AI bros is how naively over-enthusiastic they are about the technology and its inevitability.
You still don't get where I'm coming from. The AI takeover of programming is inevitable, and I hate it. But my feelings don't make the brutal economics go away. With proper use of these tools, a skilled developer can now accomplish in days what used to take weeks or months. Period. I know this because of the absurd number of skilled developers here, on X, Mastodon, and elsewhere - including OP's author - saying "with AI I'm accomplishing in days what used to take me weeks or months". And if you have the opportunity to make use of the tools, choosing not to is either stupid or cutting off your nose to spite your face.
> here, on X, Mastodon, and elsewhere
You should’ve started with this. Take a really deep breath, take your phone, find the closest park, walk there slowly (don’t prompt an LLM on the way), find a green patch on the ground (it’s called grass), and touch it.
Contrary to you, I've been playing with AI stuff forever - from the AI Howto on TLDP to Markov chain based chatbots, genetic algos, and neural networks - and I know the limits of LLMs and how they rot from feeding back on their own data. They can't extrapolate. Period. In every cycle they get dumber by design unless there's new human-curated content. Go try to explain that to corporations having their copyrighted code stolen away, be it GPL or proprietary.
I find all those arguments unconvincing. The right 10,000 lines of code can be worth a billion dollars. The idea that it would be somehow uneconomical for me to take the time to get it right feels like utter nonsense. I don't have to have much of an edge over an LLM to come out on top once you start to distribute the resulting product. Three months of my time costs $25,000 or so (hey, I'm in Europe, adjust as you see fit), if I can make something just a little bit better than AI Albert who can whip something together for a tenth of the price, my time will pay for itself once you have modest amounts of revenue from it.
And I'm fully convinced that what I do will not just be a little bit better than what AI Al makes. It will trounce it in all quality criteria. But of course, coincidentally with the rise of AI assistance, software quality has completely disappeared from the conversation. I wonder why.
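The break-even arithmetic above can be sketched out. This is a toy model using the figures from the comment; the quality premium per sale is a made-up illustrative number, not anything claimed in the thread:

```python
# Toy break-even sketch for "my time vs. AI Albert", using the
# numbers from the comment above. The premium_per_sale figure is
# an assumption purely for illustration.
my_cost = 25_000            # three months of hand-crafted work (from the comment)
ai_cost = my_cost / 10      # "AI Albert" builds it for a tenth of the price
extra_cost = my_cost - ai_cost

premium_per_sale = 5        # assumed extra revenue per unit for higher quality

sales_to_break_even = extra_cost / premium_per_sale
print(sales_to_break_even)  # prints 4500.0
```

So under these assumptions, a $5-per-unit quality edge pays back the extra $22,500 after 4,500 sales - "modest amounts of revenue" for a distributed product.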
Thank you for making a really important point.
The lifespan of software can easily be ten or more years.
If it takes a few more months to write by hand to ensure correctness and proper abstraction, what does that save over the lifetime of the codebase?
It's a rare piece of software that lasts that long. For the rest of us there's LLMs.
In JS land, for sure; for systems programming and software made for small and medium companies, that kind of lifespan is a given.
You know, economics are made by people and can be changed by them. They're historically contingent, not laws of physics.
This is not terminals vs punchcards. This is like Windows ME over Windows 98. Or, maybe, the 286 over an 8086 when the 386 is the proper path.
Emacs is an outstanding, extremely powerful piece of software. It just lacks a decent editor.
I know this is the joke, and I know Evil is the jokey reply, but ... both sides of the joke carry a grain of truth, as good jokes do.
I know a lot of people become comfortable with the default editing tools in Emacs, and many of them are good, but on the whole, vanilla Emacs does not ship with a great editor.
The Vim family is made up of amazingly well designed editors.
Evil is a Vim implementation in Emacs. It is the best of both worlds, and not just on paper. It actually works.
evil-mode
There’s a lot in a name.
Evil stands for Extensible VI Layer; it's not "literal Google".
Yuck!