Because this library is so widely used, in practice it is subject to Hyrum's law ABI breaks far beyond what is characterised by the SONAME. At the extreme, Hyrum's law gets you "spacebar heating". This is not a demand that somehow nothing must ever change, but an acknowledgement of our reality.
Basically, even though Daniel might say "I didn't change the ABI", if your code worked before and now it doesn't, as far as you're concerned that's an ABI break. This particularly shows up for changed defaults and for removing stuff that's "unused" except that you relied on it, so now your code doesn't work. Daniel brings up NPN because that seems easy for the public Internet, but there have been other examples where a default changed and, well... too bad, you were relying on something and now it's changed, but you should have just known to set what you wanted and then you'd have been fine.
I'm a bit confused by the usage of ABI here. I thought compatibility between apps and libs is at the API level, while the ABI sits between the machine (CPU instructions) and the low-level (curl?) lib?
Nope: API compatibility is about when you write the code, and ABI compatibility is about what happens to already compiled code, but still between app and library. Say you compiled your app against the libcurl.so.4 file from curl 7.88 as packaged by Debian. What happens when, far in the future, you move your executable, without recompiling, to a system which has curl 12345.67, but the file is still libcurl.so.4? It should work fine; that is, the library should export the same symbols (maybe with some more added), with more or less the same functionality. Possibly implemented differently, maybe accepting more flags, but the stuff that is there will still be there.
To parent's downvoters: would you kindly cut him some slack? It's OK to ask if you don't know. https://xkcd.com/1053/
Just as a small example of how the two are not the same: in a C library, if you move the position of a field in a struct, that can break ABI compatibility (meaning you need to recompile your software) but not API compatibility (meaning you don't need to update your code).
You can also, with certain types of APIs, increase the size of structs and still be ABI compatible. This is usually done by encoding the size of the type as a member of the struct, typically assigned via `x.size = sizeof(x)`, with the struct passed around by pointer. It takes real care to maintain, though. IIRC this appears somewhat frequently in Win32.
> The Debian project even decided to override our decision “no, that is not an ABI breakage” and added a local patch in their build that lowered the SONAME number back to 3 again in their builds. A patch they would stick to for many years to come.
I get mad even reading about this. If I were in his shoes I'd demand they fork and rename their implementation. Too many times we hear about overzealous Debian patches being the source of issues.
I wish the Python core developers had even the level of commitment to stability that developers of JavaScript frameworks do. Instead they intentionally break API compatibility every single release, I suppose because they assume that only worthless ideas are ever expressed in the form of Python programs.
I’ve been writing Python for 25 years and can’t remember projects I work on ever having compatibility broken from one version to the next (with the exception of 2 to 3, of course). Occasionally it happens to a dependency, but it always seems to be something like Cython and not actual Python code.
But then, I normally try to stay on the leading edge. I think it’s more difficult if you leave it 2+ years between updates and ignore deprecation warnings. But with a year between minor releases, that leaves almost a two year window for moving off deprecated things.
I think that’s reasonable. I don’t experience the pain you describe, and I don’t get the impression that the Python project treats Python programs as “worthless”. The people working on Python are Python users too, why would they make their own lives difficult?
When you bring up "ignoring deprecation warnings", you're implicitly confirming what I said, despite explicitly denying it. Perhaps this is due to a misunderstanding, so I will clarify what I was saying.
Nobody has to worry about ignoring deprecation warnings in libcurl, or for that matter in C, in English, in Unicode, or in linear algebra. There's no point at which your linear algebra theorems stop working because the AMS has deprecated some postulates. Euclid's theorems still work just as well today as they did 2000 years ago. Better, in fact, because we now know of new things they apply to that Euclid couldn't have imagined. You can still read Mark Twain, Shakespeare, or even Cicero without having to "maintain" them first, though admittedly you have to be careful about interpreting them with the right version of language.
That's what it means for intellectual work to have lasting value: each generation can build on the work of previous generations rather than having to redo it.
Last night I watched a Primitive Technology video in which he explains why he wants to roof his new construction with fired clay tiles rather than palm-leaf thatch: in the rainy season, the leaves rot, and then the rain destroys his walls, so the construction only lasts a couple of years without maintenance.
Today I opened up a program I had written in Python not 2000 years ago, not 200 years ago, not even 20 years ago, but only 11 years ago, and not touched since then. I had to fix a bunch of errors the Python maintainers intentionally introduced into my program in the 2-to-3 transition. Moreover, the "fixed" version is less correct than the version I used 11 years ago, because previously it correctly handled filename command-line arguments even if they weren't UTF-8. Now it won't, and there's evidently no way to fix it.
I wish I had written it in Golang or JS. Although it wasn't the case when I started writing Python last millennium, a Python program today is a palm-leaf-thatched rainforest mud hut—intentionally so. Instead, like Euclid, I want to build my programs of something more lasting than mere masonry.
I'm not claiming that you should do the same thing. A palm-leaf-thatched roof is easier to build and useful for many purposes. But it is no substitute for something more lasting.
Today's Python is fine for keeping a service running as long as you have a staff of Python programmers. As a medium of expression of ideas, however, it's like writing in the sand at low tide.
Even for Mathematics though, which I think is where your point is strongest, this is not actually true.
If you're a Mathematician in the nineteenth century (for example Peano) you know what a set is in some sense, and if pressed you'll admit that you don't really have a formal way to explain your intuition. Nevertheless you feel content to write about sets as if they're formally defined, even for the infinite sets.
Turns out you've been relying on an unstated assumption, the Axiom of Choice. When Ernst Zermelo and Abraham Fraenkel wrote down axioms for a coherent set theory, they discovered that, oops, the system works fine either way with regard to this axiom; many years later it was proved to be entirely independent, and yet mathematicians had been gaily assuming it's true, without saying so.
So in a sense Peano and, say, Turing are working with different, slightly incompatible versions of mathematics. Euclidean geometry works fine... but today you'll be told that our universe's geometry isn't actually Euclidean, so if something big enough just doesn't work, that's to be expected: Euclid's model is neat, but it's not actually a scale model of our universe; our universe is much stranger.
> Moreover, the "fixed" version is less correct than the version I used 11 years ago, because previously it correctly handled filename command-line arguments even if they weren't UTF-8. Now it won't, and there's evidently no way to fix it.
Isn't fixing this the whole point of Python's "surrogateescape" handling? Certainly, if I put the filename straight from sys.argv into open(), Python will pass it through just fine:
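The round-trip that makes this work can be checked directly; this sketch uses the same surrogateescape rules Python 3 applies when decoding `sys.argv`:

```python
# Python 3 decodes sys.argv (and filenames generally) with
# errors="surrogateescape": each invalid byte becomes a lone
# surrogate (b"\xff" -> "\udcff") and encodes back losslessly.
raw = b"\xff.txt"
name = raw.decode("utf-8", "surrogateescape")
assert name == "\udcff.txt"
assert name.encode("utf-8", "surrogateescape") == raw
# So open(sys.argv[1]) hands the OS the exact bytes the shell passed.
```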
I didn't realize that surrogateescape (UTF-8B) had become the default! If so, that's a great improvement. Thank you for the correction. (I can confirm that Python does indeed try to open "\377.txt" on my machine with this command line.)
It looks like you are mad about the python2->python3 transition? Yeah, that was pretty messed up, and the new I/O encoding is pretty broken. Why would you design a system which cannot print every filename to the screen? I suspect the reason is that its designers never actually worked on systems with multiple encodings, and so were not aware that people actually _do_ redirect output to files and/or pipe it to iconv.
But other than that botched transition, Python is very stable. My Python photo downloader from 2004 did not need any changes throughout Python 2's lifetime and still works today (using the 2.7 interpreter). The oldest Python 3 script I've found is from 2019, and it still works just fine without any changes.
If you take the hard line they have and consider any compatibility break to be a failure, there are plenty of examples.
Most notable is PEP 594 ("Removing dead batteries from the standard library"), which removed 19 obsolete modules. However, they were deprecated in Python 3.11 (2022) and only removed in Python 3.13 (2024).
So everybody has had two years to update their obsolete code if they want to immediately use new versions of the Python interpreter, and if that isn’t long enough, Python 3.12 is officially supported until 2028. So if you use a module like sunau, which handles an audio format used by SPARC and NeXT workstations in the 80s, then you have six years to figure out what WAVs or MP3s are.
> Nobody has to worry about ignoring deprecation warnings in libcurl, or for that matter in C, in English, in Unicode, or in linear algebra. There's no point at which your linear algebra theorems stop working because the AMS has deprecated some postulates. Euclid's theorems still work just as well today as they did 2000 years ago. Better, in fact, because we now know of new things they apply to that Euclid couldn't have imagined. You can still read Mark Twain, Shakespeare, or even Cicero without having to "maintain" them first, though admittedly you have to be careful about interpreting them with the right version of language.
I mean, that last part really unravels your point. Linguistic meanings definitely drift significantly over time in ways that are vitally important, and there are no deprecation warnings about them.
Take the second amendment to the USA constitution, for example. It seems very obviously scoped to “well-regulated militias”, but there are no end to the number of gun ownership proponents who will insist that this isn’t what was meant when it was written, and that the commas don’t introduce a dependent clause like they do today.
Take the Ten Commandments in the Bible. It seems very obvious that they prohibit killing people, but there are no end to the number of death penalty proponents who are Christian who will insist that what it really prohibits is murder, of which state killings are out of scope, and that “thou shalt not kill” isn’t really what was meant when it was written.
These are very clearly meaningful semantic changes. Compatibility was definitely broken.
If “you have to be careful about interpreting them with the right version of the language”, then how is that any different to saying “well just use the right version of the Python interpreter”?
> Today I opened up a program I had written in Python not 2000 years ago, not 200 years ago, not even 20 years ago, but only 11 years ago, and not touched since then. I had to fix a bunch of errors the Python maintainers intentionally introduced into my program in the 2-to-3 transition.
In your own words: You have to be careful about interpreting it with the right version of the language. Just use a Python 2 interpreter if that is your attitude.
I don’t believe software is something that you can write once and assume it will work in perpetuity with zero maintenance. Go doesn’t work that way, JavaScript doesn’t work that way, and Curl – the subject of this article – doesn’t work that way. They might’ve released v7.16.0 eighteen years ago, but they still needed to release new versions over and over and over again since then.
There is no software in the world that does not require maintenance – even TeX received an update a few years ago. Wanting to avoid maintenance altogether is not achievable, and in fact is harmful. This is like sysadmins who are proud of long uptimes. It just proves they haven’t installed any security patches. Regularly maintaining software is a requirement for it to be healthy. Write-once-maintain-never is unhealthy and should not be a goal.
Compile is rather generous. The massive teetering dependency tree certainly is prone to breaking, but I am not certain I can draw much of a parallel between the API stability of JS and the ABI stability of libcurl. If anything, I can still properly render some pretty dang old websites just fine and, for the most part, they will work. That's mainly thanks to light use of JavaScript with probably no external dependent libraries, though your mileage will vary if you go back to the browser-specific APIs from when all that browser-war nonsense was going on. I love to bash on JS as much as the next guy, but a large portion of the blame actually lies with the modern use of it, not with any inherent quality of the language breaking compatibility over time.
We have an ancient (in javascript years) app that webpacks stupid template systems and polyfills and all sorts of cruft that should have never been in the first place and we haven't had to touch any of that in years.
What dependency or feature forces you to chase versions?
For those of us who are in the no-ABI-breakage business (I used to be, when I worked on JavaScriptCore, which is a framework on Apple platforms that folks dynlink to), any behavior change that causes an app with a nontrivial number of users to break is considered an ABI break.
Some function in your library used to return 42 and now returns 43, and an app with 10,000 users asserted that you returned 42? That's an ABI break.
There are more obvious examples: renaming functions that someone linked to? ABI break. Change the size of a struct in a public header? ABI break, usually (there are mitigations for that one).
The list goes on and Hyrum's law very much applies.
Another way to define ABI compatibility is that folks who promise it are taking the same exact stance with respect to folks who link against them as the stance that Linus takes regarding not breaking userspace: https://lkml.org/lkml/2012/12/23/75
If I promise you ABI compat, and then I make a change and your app breaks, then I'm promising that it's my fault and not yours.
> Of course any API breakage is also an ABI breakage.
I don't think this is true. You can change things on a superficial level in a source language that still compiles down to the same representation in the end.
One big thing to watch out for is structs. You can’t add extra members later.
If you have a struct which might grow, don't actually make it part of the ABI: don't give users any way to find its size, and write functions to create, destroy and query it.
The solution in old Win32 APIs was to have a length field as the first member of the struct. The client sets this to sizeof(the_struct). As long as structs are only ever extended the library knows exactly which version it is dealing with from the length.
This got a bit messy because Windows also included compatibility hacks for clients that didn't set the length correctly.
> If you have a struct which might grow, don't actually make it part of the ABI: don't give users any way to find its size, and write functions to create, destroy and query it.
Thanks! This is very insightful. What is a solution to this? If I cannot expose structs that might grow what do I expose then?
Or is the solution something like I can expose the structs that I need to expose but if I need to ever extend them in future, then I create a new struct for it?
> What is a solution to this? If I cannot expose structs that might grow what do I expose then?
Option 1: If allocating from the heap or somewhere otherwise fixed in place, then return a pointer-to-void (void *) and cast back to pointer-to-your-struct when the user gives it back to you.
Option 2: If allocating from a pool, just return the index.
Unless I've been wrong all these years: in C you can write a header file which says "this is a pointer to T" without ever saying what's in T. C will not allow people to access the elements of T, since it doesn't know what they are, but they can hold the pointer, since the compiler doesn't need to know how T is laid out to know it's a pointer, and all struct pointers are laid out the same way.
Then for your internal stuff you define what's inside T and you can use T normally.
Also, even if you're returning an index, learn from Unix's mistake and don't say it's an integer. Give it some other type, even if that type is just an alias for a primitive integer type, because at least you are signalling that these are not integers and you might make a few programmers not muddle these with other integers they've got. A file descriptor is not, in fact, a process ID, a port number, or a retry count, and 5 is only any of those things if you specify which of them it is.
No, I don't mean void pointers, I'm talking about just pointers to some unknown type T.
What's inside a T? How can we make one? We don't know, but that's fine since we have been provided with APIs which give us a pointer-to-T and which take a pointer-to-T so it delivers the opacity required.
The next biggest humanity-wide problem in this area, I believe, will be 32/64-bit time. We store timestamps as seconds since 1970-01-01 00:00 UTC, and it turns out someone thought 32 bits would be enough for everybody, so the counter will wrap some time in the year 2038, which is less than 14 years in the future. How do you fix the fact that, on 32-bit systems (in the x86 world that's limited to i386/i686, which is less and less common, but the world is not only x86), time-related functions return 32-bit-wide values? You either return wrong values or you break the ABI.
Debian chose to do both: https://wiki.debian.org/ReleaseGoals/64bit-time . Wherever they could, they recompiled much of the stuff, changing package names from libsomething to libsomethingt64; where they couldn't recompile, the app still "works" (does not segfault) but links with a 32-bit library that just gets wrong values. Other distros had a flag day: they essentially recompiled everything and didn't bother with non-packaged stuff that was compiled against the old 32-bit libs, thus breaking ABI.
If you alter signatures or data structures in the published header file, there is breakage.
If you add new signatures or data structures, software compiled against the previous version should still work with the new version.
In my opinion the whole issue is more important on Windows than on Linux. Just recompile the application against the new library or keep both the old and the new soversion around.
Some Linux distributions go into major contortions to make ABI stability work, and still, compiled applications that are supposed to work on newer distros crash. It is a waste of resources.
> Before this release, you could use curl to do “third party” transfers over FTP, and in this release you could no longer do that. That is a feature when the client (curl) connects to server A and instructs that server to communicate with server B and do file transfers among themselves, without sending data to and from the client.
That sounds like a super useful feature that would be great if more FTP servers supported it. I guess FTP itself is a dying protocol these days, but it's extremely simple and does what it says on the tin.
Sadly, FTP didn't have a smooth TLS transition. There is FTPS (and, separately, SFTP, which is a different protocol over SSH), but I have rarely seen it in use.
I think it will survive as a protocol, as a fallback mechanism. Ironically, I've used FTP on a smartphone here and there because smartphone OSes are abysmally useless. Don't get me started on your awesome proprietary sync app; I don't do trashy, and they all are.
Otherwise I do everything today through scp and http, but it is less optimal technically. It just happens to be widely available. FTP theoretically would provide a cleaner way for transfers and permission management.
FTP is still the best way to transfer files between a phone and a PC.
Well, Android anyway. I don't know how things work in the Apple world. It's bizarre that whatever the "official" method of file transfer is, it's so bad. Also, managing files on Android is, on its own, very bad. FTP allows connecting a decent file manager to the phone and doing the management externally.
libcurl is part of the macOS API and de-facto standard on any Linux box and commonly available on BSD boxen as well. Microsoft has been shipping curl.exe for a while as well, though not the library.
If Microsoft would also ship the library in %system32%, we would have a truly cross-platform and stable, OS-provided and -patched high-level network protocol client.
As of one of the later releases of Windows 10, "curl.exe" is in System32 (or somewhere else on PATH), but if you type "curl" in a powershell you get the alias. You need to type "curl.exe" to get the binary.
Guessing this is for backwards compatibility with scripts written for the days when it was just PowerShell lying to you.
My personal favorite *nix utility nobody knows ships with Windows is tar.exe, which is libarchive's bsdtar and therefore supports a wide variety of archive formats.
["1/…" is 7z syntax for the first image in the WIM file — Windows 11 Home, in this example — included here to avoid redundant output; curl.exe is included in all images.
"…/SysWOW64/curl.exe" is a 32-bit curl build (32-bit Windows programs see "Windows\SysWOW64" as "Windows\System32").]
> My personal favorite *nix utility nobody knows ships with Windows is tar.exe, which is libarchive's bsdtar and therefore supports a wide variety of archive formats.
I never needed curl on Windows, because on OSes that provide a full-stack developer experience such things are part of the platform SDK and the rich language runtimes.
It is only an issue with C and C++, and their reliance on POSIX to complement the slim standard library, effectively making UNIX their "runtime" for all practical purposes.
You missed the point. There are more OSes out there. If you had to relearn a brand new "full stack experience" for every OS just to ship a cross-platform app, you wouldn't have any time left for business logic. libcurl is great and it runs everywhere.
And now for a personal opinion: I'll take libcurl over .NET's IHttpClientFactory any day.
I have been programming since the glory days of 8 bit home computers, try shipping cross-platform applications in Z80 and 6502, and we still had plenty of time left to do lots of stuff.
Additionally, writing little wrappers around OS APIs is something that every C programmer has known since K&R C went outside UNIX V6, which again is why POSIX became a thing.
What? You don't need to use it explicitly most of the time.
Just `new HttpClient` and cache/dispose it. Or have DI inject it for you. It will do the right thing without further input.
The reason this factory exists is to pool the underlying HttpMessageHandlers, which hold an internal connection pool for efficient connection/stream reuse (in the case of HTTP/2 or QUIC) in large applications that use DI.
Rust has explicit provision for C-style ABI if you want that and are willing to pay for it. The provision could be improved, and valuable work on that has taken place and is ongoing, but I think the harsh lesson in this space isn't for Rust but for C++ which fell into just delivering absolute ABI freeze years ago out of terrorized panic rather than any coherent policy or strategy.
Plenty of people told WG21 (the C++ committee) that they need to fix this, P1863R0 ("ABI: Now or Never") is five years old on Friday. If you bet "Never" when that paper was written, you're ahead by five years, if you understand how this actually works you know that you have much longer. When Titus wrote that he estimated 5-10% perf left on the table. That's a thin margin, on its own it would not justify somebody to come in and pick that up. But it isn't on its own, there are a long list of problems with C++.
API would be the source level. ABI would be that you can mix the binary program and the binary libcurl library. (Curl is attempting to preserve both)
And the opposite is true too: you can break API (by renaming a field) while preserving the ABI (since it didn't move/change type).
Thanks for explaining, people! To me ABI immediately triggers the aarch64 / x86 etc vibes, but that indeed makes sense!
Now I'm wondering if/how managed code of e.g. dotnet solves this issue, but that might be too much of a tangent.
There is a distinction in the dotnet world too. The docs[1] seem to refer to ABI as "Binary compatibility" and to API as "Source compatibility".
Jon Skeet has some good examples [2]
[1] https://learn.microsoft.com/en-us/dotnet/core/compatibility/... [2] https://codeblog.jonskeet.uk/2018/04/13/backward-compatibili...
> “third party” transfers over FTP,
Ohh that takes me back, that feature was used heavily in the FXP warez scene (the one the proper warez people looked down on), you’d find vulnerable FTP servers to gain access to, and the best ones would support this. That way you could quickly spread releases over multiple mirrors without being slowed down by your home internet.
Those were fun times, but torrent fits this use case now. You seed the chunk a single time and it magically gets distributed, like you said, without being slowed down by your home internet...
That's progress, I believe.
I wish developers of JavaScript frameworks had this level of commitment to stability.
I agree; it's admirable.
I wish the Python core developers had even the level of commitment to stability that developers of JavaScript frameworks do. Instead they intentionally break API compatibility every single release, I suppose because they assume that only worthless ideas are ever expressed in the form of Python programs.
I’ve been writing Python for 25 years and can’t remember projects I work on ever having compatibility broken from one version to the next (with the exception of 2 to 3, of course). Occasionally it happens to a dependency, but it always seems to be something like Cython and not actual Python code.
But then, I normally try to stay on the leading edge. I think it’s more difficult if you leave it 2+ years between updates and ignore deprecation warnings. But with a year between minor releases, that leaves almost a two year window for moving off deprecated things.
I think that’s reasonable. I don’t experience the pain you describe, and I don’t get the impression that the Python project treats Python programs as “worthless”. The people working on Python are Python users too, why would they make their own lives difficult?
When you bring up "ignoring deprecation warnings", you're implicitly confirming what I said, despite explicitly denying it. Perhaps this is due to a misunderstanding, so I will clarify what I was saying.
Nobody has to worry about ignoring deprecation warnings in libcurl, or for that matter in C, in English, in Unicode, or in linear algebra. There's no point at which your linear algebra theorems stop working because the AMS has deprecated some postulates. Euclid's theorems still work just as well today as they did 2000 years ago. Better, in fact, because we now know of new things they apply to that Euclid couldn't have imagined. You can still read Mark Twain, Shakespeare, or even Cicero without having to "maintain" them first, though admittedly you have to be careful about interpreting them with the right version of language.
That's what it means for intellectual work to have lasting value: each generation can build on the work of previous generations rather than having to redo it.
Last night I watched a Primitive Technology video in which he explains why he wants to roof his new construction with fired clay tiles rather than palm-leaf thatch: in the rainy season, the leaves rot, and then the rain destroys his walls, so the construction only lasts a couple of years without maintenance.
Today I opened up a program I had written in Python not 2000 years ago, not 200 years ago, not even 20 years ago, but only 11 years ago, and not touched since then. I had to fix a bunch of errors the Python maintainers intentionally introduced into my program in the 2-to-3 transition. Moreover, the "fixed" version is less correct than the version I used 11 years ago, because previously it correctly handled filename command-line arguments even if they weren't UTF-8. Now it won't, and there's evidently no way to fix it.
I wish I had written it in Golang or JS. Although it wasn't the case when I started writing Python last millennium, a Python program today is a palm-leaf-thatched rainforest mud hut—intentionally so. Instead, like Euclid, I want to build my programs of something more lasting than mere masonry.
I'm not claiming that you should do the same thing. A palm-leaf-thatched roof is easier to build and useful for many purposes. But it is no substitute for something more lasting.
Today's Python is fine for keeping a service running as long as you have a staff of Python programmers. As a medium of expression of ideas, however, it's like writing in the sand at low tide.
Even for Mathematics though, which I think is where your point is strongest, this is not actually true.
If you're a Mathematician in the nineteenth century (for example Peano) you know what a set is in some sense, and if pressed you'll admit that you don't really have a formal way to explain your intuition. Nevertheless you feel content to write about sets as if they're formally defined, even for the infinite sets.
Turns out you've been relying on an unstated assumption, the Axiom of Choice. When Ernst Zermelo and Abraham Fraenkel wrote down axioms for a coherent set theory, they discovered that, oops, the system works fine either way with regard to this axiom. Many years later it was proved to be entirely independent, and yet mathematicians had been gaily assuming it was true, without saying so.
So in a sense Peano and, say, Turing are working with different, slightly incompatible versions of mathematics. Euclidean geometry works fine... but today you'll be told that our universe's geometry isn't actually Euclidean, so if something big enough doesn't work, that's to be expected. Euclid's model is neat, but it's not actually a scale model of our universe; our universe is much stranger.
> Moreover, the "fixed" version is less correct than the version I used 11 years ago, because previously it correctly handled filename command-line arguments even if they weren't UTF-8. Now it won't, and there's evidently no way to fix it.
Isn't fixing this the whole point of Python's "surrogateescape" handling? Certainly, if I put the filename straight from sys.argv into open(), Python will pass it through just fine:
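The round-trip that makes this work can be sketched as follows (a minimal illustration with a made-up non-UTF-8 filename; the byte string here is not from the thread):

```python
import os

# A non-UTF-8 filename as raw bytes, e.g. a Latin-1 "ÿ.txt" on disk:
raw = b"\xff.txt"

# Python 3 decodes OS-supplied strings (sys.argv, os.listdir, ...) with
# errors="surrogateescape": each undecodable byte becomes a lone surrogate
# (0xFF -> U+DCFF) instead of raising UnicodeDecodeError.
name = raw.decode("utf-8", errors="surrogateescape")
assert name == "\udcff.txt"

# Encoding back with the same error handler (which is what open() does via
# os.fsencode) recovers the exact original bytes, so the file is opened
# under its true, non-UTF-8 name.
assert name.encode("utf-8", errors="surrogateescape") == raw

# But a strict encoder rejects the lone surrogate -- hence the caveat about
# logging or displaying such filenames.
try:
    name.encode("utf-8")
except UnicodeEncodeError:
    print("cannot encode for display without surrogateescape")
```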
Though I suppose it could still be problematic for logging filenames or otherwise displaying them.

I didn't realize that surrogateescape (UTF-8B) had become the default! If so, that's a great improvement. Thank you for the correction. (I can confirm that Python does indeed try to open "\377.txt" on my machine with this command line.)
It looks like you are mad about the python2->python3 transition? Yeah, that was pretty messed up, and the new I/O encoding is pretty broken. Why would you design a system which cannot print every filename to the screen? But I suspect the reason is that its designers never actually worked on systems with multiple encodings, and so were not aware that people actually _do_ redirect output to files and/or pipe it to iconv.
But other than that botched transition, Python is very stable. My Python photo downloader from 2004 did not need any changes throughout Python 2's lifetime and still works today (using the 2.7 interpreter). The oldest Python 3 script I've found is from 2019, and it still works just fine without any changes.
Do you have any examples other than the 2-to-3 transition? Maybe within the last 5 years?
If you take the hard line they have and consider any compatibility break to be a failure, there are plenty of examples.
Most notable is PEP 594 (“Removing dead batteries from the standard library”), which removed 19 obsolete modules. They were deprecated in Python 3.11 (2022) and removed in Python 3.13 (2024).
So everybody has had two years to update their obsolete code if they want to immediately use new versions of the Python interpreter, and if that isn’t long enough, Python 3.12 is officially supported until 2028. So if you use a module like sunau, which handles an audio format used by SPARC and NeXT workstations in the 80s, then you have six years to figure out what WAVs or MP3s are.
> Nobody has to worry about ignoring deprecation warnings in libcurl, or for that matter in C, in English, in Unicode, or in linear algebra. There's no point at which your linear algebra theorems stop working because the AMS has deprecated some postulates. Euclid's theorems still work just as well today as they did 2000 years ago. Better, in fact, because we now know of new things they apply to that Euclid couldn't have imagined. You can still read Mark Twain, Shakespeare, or even Cicero without having to "maintain" them first, though admittedly you have to be careful about interpreting them with the right version of language.
I mean, that last part really unravels your point. Linguistic meanings definitely drift significantly over time in ways that are vitally important, and there are no deprecation warnings about them.
Take the second amendment to the USA constitution, for example. It seems very obviously scoped to “well-regulated militias”, but there are no end to the number of gun ownership proponents who will insist that this isn’t what was meant when it was written, and that the commas don’t introduce a dependent clause like they do today.
Take the Ten Commandments in the Bible. It seems very obvious that they prohibit killing people, but there are no end to the number of death penalty proponents who are Christian who will insist that what it really prohibits is murder, of which state killings are out of scope, and that “thou shalt not kill” isn’t really what was meant when it was written.
These are very clearly meaningful semantic changes. Compatibility was definitely broken.
If “you have to be careful about interpreting them with the right version of the language”, then how is that any different to saying “well just use the right version of the Python interpreter”?
> Today I opened up a program I had written in Python not 2000 years ago, not 200 years ago, not even 20 years ago, but only 11 years ago, and not touched since then. I had to fix a bunch of errors the Python maintainers intentionally introduced into my program in the 2-to-3 transition.
In your own words: You have to be careful about interpreting it with the right version of the language. Just use a Python 2 interpreter if that is your attitude.
I don’t believe software is something that you can write once and assume it will work in perpetuity with zero maintenance. Go doesn’t work that way, JavaScript doesn’t work that way, and Curl – the subject of this article – doesn’t work that way. They might’ve released v7.16.0 eighteen years ago, but they still needed to release new versions over and over and over again since then.
There is no software in the world that does not require maintenance – even TeX received an update a few years ago. Wanting to avoid maintenance altogether is not achievable, and in fact is harmful. This is like sysadmins who are proud of long uptimes. It just proves they haven’t installed any security patches. Regularly maintaining software is a requirement for it to be healthy. Write-once-maintain-never is unhealthy and should not be a goal.
I've had programs written for Python 2 run on Python 3 the first time by just changing print statements to print(). I guess they weren't very big though.
There’s JavaScript code we have that won’t compile after 3 months. Pretty sad state of affairs these days.
Compile is rather generous. The massive teetering dependency tree certainly is prone to breaking, but I am not certain I can draw much of a parallel between the API stability of JS and the ABI stability of libcurl. If anything, I can still properly render some pretty dang old websites just fine and, for the most part, they will work — mainly thanks to their light use of JavaScript, probably with no external dependent libraries. Though your mileage will vary if you go back to the browser-specific APIs from when all that browser-war nonsense was going on. I love to bash on JS as much as the next guy, but a large portion of the blame lies with the modern use of it, not with any inherent quality of the language breaking compatibility over time.
How?
We have an ancient (in javascript years) app that webpacks stupid template systems and polyfills and all sorts of cruft that should have never been in the first place and we haven't had to touch any of that in years.
What dependency or feature forces you to chase versions?
What are some things you could do in a C project to cause ABI breakage?
I ask this because I'd like to know what practices I might want to avoid to guarantee that there is no ABI breakage in my C project.
For those of us who are in the no-ABI-breakage business (I used to be when I worked on JavaScriptCore, which is a framework on Apple platforms that folks dynlink to), any behavior change that breaks an app with a nontrivial number of users is considered an ABI break.
Some function in your library used to return 42 and now returns 43, and an app with 10,000 users asserted that you returned 42? That's an ABI break.
There are more obvious examples: renaming functions that someone linked to? ABI break. Change the size of a struct in a public header? ABI break, usually (there are mitigations for that one).
The list goes on and Hyrum's law very much applies.
Another way to define ABI compatibility is that folks who promise it are taking the same exact stance with respect to folks who link against them as the stance that Linus takes regarding not breaking userspace: https://lkml.org/lkml/2012/12/23/75
If I promise you ABI compat, and then I make a change and your app breaks, then I'm promising that it's my fault and not yours.
If you move fields around in a struct that is passed between a library and a library consumer, it will cause ABI breakage.
Basically anything that moves bytes around in memory for data structures that are passed around. Of course any API breakage is also an ABI breakage.

> Of course any API breakage is also an ABI breakage.
I don't think this is true. You can change things on a superficial level in a source language that still compiles down to the same representation in the end.
Yeah true, renaming a field in a struct is an API breakage but not an ABI breakage as long as you don't move it around or change its type.
One big thing to watch out for is structs. You can’t add extra members later.
If you have a struct which might grow, don’t actually make it part of the ABI, don’t give users any way to find its size, and write functions to create, destroy and query it.
The solution in old Win32 APIs was to have a length field as the first member of the struct. The client sets this to sizeof(the_struct). As long as structs are only ever extended the library knows exactly which version it is dealing with from the length.
This got a bit messy because Windows also included compatibility hacks for clients that didn't set the length correctly.
> If you have a struct which might grow, don’t actually make it part of the ABI, don’t give users any way to find its size, and write functions to create, destroy and query it.
Thanks! This is very insightful. What is a solution to this? If I cannot expose structs that might grow what do I expose then?
Or is the solution something like I can expose the structs that I need to expose but if I need to ever extend them in future, then I create a new struct for it?
You expose the structs as opaque types behind a pointer, and expose functions that act on that pointer.
> What is a solution to this? If I cannot expose structs that might grow what do I expose then?
Option 1: If allocating from the heap or somewhere otherwise fixed in place, then return a pointer-to-void (void *) and cast back to pointer-to-your-struct when the user gives it back to you.
Option 2: If allocating from a pool, just return the index.
Unless I've been wrong all these years: in C you can write a header file which says "this is a pointer to T" without ever saying what's in T. C will not allow people to access the members of T, since it doesn't know what they are, but they can hold the pointer, since the compiler doesn't need to know how T is laid out to know it's a pointer, and all pointers are laid out the same in C.
Then for your internal stuff you define what's inside T and you can use T normally.
Also, even if you're returning an index, learn from Unix's mistake and don't say it's an integer. Give it some other type, even if that type is just an alias for a primitive integer type, because at least you are signalling that these are not integers and you might make a few programmers not muddle these with other integers they've got. A file descriptor is not, in fact, a process ID, a port number, or a retry count, and 5 is only any of those things if you specify which of them it is.
I think you mean void pointers. The callee will then cast those pointers to an actual type it knows about (and then possibly go horribly wrong).
No, I don't mean void pointers, I'm talking about just pointers to some unknown type T.
What's inside a T? How can we make one? We don't know, but that's fine since we have been provided with APIs which give us a pointer-to-T and which take a pointer-to-T so it delivers the opacity required.
No, you can just use a pointer to struct S without ever defining the fields of S, no void pointers required.
Incomplete types. Void is just a special kind of incomplete type.
You reserve fields in them for future use.
The next biggest humanity-wide problem in this area will be 32/64-bit time, I believe. We store timestamps as seconds since 1970-01-01 00:00 UTC, and it turns out someone thought 32 bits would be enough for everybody, so the counter will wrap sometime in 2038, which is less than 14 years in the future. How do you fix the fact that, on 32-bit systems (on x86 that means i386/i686, which is less and less common, but the world is not only x86), time-related functions return 32-bit-wide values? You either return wrong values or you break the ABI.
Debian chose to do both: https://wiki.debian.org/ReleaseGoals/64bit-time . Wherever they could, they recompiled much of the stack, changing package names from libsomething to libsomethingt64; where they couldn't recompile, the app still "works" (does not segfault) but links against a 32-bit library that simply returns wrong values. Other distros had a flag day: they essentially recompiled everything and didn't bother with non-packaged software compiled against the old 32-bit libraries, thus breaking the ABI.
If you alter signatures or data structures in the published header file, there is breakage.
If you add new signatures or data structures, software compiled against the previous version should still work with the new version.
In my opinion the whole issue is more important on Windows than on Linux. Just recompile the application against the new library or keep both the old and the new soversion around.
Some Linux distributions go into major contortions to make ABI stability work, and still compiled applications that are supposed to work with newer distros crash. It is a waste of resources.
> Before this release, you could use curl to do “third party” transfers over FTP, and in this release you could no longer do that. That is a feature when the client (curl) connects to server A and instructs that server to communicate with server B and do file transfers among themselves, without sending data to and from the client.
That sounds like a super useful feature that would be great if more FTP servers supported it. I guess FTP itself is a dying protocol these days, but it's extremely simple and does what it says on the tin.
Sadly, FTP didn't have a smooth TLS transition. There is FTPS (alongside SFTP), but I have rarely seen it in use.
I think it will survive as a fallback protocol. Ironically, I used FTP on a smartphone here and there because smartphone OSes are abysmally useless. Don't get me started on your awesome proprietary sync app; I don't do trashy, and they all are.
Otherwise I do everything today through scp and http, but it is less optimal technically. It just happens to be widely available. FTP theoretically would provide a cleaner way for transfers and permission management.
FTP is still the best way to transfer files between a phone and a PC.
Well, Android anyway. I don't know how things work in the Apple world. It's bizarre that whatever the "official" method of file transfer is, it's so bad. Also, managing files on Android is, on its own, very bad. FTP allows connecting a decent file manager to the phone and doing the management externally.
Unfortunately this is no longer maintained, but it's interesting nonetheless: https://abi-laboratory.pro/?view=timeline&l=curl
libcurl is part of the macOS API and de-facto standard on any Linux box and commonly available on BSD boxen as well. Microsoft has been shipping curl.exe for a while as well, though not the library.
If Microsoft would also ship the library in %system32%, we would have a truly cross-platform and stable, OS-provided and -patched high-level network protocol client.
(So that probably won't happen)
> Microsoft has been shipping curl.exe for a while as well, though not the library.
I'm not sure if this is accurate. Why do they include a default alias in Powershell for `curl` that points to the `Invoke-WebRequest` cmdlet then?
I've always installed curl myself and removed the alias on Windows. Maybe I've never noticed the default one because of that.
As of one of the later releases of Windows 10, "curl.exe" is in System32 (or somewhere else on PATH), but if you type "curl" in a powershell you get the alias. You need to type "curl.exe" to get the binary.
Guessing this is for backwards compatibility with scripts written for the days when it was just PowerShell lying to you.
https://curl.se/windows/microsoft.html
curl.exe does ship with Windows. Given a Windows 11 ISO freshly downloaded from microsoft.com, we have
This is, in fact, curl. My personal favorite *nix utility nobody knows ships with Windows is tar.exe, which is libarchive's bsdtar and therefore supports a wide variety of archive formats.

["1/…" is 7z syntax for the first image in the WIM file — Windows 11 Home, in this example — included here to avoid redundant output; curl.exe is included in all images. "…/SysWOW64/curl.exe" is a 32-bit curl build (32-bit Windows programs see "Windows\SysWOW64" as "Windows\System32").]
> My personal favorite *nix utility nobody knows ships with Windows is tar.exe, which is libarchive's bsdtar and therefore supports a wide variety of archive formats.
TIL. However, some caveats: https://github.com/libarchive/libarchive/issues/2092
Hm, since a DLL is a PE file and EXE is a PE file, could one load the EXE as a DLL instead?
Edit, I had a recollection I saw something like that before, this might be that: https://www.codeproject.com/articles/1045674/load-exe-as-dll...
I just checked and the curl.exe on my system does not export any symbols, so not in this case.
It is possible to do that in the general sense though.
There are more OSes out there.
I never needed curl on Windows, because on OSes that provide a full stack developer experience such things are part of the platform SDK, and rich language runtimes.
It is only an issue with C and C++, and their reliance on POSIX to complement the slim standard library, effectively making UNIX their "runtime" for all practical purposes.
You missed the point. There are more OSes out there. If you had to relearn a brand new "full stack experience" for every OS just to ship a cross-platform app, you wouldn't have any time left for business logic. libcurl is great and it runs everywhere.
And now for a personal opinion: I'll take libcurl over .NET's IHttpClientFactory any day.
I have been programming since the glory days of 8 bit home computers, try shipping cross-platform applications in Z80 and 6502, and we still had plenty of time left to do lots of stuff.
Additionally, writing little wrappers around OS APIs is something that every C programmer has known since K&R C went outside UNIX V6, which again is why POSIX became a thing.
What? You don't need to use it explicitly most of the time.
Just `new HttpClient` and cache/dispose it. Or have DI inject it for you. It will do the right thing without further input.
The reason this factory exists is to pool the underlying HttpMessageHandlers, which hold an internal connection pool for efficient connection/stream reuse (in the case of HTTP/2 or QUIC) in large applications that use DI.
Take note, Rust.
Rust has explicit provision for C-style ABI if you want that and are willing to pay for it. The provision could be improved, and valuable work on that has taken place and is ongoing, but I think the harsh lesson in this space isn't for Rust but for C++ which fell into just delivering absolute ABI freeze years ago out of terrorized panic rather than any coherent policy or strategy.
Plenty of people told WG21 (the C++ committee) that they need to fix this, P1863R0 ("ABI: Now or Never") is five years old on Friday. If you bet "Never" when that paper was written, you're ahead by five years, if you understand how this actually works you know that you have much longer. When Titus wrote that he estimated 5-10% perf left on the table. That's a thin margin, on its own it would not justify somebody to come in and pick that up. But it isn't on its own, there are a long list of problems with C++.