Note that NixOS and reproducible builds did not detect the xz backdoor, and in fact NixOS shipped the malicious builds of xz (though they didn't do anything because the malware didn't target NixOS):
> I am a NixOS developer and I was surprised when the backdoor was revealed to see that the malicious version of xz had ended up being distributed to our users.
As always theory and reality are different, and the thing that made xz possible was never a technical vulnerability with a technical solution—xz was possible because of a meatspace exploit. We as a community are very very bad at recognizing that you can't always just patch meatspace with better software.
> NixOS and reproducible builds did not detect the xz backdoor
Nix's declarativeness is quite useful for increasing protection against exploits in a number of ways. Unfortunately, there is still a lot of untapped potential. My number one priority would be to implement fine-grained ephemeral containers. Guix has these already.
This would make it convenient to run every single process with restricted privileges, including no access to ~/, except those directories that are needed by the task. That would prevent e.g. a rogue pip package from stealing SSH keys.
Still, I think the xz backdoor did not work on NixOS because of its unusual, non-FHS-compliant filesystem structure.
> I think the xz backdoor did not work on NixOS because of its unusual, non-FHS-compliant filesystem structure.
Right, but this is not part of the security model; it's an incidental attribute of the OS that exists for other reasons and would have been easy to work around if the attacker had prioritized it. The only reason it didn't work is that the attacker didn't bother making it work on NixOS, not that they couldn't have if they'd wanted to.
> Still, I think the xz backdoor did not work on NixOS because of its unusual, non-FHS-compliant filesystem structure.
It didn't work on NixOS because the build-time hook that inserted the backdoor only activated when it detected that it was being built for an RPM or Debian package.
You don’t even need to run in a container for this. It’s possible to do this entirely in systemd service configuration. The easiest way is just to have a separate user for every service and to reduce the amount of stuff running as root. You can also restrict filesystem access, network access, and even syscall access (although some of this may be implemented as a container under the hood).
Unfortunately, this wouldn’t help with the xz vulnerability because the SSH server is the one loading the compromised library in that case (indirectly). Since SSH itself needs to have access to the private keys, it’s not really easy to secure it against vulnerabilities in the library it loads itself.
On the flip side, unless the vulnerability is in one of the important binaries/shared libraries, the amount of damage it can cause is probably quite contained simply by having good user isolation. Nix can make this analysis really simple (because of explicitly specified dependencies), so you can crack down on critical dependencies a lot more easily.
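For a concrete feel, most of this can be tried ad hoc with `systemd-run`, which starts a transient service; the directives below are standard systemd options, while the command path itself is just a placeholder:

```shell
# Run an untrusted command as a transient service: throwaway UID,
# no access to any user's home, no network, read-only OS tree.
sudo systemd-run --pty --wait \
  -p DynamicUser=yes \
  -p ProtectHome=yes \
  -p PrivateNetwork=yes \
  -p ProtectSystem=strict \
  -p NoNewPrivileges=yes \
  /usr/bin/untrusted-tool
```

The same `-p` directives go verbatim into the `[Service]` section of a unit file when you want them permanently.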
> It’s possible to do this entirely in systemd service configuration
Sure, but I think that leaves out many use cases. What if I want to e.g. start a Python shell that has access to certain directories, and nothing else, including no network access?
Nix provides a good way of doing that for common use cases, as it has decent support for Firejail. But I would like something like Guix containers, which is convenient for any ad hoc use case. This greatly reduces any security threat. It's a poor-man's QubesOS.
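For reference, the Guix feature in question looks roughly like this (the package names here are illustrative):

```shell
# Ephemeral container: only the listed packages, an isolated home,
# and only the current directory shared read/write -- no network.
guix shell --container --share=$PWD python python-requests -- python3

# Grant network access explicitly only when the task needs it:
guix shell --container --network python python-requests -- python3
```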
Yeah, I guess most existing Linux stuff that is actually configured (so not SELinux etc) is geared at system processes not user ones. Transparently running all user applications in properly isolated containers would be quite neat.
Does Firejail handle dynamic access (e.g. I may want some xz invocation to be able to work on my private keys, but not THIS specific one, where I’ve given it a completely different file)?
I quite like pledge/unveil for this kind of thing on OpenBSD, although that’s for a different threat model.
Firejail and bwrap are setuid sandbox frontends. You can wrap e.g. a new xz invocation to let it work on your private keys.
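A minimal sketch of such a wrapper with bwrap, assuming a working directory you actually want exposed (paths are illustrative):

```shell
# Read-only view of the filesystem, an empty tmpfs over $HOME so no
# dotfiles or SSH keys leak in, one directory bound back in, no network.
bwrap --ro-bind / / \
      --proc /proc --dev /dev \
      --tmpfs "$HOME" \
      --bind "$HOME/work" "$HOME/work" \
      --unshare-net \
      xz -9 "$HOME/work/archive.tar"
```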
But Nix relies on ephemeral shells and flakes, and they don't play so well with each other. The interface is clumsy. Guix, in contrast, has a pretty nice set of CLI switches for these features.
Even normal distros should prioritize a simple graphical UI for this. Running programs with minimal privileges would significantly enhance security. The kernel features for achieving this are already there.
Literally any container runtime can do this for you. No one does it though because it's annoying as hell to upfront figure out what you want, and then be unable to increase that list later.
Like if I were to try and find a not annoying way to do this, it would be to snapshot and overlay mount my filesystem at process launch time, then give a heuristic warning at process termination about what was changed.
But then we're still basically in SELinux territory, because of course there are things we don't want to allow read access to up front - i.e. SSH keys.
(I don't know what the lesson here is, other than "for the love of god could we get a standardized secrets filesystem and/or API or something". Looking at you, Hashicorp Vault and ~/.vault-token.)
This is such a headache with snap and flatpak though.
If you're trying to do something that the package maintainer thought of in 5 minutes of testing then it's usually fine, but still inside any non-trivial application you'll often find parts that try to use extra privileges that aren't documented because they're not normally considered "privileges".
Some examples of sandboxing issues:
* FreeCAD doesn't have access to /usr which means when you try to make a Draft ShapeString you can't pick any fonts
* FreeCAD stores the path to the shape of a milling cutter inside your project, but that path is inside /mnt/.FreeCAhjhffg or whatever so it doesn't work after you restart the program
* Inkscape gcodetools has its own "idiomatic" way of saving out results which doesn't use a file dialog, and therefore can't save any gcode because it can't write to your home directory
* Matrix is only allowed to access ~/Downloads/ which means you can't share any files with anyone unless you copy them into Downloads first
* "Recent" in the Gimp file picker doesn't show any of my recent files, presumably because it is using its own sandboxed file dialog which doesn't have access to the actual "Recent" files
* Docker can't access /tmp/.X11-unix which means you can't give docker containers access to X
In all of these cases you can work around it of course (mainly by having an accurate mental model of the problem and guessing a path that the program is allowed to access), but the user experience is just made worse for no benefit.
The general theme is that the user wants the sandboxed program to be able to do something that the person who assigned privileges didn't think of.
So maybe if we must do sandboxing, let's make it easy for users to break programs out of the sandbox when it suits them?
This looks like a chicken and egg problem to me. You can't sandbox things properly because things don't specify their required privileges properly. Things don't specify their privileges properly because things aren't sandboxed so there's no need to think about that.
As a user, I quite like the iOS approach of apps "sharing" their resources with other apps or asking for permission to access this or that collection of resources. This can probably be improved and adapted to a non-touch model, but I think the concept is nice.
But, of course, apps have to be built for this kind of environment, so I think there will unfortunately be some janky transition period, with the customary competing, incompatible implementations.
Firejail sandboxing works well in these scenarios. You can trivially grant more or fewer privileges, including removing the entire sandbox if you wish.
It actually ships with rulesets for hundreds of programs that tend to be quite polished and work out of the box.
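A few representative invocations (the option names are standard Firejail ones; the target programs are arbitrary):

```shell
# Use the shipped profile for a known program:
firejail inkscape

# Tighten an ad hoc run: no network, empty private home:
firejail --net=none --private untrusted-tool

# Loosen: keep the sandbox but allow one extra directory:
firejail --whitelist=~/Projects untrusted-tool

# Or skip the program's profile for one run (minimal default sandbox only):
firejail --noprofile untrusted-tool
```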
Personally, I dislike Flatpak because it doesn't let me control the dependencies of packaged software, and I feel we lose one of the most important advantages of Linux.
You raise many good points. One of the primary reasons I don't use nix or guix for everything is because it seems like there's too much magic and it's often too difficult for me to figure out how to modify a detail if a toggle wasn't explicitly provided for it. Even just getting insight into the chain of events to debug things was incredibly obtuse the last time I played with nix. Like cmake on steroids. At least cmake will spit out a multi-megabyte trace file for me to pick through. (I'm convinced cmake is an elaborate conspiracy to waste developer time.)
> let's make it easy for users to break programs out of the sandbox when it suits them?
At least for Flatpak given the things you described this is quite straightforward via bind mounts. Although it did seem a bit goofy having an entire list of per-application bind mounts in my fstab. Maybe things have improved since I last tried going that route?
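For what it's worth, per-application grants are now expressible directly with `flatpak override` (the app ID below is a placeholder), which avoids the fstab route entirely:

```shell
# Grant one app read/write access to an extra directory, per user:
flatpak override --user --filesystem=~/Projects org.example.SomeApp

# Read-only variant:
flatpak override --user --filesystem=~/Reference:ro org.example.SomeApp

# Inspect or undo the overrides:
flatpak override --user --show org.example.SomeApp
flatpak override --user --reset org.example.SomeApp
```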
> "idiomatic" ... doesn't use a file dialog,
That's an Inkscape (plugin?) bug plain and simple. GUI apps should be using the appropriate xdg portals for all supported operations at this point. The only excuse (IMHO) is for missing or broken functionality.
It would be like a system tray widget not working and blaming the DE instead of the program that fails to implement the decently old and widely adopted standard.
> Matrix ... you can't share any files with anyone
Which client is this? Anyway, the xdg portal should work. Did the dev try it? Programs shouldn't need blanket access just to save or open specific user-supplied paths as a one-off. That's a large part of the point of sandboxing stuff in the first place.
> if they "aren't" doing that then it doesn't help.
True enough. To be clear I don't fault anyone for not using new or known to be broken or little known standards.
However at some point responsibility has to shift to the developers. There are standard ways of doing things. Just as you can't expect an arbitrary project to support your pet API that hardly anyone uses, developers can't expect major distributions or the majority of users to cater to their refusal to conform to widely accepted standards.
I'm not saying you shouldn't use a particular program. Just that I don't think it's reasonable to fault the tooling for certain things.
> This would make it convenient to run every single process with restricted privileges, including no access to ~/
Please no. I understand why Flatpaks do it, but this is one of the most ridiculously annoying things about the Flatpak sandbox. You can often only drag 'n drop from ~/Downloads/, and dragging from any other location causes the receiving application to glitch out, fail silently, or fail with a general error. Hell, you sometimes can't even copy-paste an image from one application to another via the clipboard! Meanwhile on macOS and Windows it works perfectly.
Why? I am just asking for a simple UI, which Guix already has. Mainly for CLI applications. The idea is to be able to launch an ephemeral shell with any combination of packages, filesystem R/W privileges, and network access in a convenient way.
I think launching e.g. a Python shell with some packages that are potentially compromised, and letting those read ~/.ssh and whatever else they want, is fundamentally insecure. Rogue PyPI packages that steal SSH keys are not a theoretical security breach; it has already happened several times [1].
The current security model in Unix is untenable. But I agree well-implemented sandboxing should be frictionless. What you are experiencing is probably an X or Wayland sandboxing glitch. I also dislike Flatpak, for other reasons, but that doesn't make sandboxing a bad abstraction. It's just that we don't happen to like this particular implementation.
A UI would be fine, but having no permission to access ~/ is a terrible default. It causes a lot of breakage and glitchiness because applications are simply not written with that limitation in mind. This is why Flatpaks often crap out. It would cause a huge number of applications/binaries to crap out if it became the default in NixOS, and lord knows NixOS has enough sharp edges already.
The best solution would be a framework akin to macOS's that pops up 'allow application X to access folder Y from now on?' in the UI, or as a terminal prompt in the CLI, whenever an application tries to access a folder. With a special permission for "full home access" and "full disk access".
I’ll note that macOS doesn’t necessarily always let you do this the first time, it’ll pop a dialogue saying “hey, you’re cool with this app seeing (your files/other apps files), right?” I wonder if such a thing could be implemented in flatpak.
Then I have to ask for printing to always work. Then something else that I haven't even thought of yet. Security is hard because there are so many details that must be right or you compromise useability/usefulness.
Is it? There is such a thing as GUI automation. It's not a very popular exploit vector because it is visible, and because there are simpler non-GUI exploit vectors available. But nothing fundamentally stops an attacker process from pretending it's accessibility software and taking control of the mouse to do a drag-n-drop.
That should be a privileged status. If you manage to trick the user into installing malicious software followed by granting it elevated privileges then you likely didn't need such a roundabout method in the first place.
I mean I could also just not use computers, or the internet and that would be perfectly secure as well.
Breaking or making actively annoying expected and useful functionality isn't security (and has a long track record of leading to workarounds which compromise security).
Oh, i know. I was making a joke at Nix's expense, such that i fully expected to be downvoted for said joke, but i also expected some "nah-uh, that's not why;" and then a page of evangelism for NixOS, the one True™ OS.
The maintainer would just change the sandboxing constraints to weaken the software. Just like they did in the first place. You can try to make obfuscation difficult, but it’s always possible.
It's interesting to observe that every process is already restricted to only be able to do computation by default. Then along comes the OS with a plethora of holes in the sandbox to do various things. And then it's strange that we take those holes for granted and apply bandaid patches over them instead of not creating the holes to begin with. Why can't we ask the kernel to create a new extremely limited memory map and run some code until a certain software interrupt fires, then restore the previous context? Why should we have to start with a fully powered-up process and then close off its abilities, instead of starting with no capabilities except the in/out buffers and computation? In this model, there could be a deliberate backdoor in the computation and it still couldn't do anything besides DoS.
I get the potential need for such draconian measures in perhaps, some top secret government installations or something, but gosh that sounds tiring -- a lot like MacOS lately asking me "(AppName) wants to access your Downloads folder, cancel or allow?" when I have just directed it to open a file.
Even if you only use trusted applications and they have stringent security policies avoiding supply chain compromises, RCEs are a fact of life. E.g. iMessage vulnerabilities are found all the time and there are probably a lot of vulnerabilities that are not reported because state actors hold on to them. This is the reason why iOS uses application sandboxing and on top of that Blastdoor for iMessage.
Maybe Linux isn't as affected now because it is not very popular as a desktop system. But this issue will have to be addressed as/when Linux becomes more popular. Having networked clients that do image parsing, etc. (usually in C code) without any sandboxing will just lead to mass exploitation, data exfiltration, etc.
The Linux desktop has to move away from the '90s security model, where the internet was relatively safe and attackers were only after UID 0.
> a lot like MacOS lately asking me "(AppName) wants to access your Downloads folder, cancel or allow?" when I have just directed it to open a file.
I don't think it asks that when it goes through a portal (e.g. file dialog)?
> Still, I think the xz backdoor did not work on NixOS because of its unusual, non-FHS-compliant filesystem structure.
It didn't work on nixos because the build-time check included checking whether the build was being executed in a debian or fedora build environment. This was to avoid suspicious build failures on distros with weird toolchains or incompatible architectures/ABIs/library versions. (The backdoor was a precompiled .o file so rather ABI sensitive)
The usual answer in Android is "you can't do that". The primary difference from my perspective is that developers for those platforms design with the limitations in mind. Stuff on linux often just breaks and requires involved workarounds if it wasn't intended by the developer to be stuffed into a flatpak. (And might not even compile under nix without half a dozen monkey patches to the build system, let alone run once built.)
I think developers of desktop applications are generally open towards facilitating sandboxing, though. Most applications use standard XDG folders for files, use standard toolkit file pickers, etc.
I don't have hard data, but my impression is that the general tendency is for Flatpaks to do more sandboxing over time. When Flatpak was new, a lot of applications pretty much required completely opening their sandboxes; the same applications have much more limited privileges nowadays.
It's a long process, but at least with desktop applications there is progress. Unfortunately, the same can't really be said about command-line tools and development tools (NPM, cargo, pip, editor plugins, etc.).
I'm not even sure what it would look like for CLI tools. Probably the sandboxing tools themselves need better controls and better UX for those controls.
For CLI a solution would probably look a lot like unshare or (a more user friendly version of) setcap. The user would need to reach out to the sandbox to communicate what additional things to permit during this specific session.
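Half of this already exists in util-linux's `unshare`, assuming a kernel with unprivileged user namespaces enabled:

```shell
# New user + network namespace: the command keeps filesystem access,
# but sees no network interfaces (only an isolated, down loopback),
# so any attempt to phone home should fail with a network error.
unshare --user --map-root-user --net curl https://example.com
```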
And then inevitably someone would configure the equivalent of passwordless sudo at which point I wonder what the point of the whole thing was to begin with. Related, we need a better paradigm for CLI to differentiate between user versus programmatically generated input. A program shouldn't be able to impersonate me unless I explicitly granted it some extremely unusual privileges.
I still like the blogpost, because NixOS bills itself as a technical solution to prevent build artifacts that are decoupled from the source code (i.e. not reproducible), and the xz backdoor was hidden in build artifacts.
Yeah, it's a good blog post in part because it gets into the details of how it was possible that this vulnerability made it into NixOS, which purports to solve the problem.
Also, I'm not a NixOS critic either: I'm writing this from NixOS! I just don't think there's such a thing as a security cure-all as long as humans are in the loop anywhere.
That would only make the xz attack harder, not make it impossible. Just add some step to the action run that fetches blobs from an internet resource you control, then swap out the blobs for malicious ones.
Right. The title seemed to be suggesting that the Nix way of doing things might have detected the backdoor. It's actually intending to suggest ways that Nix could be changed in order to detect the backdoor.
Thank you very much for citing that, and for highlighting the fact that the exploit was, in fact, not detected by reproducible builds prior to other means of discovery.
In recent times, actual reality is often maligned when compared to how people feel about objective reality and how it meshes with their individual value systems.
I have personal values too, but I don't hold the opinion that actual reality is less significant than how I feel about it. It's not a popular perspective 8-/
I've always liked to say: The difference between theory and reality is that, in theory they're the same, and in reality they're not.
I hope the realization that the reproducible builds of NixOS _could_ have detected the xz exploit, but didn't, will lead to new advances in the analysis of those reproducible builds to detect other exploits sooner in the future.
I feel the author is a bit tunnel-visioned by what happened to happen this time. The Jia Tan incident has a sample size of one; it'd be a bit short-sighted to think that's the only way it could happen. You can imagine various scenarios where the defenses suggested here would not have worked.
Also I (as a nix user myself) think it's unlikely NixOS would have caught it. As evidenced by the fact that it didn't. (Yeah I realize I just said next time it might happen differently but it'd be foolish to put faith in nix without evidence).
NixOS is really irrelevant here because the xz backdoor specifically targeted RedHat and Debian. It's equally relevant to say the xz backdoor didn't affect Windows (ironically the backdoor was ultimately found by a Microsoft employee, an oft-overlooked detail).
how is that automatic if you need the checksum of the backdoor?!
automatic here means that code written prior to the existence/knowledge of the xz backdoor catches not only this specific attack, but the entire class of such tarball attacks.
But would that have solved anything here? The main maintainer was overwhelmed. The backdoor was obfuscated inside a binary blob that was there for so-called testing purposes. I doubt anyone was reviewing the binary blobs or the autoconf code used to load it in, and for that matter it’s not clear anything was getting reviewed. Fetching and building straight from GitHub doesn’t solve that if the malicious actor simply puts the binary blob into the repo.
Might not be a big chance depending on the project in question, but it's still tons more likely for someone randomly clicking through commits to find a backdoor committed to a git repo than within autogenerated text in a tarball. I click around random commits of random projects I'm interested in every now and then at least. At the very least it changes the attack from "yeah noone's discovering this code change" to "let's hope no random weirdo happens to click into this commit".
A binary blob by itself is harmless; you need something to copy it over from the build env to the final binary. So it's "safe" to ignore binary blobs that you're sure the build system (which should be entirely human-written and human-readable, and a small portion of the total code in sane projects) never touches.
That said, of course, there are still many ways for this to go wrong - some projects commit autogenerated code; bootstrapping can bring in a much larger surface area of things that might copy the binary blob; and more.
> At the very least it changes the attack from "yeah noone's discovering this code change" to "let's hope no random weirdo happens to click into this commit".
There's also value in leaving a trail to make auditing easier in the event that an attack is noticed or even if there is merely suspicion that something might be wrong. More visibility into internal processes and easier UX to sort through the details can easily make the difference between discovery versus overlooking an exploit.
Your comment doesn’t quite make sense: Building from source lets you (and everyone else) inspect the source, while building from provided tarballs means if you compare it to source it’ll be inherently different, as the autoconf process makes changes to the files.
If you’re downloading and executing a binary from github releases, then you’re completely at the mercy of the maintainer (nix only does that with closed source packages)
The article does in fact cite the reproducible-builds project, in the section on "Leveraging bitwise reproducibility". From your comment I am not convinced you understood the point of the article, which is:
* the NixOS build process was unable to perform a full-source build of xz because xz is required too early in the bootstrap;
* a proposed adjustment to nixpkgs to automatically detect compromises of nixpkgs dependencies which are required early in the bootstrap.
Other ecosystems can of course also attempt full-source builds and discover the discrepancy; the entire point of the article is that nixpkgs currently cannot.
If we want to focus on a thing that NixOS could have prevented, we should focus on the CrowdStrike incident. Being able to boot to yesterday's config because today's config isn't working would've mitigated most of the problems.
My point is that the lack of boot flexibility caused a lot of problems. If we want to be able to rely on people to get the job done even on days when something is wonky in the bits, then we should give them boot flexibility. NixOS just happens to do it especially well.
As for ZFS... Dealing in filesystem snapshots is comparatively a bit awkward. If you want to recreate that config elsewhere you have to move the whole snapshot rather than just the recipe for building it, and even then it'll break if the system architecture is different on the target machine. If you've got two of them (perhaps labeled "good" and "bad"), you're not going to get anything friendly when you try to diff them, nor is there an obvious way to use things like `git bisect` to reason about where the problem occurred.
None of these things are show stoppers, but working with code that defines some state is just so much easier than anything you get out of a filesystem which happens to remember that state, but can't tell you why it should be the way it is.
> As for ZFS... Dealing in filesystem snapshots is comparatively a bit awkward. If you want to recreate that config elsewhere you have to move the whole snapshot rather than just the recipe for building it
That wasn't the highlight of the point. It was that you can restore to a known good version of the operating system, effortlessly, regardless of what the operating system is. It could be Ubuntu, Nix, or FreeBSD. Broken OS, select an older snapshot in the boot loader, and you're golden again.
I think that IT departments are going to disallow their users that kind of freedom unless they have more information/control about just which configs their users are allowed to boot to. It has to be more descriptive/composable than a pile of bits and a timestamp or they won't go for it at all.
As much as I want to put all the power in the hands of the user, I'm sympathetic to the plight of the IT guy who has a team that always boots to "last known good" config thats several years old because they just don't trust updates in general.
Yes, if you use a trusted framework then you are safe from things until that framework is attacked. The xz backdoor might have been detected, but the xz backdoor wasn't crafted with the goal of working against the Nix ecosystem. When a nix core developer ends up being a spy or whatever then there will end up being an attack against the nix ecosystem. Don't reply to this with some claim that Nix is inherently secure unless you want me to track you down and make you admit you were wrong when Nix ends up getting successfully exploited in a year or two.
The standard never has been and never will be absolute security. That’s an impossible threshold that nothing would ever meet, even though it’s objectively true that software today is generally more secure than software 30 years ago. The steelman claim being made is “Nix is harder and more expensive to exploit than traditional build systems”. So sure, if you find a cheap way to exploit Nix, track me down. But until then, it remains at least plausible, and in practice very likely, that Nix is harder to exploit than alternate systems on a technical level.
The backdoor's build script specifically checked for things indicating that it was being built for Debian and, if not, didn't insert the backdoor; so it was only ever non-reproducible in situations where reproducibility wasn't expected anyway. It's not hard to make sure a backdoor with control over the build environment doesn't raise suspicions in non-targeted places.
The infected version was only the tarball, which was part of the obfuscation (i.e. people may look at git commits, but who individually checks autogenerated code in tarballs of every release)
Building from the git commit the release claimed to be from would result in a different binary than building from the tarball if the environment check passed.
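That source-vs-tarball difference is mechanically checkable, backdoor or not; a sketch of the comparison, with placeholder URLs and version numbers:

```shell
# Diff a release tarball against the git tag it claims to match.
curl -LO https://example.org/releases/xz-5.6.0.tar.gz
tar xf xz-5.6.0.tar.gz
git clone --depth 1 --branch v5.6.0 https://example.org/xz.git xz-git
diff -r --exclude=.git xz-git xz-5.6.0
# The autoconf output (configure, m4 macros, ...) shows up as
# "Only in xz-5.6.0" -- exactly the haystack the loader hid in.
```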
While NixOS goes a bit further with it, most other distributions also compile everything from source, cryptographically verify that the sources they use are not tampered with, and have versioned dependencies between packages. Debian also has reproducible builds.
The problem is just that the build systems did not strip pre-compiled object files before building from source. Even with that fixed, if nobody checks the source code then you can add all the backdoors you want, and there is nothing in NixOS or any other distro that would protect against that.
Excellent descriptive analysis; wrong, misleading title. Perhaps "technically correct," but at best in a "backdoored" sense.
It points out the need for build-manager tools that go a step beyond union filesystem layers and track, then enforce, that e.g. tests cannot pollute build artifacts. Take the causal trace graph of files affecting files in the build process, make that graph explicit, and then build a way to enforce it, or report on deviations from previous trace graphs.
In defense of the author: nobody reads your article if the name is boring (that is my experience at least), which it would've been if they titled it more accurately. That gives incentive to authors to use click-bait titles.
In defense of the bank robber: no clerk simply gives you money if you aren't threatening them (that is my experience at least), which it would have been if they acted like a respectable citizen. This gives people the incentive to become bank robbers.
Yeah it certainly would have made hiding the backdoor more difficult. But far from impossible. You can always hide backdoors in source code if you want, it just takes more effort to make a plausible bug, and probably has a higher chance of detection.
If Jia Tan's PR was approved, malicious artifacts could go to github releases just as easily as in a tarball. Struggling to understand the point made about github releases being a security mitigation.
> the release tarball being different than the source is
> the maintainer provided tarball was honestly generated from the original source code.
How, then? What about differing versions, etc. or has it been mentioned and I just missed it?
Just make sure the tarball can be generated from the source code itself: do not exclude anything; git add & commit everything. Can't we do that? We would still have to look at the commit history in this case, I believe, and again, he said it himself: it was harmless to the naked eye, so even then, how could we verify? Maybe I don't understand what he meant by verification, but if maintainer-provided tarballs are generated from the owner's source code and are not on GitHub (or anywhere else, just a git repo), that is a problem in itself.
Of course there was more to it than just pushing poisoned test files, but still. I do not see how Nix would have prevented it, if the git repo has those test files and with seemingly harmless code (and is reproducible).
Perhaps what we can do is: if an (in)famous project has changed its main lead, then pay closer attention to the commits and check who it is? I don't know, TBH.
Did I misunderstand the article, or am I missing something?
> To build xz from sources, we need autoconf to generate the configure script. But autoconf has a dependency on xz!
Both directions of this seem crazy to me.
1. Why the heck should a build configuration tool like autoconf be unable to function without a compression tool like xz? That makes no sense on its face.
2. For that matter, why the heck should xz, a tool that is supposedly so fundamental, have a hard dependency on a boilerplate generator like autoconf?
At the end of the day all autoconf is doing is telling you how to invoke your compiler. You ought to have a way to do that without the tool, even if it produces a suboptimal binary. If you care about security, instead of taking a giant tarball you don't understand and then running another tool in it, shouldn't you just generate that command line somehow (even in an untrusted fashion), review it, and then use that human-verified script to bootstrap?
And if you need a (de)compressor that low on the dependency tree so that literally the entire world might one day rest on it, surely you can isolate the actual computation for bootstrapping purposes and just expose it with just the open/read/write/close syscalls as dependencies? Why do you need all the bells and whistles?
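To illustrate the shape such a bootstrap core could take, here is a toy run-length codec whose core sees nothing but read/write callables (an invented example, not actual xz code):

```python
# The codec core touches no filesystem, no environment, nothing beyond
# the two callables it is handed -- the minimal surface the comment
# above argues a bootstrap (de)compressor should have.
import io

def rle_decode(read, write):
    """Decode a stream of (count, byte) pairs via the given callables."""
    while True:
        header = read(2)
        if len(header) < 2:
            return  # end of input
        count, byte = header[0], header[1:2]
        write(byte * count)

# Usage: feed it two callables; here backed by in-memory buffers.
src = io.BytesIO(bytes([3]) + b"a" + bytes([2]) + b"b")
dst = io.BytesIO()
rle_decode(src.read, dst.write)
print(dst.getvalue())
```

A core written against that interface can be wired to raw file descriptors for bootstrapping, or audited in isolation, without dragging a build system along.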
> Why the heck should a build configuration tool like autoconf be unable to function without a compression tool like xz? That makes no sense on its face.
At face value, both autoconf and its cousin pkg-config are overly complex dogshit software - both with circular dependencies - that should have been retired long ago in favor of something else. I scream with joy when I use software that uses its own bootstrapper or cmake.
Before you think "but I've never had this problem, you must be bonkers" - try building software on a fresh Solaris box with no GNU anything installed and you need to install one of these monstrosities with their circular dependencies. Your hair will fall out before you're done.
>> Why the heck should a build configuration tool like autoconf be unable to function without a compression tool like xz? That makes no sense on its face.
> At face value, both autoconf and its cousin pkg-config are overly complex dogshit software - both with circular dependencies - that should have been retired long ago in favor of something else. I scream with joy when I use software that uses its own bootstrapper or cmake. Before you think "but I've never had this problem, you must be bonkers" - try building software on a fresh Solaris box with no GNU anything installed and you need to install one of these monstrosities with their circular dependencies. Your hair will fall out before you're done.
I've used & seen plenty of the mess of autoconf, thank you. It's a hell I don't want to go back to, and it's a hell a lot of people successfully avoid. But even then, I've also never noticed it requiring compression or decompression, which is partly what boggled my mind at the statement.
In any case, the question was: why should autoconf have a hard dependency on xz? Your response to that was autoconf is complicated and has circular dependencies? How is that a response? That was the premise of the question, not the answer.
autoconf itself doesn't need xz, but in Nixpkgs xz is part of the stdenv, meaning essentially every package has a build-time dependency on xz.
For the case of xz, not using the upstream-generated configure would probably be doable with some effort, but doing the same for glibc, gcc, GNU make, etc. would be much more difficult.
I'm fairly sure xz _isn't_ a general dependency of autoconf etc. Some projects might use xz in their tests, but that's a general bootstrapping problem for xz, not autoconf.
(Autoconf is a pain and I would try to avoid it for new projects, but for detecting all kinds of crazy old Unices I'm not sure what is better.)
I think it's a vuln to think of OSS in terms of 'a community'. It's an abstract thought construct that does not represent reality (though it helps to make sense of it in a rather specific manner)
xz happened because of the absence of community.
It could happen inside this abstract thought of a community as well but here it did not.
xz targeted deb and rpm. The vast majority of what is facing the world.
Nix did not stop it.
I believe this article feeds the possible vuln rather than prevent it.
This article suggests that Nix could have prevented the xz backdoor, only to conclude that the backdoor could have been avoided by building from a git tag rather than a source tarball.
This is true for every distro, and it grates on me that Nix is even mentioned.
This kind of argument is like how Gentoo was supposedly better than anything else because everyone knows how every little piece is built, and here we are two decades later, still wasting nights compiling everything.
it's somehow immensely funny to me that some state probably had an entire project to land this backdoor in xz, spend literal years to make it happen. And then it was immediately detected and all effort was for nothing.
Or they have N other such projects in flight. The xz backdoor wasn’t that much work, just playing the long game. The person doing the xz project could easily do several other projects at the same time.
A lot of issues do go undetected. E.g. the Debian OpenSSL security accident was only detected after a gazillion servers had predictable SSH keys.
Yeah, without the latency regression, it probably would have gone undetected much longer. Using a secondary thread and spreading the CPU load over a few seconds would have made it not even register as a spike in CPU usage.
Or do cheap ECDSA instead of expensive RSA. Even if the backdoor is hidden inside RSA decryption and the rest of the system thinks the thing being decrypted should be encrypted with RSA, you don't have to use it for the back door.
Is the massive number of spam messages on this thread an attempt to suppress the article / discussion around it?
I've not seen this many from multiple but evidently related green accounts before. Given the implications about nation state actors in play, it's tempting to jump to conclusions here.
There are plenty of Nix users who are skeptical of such claims (e.g. me included).
I generally believe that NixOS is less secure than some other systems. No secure boot by default, no SELinux, too few maintainers for a huge package set, and relatively easy to gain commit access.
Commit scanning probably wouldn't have caught this, since the backdoor happened outside of any commit.
Comparing the tarball's contents against the VCS repository would've likely made this easier to catch, but at that point you might as well just use the VCS repository directly.
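A rough sketch of that tarball-vs-checkout comparison (the demo files are invented, but the extra `build-to-host.m4` mirrors how the real backdoor shipped only in the tarball, never in git):

```python
# Hash every file in the release tarball and in a checkout of the tag,
# then flag tarball-only or mismatched files. A real check would diff
# against `git archive` output for the signed tag.
import hashlib, io, os, tarfile, tempfile

def hash_dir(root):
    out = {}
    for dirpath, _, names in os.walk(root):
        for n in names:
            p = os.path.join(dirpath, n)
            with open(p, "rb") as f:
                out[os.path.relpath(p, root)] = hashlib.sha256(f.read()).hexdigest()
    return out

def hash_tar(path):
    out = {}
    with tarfile.open(path) as tf:
        for m in tf.getmembers():
            if m.isfile():
                out[m.name] = hashlib.sha256(tf.extractfile(m).read()).hexdigest()
    return out

# Demo: a "checkout" with one source file, and a "release tarball" that
# sneaks in an extra build-system file.
checkout = tempfile.mkdtemp()
with open(os.path.join(checkout, "lzma.c"), "w") as f:
    f.write("int x;\n")

tar_path = os.path.join(tempfile.mkdtemp(), "release.tar")
with tarfile.open(tar_path, "w") as tf:
    for name, data in [("lzma.c", b"int x;\n"), ("build-to-host.m4", b"evil\n")]:
        info = tarfile.TarInfo(name)
        info.size = len(data)
        tf.addfile(info, io.BytesIO(data))

repo, tarball = hash_dir(checkout), hash_tar(tar_path)
suspicious = {f for f in tarball if repo.get(f) != tarball[f]}
print(sorted(suspicious))
```

Which is exactly the point above: once you trust the repo enough to diff against it, you may as well build from it.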
Maybe, but good luck getting an LLM (one which does not include analysis of this particular attack in its training data) to spot this attack with a prompt that doesn't also create thousands of false positives when focused on the millions of non-malicious commits out there. I think we're decades away from them being that good.
Note that NixOS and reproducible builds did not detect the xz backdoor, and in fact NixOS shipped the malicious builds of xz (though they didn't do anything because the malware didn't target NixOS):
> I am a NixOS developer and I was surprised when the backdoor was revealed to see that the malicious version of xz had ended up being distributed to our users.
As always theory and reality are different, and the thing that made xz possible was never a technical vulnerability with a technical solution—xz was possible because of a meatspace exploit. We as a community are very very bad at recognizing that you can't always just patch meatspace with better software.
> NixOS and reproducible builds did not detect the xz backdoor
Nix declarativeness is quite useful to increase protection against exploits in a number of ways. Unfortunately, there is still a lot of untapped potential. My number one priority would be to implement fine-grained ephemeral containers. Guix has these already.
This would make it convenient to run every single process with restricted privileges, including no access to ~/, except those directories that are needed by the task. That would prevent e.g. a rogue pip package from stealing SSH keys.
Still, I think the xz backdoor did not work on NixOS because its unusual non FHS-compliant filesystem structure.
> I think the xz backdoor did not work on NixOS because its unusual non FHS-compliant filesystem structure.
Right, but this is not part of the security model, it's an incidental attribute of the OS that's there for other reasons and easily solved for if the attacker had prioritized it. The only reason why it didn't work is because the attacker didn't bother making it work on NixOS, not because he couldn't have if he'd wanted to.
[flagged]
[dead]
>Still, I think the xz backdoor did not work on NixOS because its unusual non FHS-compliant filesystem structure.
It didn't work on NixOS because the build-time hooks that inserted the backdoor only activated when they recognized that the build was for an RPM or Debian package.
Which could have been adjusted, of course.
You don’t even need to run in a container for this. It’s possible to do this entirely in systemd service configuration. The easiest way is just to have separate user for every service and reduce stuff running as root. You can also restrict filesystem access, network access and even syscall access (although some of this may be implemented as a container under the hood).
Unfortunately, this wouldn’t help with the xz vulnerability because the SSH server is the one loading the compromised library in that case (indirectly). Since SSH itself needs to have access to the private keys, it’s not really easy to secure it against vulnerabilities in the library it loads itself.
On the flip side, unless the vulnerability is in one of the important binaries/shared libraries, the amount of damage it can cause is probably quite contained if you simply have good user isolation. Nix can make this analysis really simple (because of explicitly specified dependencies), so you can crack down on critical dependencies a lot more easily.
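For concreteness, the kind of hardening meant here might look like this in a unit file (service and binary names invented; the directives are standard systemd ones):

```ini
# /etc/systemd/system/somedaemon.service (hypothetical)
[Service]
ExecStart=/usr/bin/somedaemon
# Run under a throwaway dynamic UID instead of root
DynamicUser=yes
# Hide /home, /root and /run/user from the service
ProtectHome=yes
# Mount /usr, /etc and most of the rest read-only
ProtectSystem=strict
PrivateTmp=yes
NoNewPrivileges=yes
# Only Unix and IP sockets
RestrictAddressFamilies=AF_UNIX AF_INET AF_INET6
# Allowlist of syscalls typical services need
SystemCallFilter=@system-service
```

`systemd-analyze security <unit>` gives a rough score of how much of this a given service actually opts into.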
> It’s possible to do this entirely in systemd service configuration
Sure, but I think that leaves out many use cases. What if I want to e.g. start a Python shell that has access to certain directories, and nothing else, including no network access?
Nix provides a good way of doing that for common use cases, as it has decent support for Firejail. But I would like something like Guix containers, which is convenient for any ad hoc use case. This greatly reduces any security threat. It's a poor-man's QubesOS.
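For reference, the Guix CLI being praised looks roughly like this (package names are only examples, and this obviously needs Guix installed, so take it as an illustrative sketch):

```shell
# Ephemeral container shell: $HOME and the rest of the filesystem are
# invisible, only the current directory is shared, no network:
guix shell --container python python-requests

# Same, but with network access and one extra directory exposed read-only:
guix shell --container --network --expose=/data python python-requests
```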
Yeah, I guess most existing Linux stuff that is actually configured (so not SELinux etc) is geared at system processes not user ones. Transparently running all user applications in properly isolated containers would be quite neat.
Does Firejail handle dynamic access (eg. I may want an xz invocation to work on my private keys, but not THIS specific one where I’ve given it a completely different file?).
I quite like pledge/unveil for this kind of thing on OpenBSD, although that’s for a different threat model.
Firejail and bwrap are setuid sandbox frontends. You can wrap e.g. a new xz invocation to let it work on your private keys.
But Nix relies on ephemeral shells and flakes, and they don't play so well with each other. The interface is clumsy. Guix, in contrast, has a pretty nice set of CLI switches for these features.
Even normal distros should prioritize some simple graphical UI for this. Running programs with minimal privileges would result in a significant enhancement of security. The kernel features for achieving this are already there.
> Firejail and bwrap are setuid sandbox frontends.
bwrap does not require SUID; it only needs it if user namespaces are disabled for unprivileged users.
[flagged]
Literally any container runtime can do this for you. No one does it though because it's annoying as hell to upfront figure out what you want, and then be unable to increase that list later.
Like if I were to try and find a not annoying way to do this, it would be to snapshot and overlay mount my filesystem at process launch time, then give a heuristic warning at process termination about what was changed.
But then we're still basically into SELinux territory, because of course there are upfront things we don't want to allow read access to, i.e. SSH keys.
(don't know what the lesson here is other than "for the love of god could we get a standardized secrets filesystem and/or API or something". Looking at you Hashicorp Vault and ~/.vault-token).
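The snapshot-and-report idea above can be sketched in a few lines (a pure-Python stand-in for an overlay mount; all paths invented):

```python
# Hash every file before launching the process, hash again afterwards,
# and report the delta as a heuristic warning.
import hashlib, os, tempfile

def snapshot(root):
    state = {}
    for dirpath, _, names in os.walk(root):
        for n in names:
            p = os.path.join(dirpath, n)
            with open(p, "rb") as f:
                state[p] = hashlib.sha256(f.read()).hexdigest()
    return state

root = tempfile.mkdtemp()
with open(os.path.join(root, "config"), "w") as f:
    f.write("a=1\n")

before = snapshot(root)
# ... launch the untrusted process here; simulated by touching a file:
with open(os.path.join(root, "authorized_keys"), "w") as f:
    f.write("ssh-ed25519 AAAA...\n")
after = snapshot(root)

created = sorted(set(after) - set(before))
modified = sorted(p for p in before if p in after and before[p] != after[p])
for p in created:
    print("created:", p)
for p in modified:
    print("modified:", p)
```

A real overlay-mount version would additionally let you discard the writes instead of just reporting them.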
Firejail and bwrap can do this in a relatively convenient way, but adding nix shell on top of that is quite clumsy.
In a regular distro, it is tolerable, but a better UI would be great.
[flagged]
sshd was compromised, and it doesn't access private keys. (well, probably the host key.)
[flagged]
> including no access to ~/
This is such a headache with snap and flatpak though.
If you're trying to do something that the package maintainer thought of in 5 minutes of testing then it's usually fine, but still inside any non-trivial application you'll often find parts that try to use extra privileges that aren't documented because they're not normally considered "privileges".
Some examples of sandboxing issues:
* FreeCAD doesn't have access to /usr which means when you try to make a Draft ShapeString you can't pick any fonts
* FreeCAD stores the path to the shape of a milling cutter inside your project, but that path is inside /mnt/.FreeCAhjhffg or whatever so it doesn't work after you restart the program
* Inkscape gcodetools has its own "idiomatic" way of saving out results which doesn't use a file dialog, and therefore can't save any gcode because it can't write to your home directory
* Matrix is only allowed to access ~/Downloads/ which means you can't share any files with anyone unless you copy them into Downloads first
* "Recent" in the Gimp file picker doesn't show any of my recent files, presumably because it is using its own sandboxed file dialog which doesn't have access to the actual "Recent" files
* Docker can't access /tmp/.X11-unix which means you can't give docker containers access to X
In all of these cases you can work around it of course (mainly by having an accurate mental model of the problem and guessing a path that the program is allowed to access), but the user experience is just made worse for no benefit.
The general theme is that the user wants the sandboxed program to be able to do something that the person who assigned privileges didn't think of.
So maybe if we must do sandboxing, let's make it easy for users to break programs out of the sandbox when it suits them?
This looks like a chicken and egg problem to me. You can't sandbox things properly because things don't specify their required privileges properly. Things don't specify their privileges properly because things aren't sandboxed so there's no need to think about that.
As a user, I quite like the iOS approach of apps "sharing" their resources with other apps or asking for permission to access this or that collection of resources. This can probably be improved and adapted to a non-touch model, but I think the concept is nice.
But, of course, apps have to be built for this kind of environment, so I think there will unfortunately be some janky transition period, with the customary competing, incompatible implementations.
Firejail sandboxing works well on these scenarios. You can trivially grant more or less privileges, including the removal of the entire sandbox if you wish.
It actually ships with rulesets for hundreds of programs that tend to be quite polished and work out of the box.
Personally, I dislike Flatpak because it doesn't let me control the dependencies of packaged software, and I feel we lose one of the most important advantages of Linux.
You raise many good points. One of the primary reasons I don't use nix or guix for everything is because it seems like there's too much magic and it's often too difficult for me to figure out how to modify a detail if a toggle wasn't explicitly provided for it. Even just getting insight into the chain of events to debug things was incredibly obtuse the last time I played with nix. Like cmake on steroids. At least cmake will spit out a multi-megabyte trace file for me to pick through. (I'm convinced cmake is an elaborate conspiracy to waste developer time.)
> let's make it easy for users to break programs out of the sandbox when it suits them?
At least for Flatpak given the things you described this is quite straightforward via bind mounts. Although it did seem a bit goofy having an entire list of per-application bind mounts in my fstab. Maybe things have improved since I last tried going that route?
> "idiomatic" ... doesn't use a file dialog,
That's an Inkscape (plugin?) bug plain and simple. GUI apps should be using the appropriate xdg portals for all supported operations at this point. The only excuse (IMHO) is for missing or broken functionality.
It would be like a system tray widget not working and blaming the DE instead of the program that fails to implement the decently old and widely adopted standard.
> Matrix ... you can't share any files with anyone
Which client is this? Anyway the xdg portal should work. Did the dev try it? Programs should never need blanket access to save or open specific user supplied paths as a one off. That's a large part of the point of sandboxing stuff in the first place.
> That's an Inkscape (plugin?) bug plain and simple. GUI apps should be using the appropriate xdg portals
We can state what they "should" be doing until we're blue in the face, but if they "aren't" doing that then it doesn't help.
> Which client is this?
The Matrix one is from a long time ago. It could have been Element? Or Riot? Were they the same thing? Don't know. I only used it briefly.
> if they "aren't" doing that then it doesn't help.
True enough. To be clear I don't fault anyone for not using new or known to be broken or little known standards.
However at some point responsibility has to shift to the developers. There are standard ways of doing things. Just as you can't expect an arbitrary project to support your pet API that hardly anyone uses, developers can't expect major distributions or the majority of users to cater to their refusal to conform to widely accepted standards.
I'm not saying you shouldn't use a particular program. Just that I don't think it's reasonable to fault the tooling for certain things.
I wonder how much it applies to Guix.
> This would make it convenient to run every single process with restricted privileges, including no access to ~/
Please no. I understand why Flatpaks do it, but this is one of the most ridiculously annoying things about the Flatpak sandbox. You can often only drag 'n drop from ~/Downloads/, and from any other location either causes the receiving application to glitch out, fail silently, or fail with a general error. Hell, you sometimes can't even copy-paste an image from one application to another via the copy-paste buffer! Meanwhile on macOS and Windows it works perfectly.
Why? I am just asking for a simple UI, which Guix already has. Mainly for CLI applications. The idea is to be able to launch an ephemeral shell with any combination of packages, filesystem R/W privileges, and network access in a convenient way.
I think launching e.g. a Python shell with some packages that are potentially compromised and letting those read ~/.ssh and whatever else they want is fundamentally insecure. Rogue PyPI packages that steal SSH keys is not a theoretical security breach, it already happened several times [1].
The current security model in Unix is untenable. But I agree well-implemented sandboxing should be frictionless. What you are experiencing is probably an X or Wayland sandboxing glitch. I also dislike Flatpak, for other reasons, but that doesn't make sandboxing a bad abstraction. It's just that we don't happen to like this particular implementation.
[1] https://www.packtpub.com/en-tw/learning/tech-news/python-lib...
A UI would be fine, but having no permission to access ~/ is a terrible default. It causes a lot of breakage and glitchiness because applications are simply not written with that limitation in mind. This is why Flatpaks often crap out. It would cause a huge number of applications/binaries to crap out if it became the default in NixOS, and lord knows NixOS has enough sharp edges already.
The best solution would be a framework akin to macOS that pops up 'allow application X to access folder Y from now?' in the UI or in the CLI as a terminal prompt whenever an application tries to access a folder. With a special permission for "full home access" and "full disk access".
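A toy model of that ask-on-first-access flow (everything here is invented; the prompt is stubbed to deny by default):

```python
# The gatekeeper remembers per-app, per-folder grants and "prompts" the
# user on first touch. A real version would sit behind the file-access
# API, not in application code.
import os

grants = {}  # (app, folder) -> bool

def prompt_user(app, folder):
    # Real implementation would pop a dialog or terminal prompt;
    # stubbed here as deny.
    return False

def check_access(app, path, allow_by_default=("Downloads",)):
    folder = os.path.basename(os.path.dirname(path))
    key = (app, folder)
    if key not in grants:  # only ask the first time
        grants[key] = folder in allow_by_default or prompt_user(app, folder)
    return grants[key]

print(check_access("someapp", "/home/me/Downloads/file.pdf"))  # allowed
print(check_access("someapp", "/home/me/.ssh/id_ed25519"))     # denied
```

The caching is what keeps this from degenerating into macOS-style prompt fatigue: one decision per app/folder pair, not per file.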
I’ll note that macOS doesn’t necessarily always let you do this the first time, it’ll pop a dialogue saying “hey, you’re cool with this app seeing (your files/other apps files), right?” I wonder if such a thing could be implemented in flatpak.
>You can often only drag 'n drop from ~/Downloads/
Drag and drop could be made to always work since it's being done by a user. Request this feature from your operating system's developer.
Then I have to ask for printing to always work. Then something else that I haven't even thought of yet. Security is hard because there are so many details that must be right or you compromise usability/usefulness.
Is it? There is such a thing as GUI automation. It's not a very popular exploit vector because it is visible, and because there are simpler non-GUI exploit vectors available. But nothing fundamentally stops an attacker process from pretending it's accessibility software and taking control of the mouse to do a drag-n-drop.
GUI automation is typically locked down heavily and would not be something you would give to random applications.
> pretending it's accessibility software
That should be a privileged status. If you manage to trick the user into installing malicious software followed by granting it elevated privileges then you likely didn't need such a roundabout method in the first place.
[flagged]
This is by design so you can't exfiltrate data. Nix is so advanced I feel like a caveman with my OpenRC Gentoo.
> This is by design
Hopefully one day people will finally accept that this is bad design and that humans require compromises on security
I mean I could also just not use computers, or the internet and that would be perfectly secure as well.
Breaking or making actively annoying expected and useful functionality isn't security (and has a long track record of leading to workarounds which compromise security).
Oh, i know. I was making a joke at Nix's expense, such that i fully expected to be downvoted for said joke, but i also expected some "nah-uh, that's not why;" and then a page of evangelism for NixOS, the one True™ OS.
The maintainer would just change the sandboxing constraints to weaken the software. Just like they did in the first place. You can try to make obfuscation difficult, but it’s always possible.
It's interesting to observe that every process is already restricted to only be able to do computation by default. Then along comes the OS with a plethora of holes in the sandbox to do various things. And then it's strange that we take those holes for granted and apply bandaid patches over them instead of not creating the holes to begin with. Why can't we ask the kernel to create a new extremely limited memory map and run some code until a certain software interrupt fires, then restore the previous context? Why should we have to start with a fully powered-up process and then close off its abilities, instead of starting with no capabilities except the in/out buffers and computation? In this model, there could be a deliberate backdoor in the computation and it still couldn't do anything besides DoS.
See also https://xkcd.com/2044
https://fuchsia.dev/fuchsia-src/get-started/learn/intro/sand...
I get the potential need for such draconian measures in perhaps, some top secret government installations or something, but gosh that sounds tiring -- a lot like MacOS lately asking me "(AppName) wants to access your Downloads folder, cancel or allow?" when I have just directed it to open a file.
Even if you only use trusted applications and they have stringent security policies avoiding supply chain compromises, RCEs are a fact of life. E.g. iMessage vulnerabilities are found all the time and there are probably a lot of vulnerabilities that are not reported because state actors hold on to them. This is the reason why iOS uses application sandboxing and on top of that Blastdoor for iMessage.
Maybe Linux isn't as affected now because it is not very popular as a desktop system. But this issue will have to be addressed as/when Linux becomes more popular. Having networked clients that do image parsing, etc. (usually in C code) without any sandboxing will just lead to mass exploitation, data exfiltration, etc.
The Linux desktop has to move away from the '90s security model where the internet was relatively safe and attackers would only be after UID 0.
> a lot like MacOS lately asking me "(AppName) wants to access your Downloads folder, cancel or allow?" when I have just directed it to open a file.
I don't think it asks that when it goes through a portal (e.g. file dialog)?
> Still, I think the xz backdoor did not work on NixOS because its unusual non FHS-compliant filesystem structure.
It didn't work on NixOS because the build-time check tested whether the build was being executed in a Debian or Fedora build environment. This was to avoid suspicious build failures on distros with weird toolchains or incompatible architectures/ABIs/library versions. (The backdoor was a precompiled .o file, so rather ABI-sensitive.)
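For the curious, public analyses of the injected script describe the gate roughly like this (paraphrased, not the verbatim code):

```shell
# Paraphrase of the environment check from published write-ups of the
# injected build script: only act inside deb/rpm packaging builds.
if test -f "$srcdir/debian/rules" || test "x$RPM_ARCH" = "xx86_64"; then
  echo "deb/rpm build detected: proceed"
else
  echo "no deb/rpm environment: stay dormant"
fi
```

On NixOS neither condition holds, so the script simply stayed dormant.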
Such sandboxes would only work well, if the whole OS would be built around supporting them.
You would essentially need Android or iOS for it to not be a pain in the ass
Nonetheless, in this year and age this should be the bare minimum from a security point of view.
The usual answer in Android is "you can't do that". The primary difference from my perspective is that developers for those platforms design with the limitations in mind. Stuff on linux often just breaks and requires involved workarounds if it wasn't intended by the developer to be stuffed into a flatpak. (And might not even compile under nix without half a dozen monkey patches to the build system, let alone run once built.)
I think developers of desktop application are generally open towards facilitating sandboxing though. Most applications use standard XDG folders for files, use standard toolkit file pickers, etc.
I don't have hard data, but my impression is that the general tendency in Flatpaks is that they are able to do more sandboxing over time. When Flatpak was new, a lot of applications pretty much required completely opening their sandboxes, the same applications have much more limited privileges nowadays.
It's a long process, but at least with desktop applications there is progress. Unfortunately, the same can't really be said about command-line tools and development tools (NPM, cargo, pip, editor plugins, etc.).
I'm not even sure what it would look like for CLI tools. Probably the sanboxing tools themselves need better controls and better UX for those controls.
For CLI a solution would probably look a lot like unshare or (a more user friendly version of) setcap. The user would need to reach out to the sandbox to communicate what additional things to permit during this specific session.
And then inevitably someone would configure the equivalent of passwordless sudo at which point I wonder what the point of the whole thing was to begin with. Related, we need a better paradigm for CLI to differentiate between user versus programmatically generated input. A program shouldn't be able to impersonate me unless I explicitly granted it some extremely unusual privileges.
> Such sandboxes would only work well, if the whole OS would be built around supporting them.
This. Which is why I'm using Qubes OS designed around sandboxing, and I can't recommend it enough.
>My number one priority would be to implement fine-grained ephemeral containers. Guix has these already.
Besides how large the package repos are, what other reasons would one chose Guix over NixOS?
A non-standard service daemon (GNU Shepherd) vs. systemd.
Otherwise, available packages are tidy and well defined.
NixPkgs, which I use and develop for, is less tidy.
[dead]
I still like the blogpost, because NixOS bills itself as a technical solution to prevent build artifacts that are decoupled from the source code (i.e. not reproducible), and the xz backdoor was hidden in build artifacts.
Yeah, it's a good blog post in part because it gets into the details of how it was possible that this vulnerability made it into NixOS, which purports to solve the problem.
Also, I'm not a NixOS critic either: I'm writing this from NixOS! I just don't think there's such a thing as a security cure-all as long as humans are in the loop anywhere.
Sure, but you could achieve the same thing by requiring that the build artifacts are generated by a Github Actions runner.
That would only make the xz attack harder, not make it impossible. Just add some step to the action run that fetches blobs from an internet resource you control, then swap out the blobs for malicious ones.
Right. The title seemed to be suggesting that the Nix way of doing things might have detected the backdoor. It's actually intending to suggest ways that Nix could be changed in order to detect the backdoor.
> As always theory and reality are different
Thank you very much for citing that! along with highlighting the fact that the exploit was in fact, not detected by reproducible builds prior to other means of discovery.
In recent times, actual reality is often maligned when compared to how people feel about objective reality and how it meshes with their individual value systems.
I have personal values too, but I don't hold the opinion that actual reality is less significant than how I feel about it. It's not a popular perspective 8-/
I've always liked to say: The difference between theory and reality is that, in theory they're the same, and in reality they're not.
I hope the realization that the reproducible builds of NixOS _could_ have detected the xz exploit, but didn't, will lead to new advances in the analysis of those reproducible builds to detect other exploits sooner in the future.
The reason is a bit funny: the NixOS bootstrap downloads its source code, which is an xz-compressed tarball.
[dead]
[dead]
[dead]
[flagged]
[flagged]
[flagged]
I feel the author is a bit tunnel-visioned by what happened to happen this time. The Jia Tan incident has a sample size of one; it'd be a bit short-sighted to think that's the only way it could happen. You can imagine various scenarios where the defenses suggested here would not have worked.
Also I (as a nix user myself) think it's unlikely NixOS would have caught it. As evidenced by the fact that it didn't. (Yeah I realize I just said next time it might happen differently but it'd be foolish to put faith in nix without evidence).
[flagged]
[flagged]
[flagged]
[flagged]
[dead]
NixOS is really irrelevant here because the xz backdoor specifically targeted RedHat and Debian. It's equally relevant to say the xz backdoor didn't affect Windows (ironically the backdoor was ultimately found by a Microsoft employee, an oft-overlooked detail).
a slightly improved version of NixOS (or Guix) would have automatically caught this backdoor once it reached their repos.
they are relevant.
A slightly improved version of any OS would've automatically caught it.
mechanism, or it didn't happen.
and if any OS, then for windoze please!
if ( file_checksum == xz_backdoor ) { red_alert(); }
There.
Of course this doesn't buy you anything if the exploit is changed to target a "patched" OS but the same goes for the proposed Nix solution.
how is that automatic if you need the checksum of the backdoor?!
automatic here means that code written prior to the existence/knowledge of the xz backdoor catches not only this specific attack, but the entire class of such tarball attacks.
The article says that distributions should get source code directly from the VCS (for instance, GitHub) rather than from the traditional installation tarball.
I don’t see what this solves though. Couldn’t a malicious maintainer simply add binary blobs directly to the source code repository?
The author suggests GitHub is trusted, as though GitHub validates code in some way. Which of course it does not.
What this solves is the problem that often "what is reviewed" is different from "the source code used to build the software".
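One way to check for that gap mechanically is to diff the release tarball against the VCS checkout it claims to correspond to. A rough sketch, assuming Python; the function name and the "strip one leading directory" convention are illustrative only, and real tooling would also need to handle symlinks, permissions, and files a release legitimately generates:

```python
import hashlib
import tarfile
from pathlib import Path

def tarball_vs_checkout(tarball: str, checkout: str) -> dict:
    """Report tarball files that are absent from, or differ from, the checkout."""
    suspicious = {}
    root = Path(checkout)
    with tarfile.open(tarball) as tar:
        for member in tar.getmembers():
            if not member.isfile():
                continue
            # Release tarballs usually nest everything under "pkg-1.2.3/".
            rel = Path(*Path(member.name).parts[1:])
            data = tar.extractfile(member).read()
            local = root / rel
            if not local.exists():
                # e.g. a generated ./configure that exists only in the tarball
                suspicious[str(rel)] = "only in tarball"
            elif hashlib.sha256(data).digest() != hashlib.sha256(local.read_bytes()).digest():
                suspicious[str(rel)] = "differs from VCS"
    return suspicious
```

Anything flagged "only in tarball" is exactly the kind of generated-at-release-time file the xz payload hid in; a reviewer at least gets a short list to inspect instead of a whole archive.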
Verified reproducible builds could have countered the xz utils break, SolarWinds Orion subversion, and many others. It's worth doing.
But would that have solved anything here? The main maintainer was overwhelmed. The backdoor was obfuscated inside a binary blob that was there for so-called testing purposes. I doubt anyone was reviewing the binary blobs or the autoconf code used to load it in, and for that matter it's not clear anything was getting reviewed. Fetching and building straight from GitHub doesn't solve that if the malicious actor simply puts the binary blob into the repo.
The odds might not be big, depending on the project in question, but it's still far more likely for someone randomly clicking through commits to find a backdoor committed to a git repo than one hidden in autogenerated text in a tarball. I click around random commits of random projects I'm interested in every now and then, at least. At the very least it changes the attack from "yeah, no one's discovering this code change" to "let's hope no random weirdo happens to click into this commit".
A binary blob by itself is harmless, you need something to copy it over from the build env to the final binary. So it's "safe" to ignore binary blobs that you're sure that the build system (which should all be human-written human-readable, and a small portion of the total code in sane projects) never touches.
That said, of course, there are still plenty of ways for this to go wrong: some projects commit autogenerated code; bootstrapping can bring in a much larger surface area of things that might copy the binary blob; and more.
> At the very least it changes the attack from "yeah, no one's discovering this code change" to "let's hope no random weirdo happens to click into this commit".
There's also value in leaving a trail to make auditing easier in the event that an attack is noticed or even if there is merely suspicion that something might be wrong. More visibility into internal processes and easier UX to sort through the details can easily make the difference between discovery versus overlooking an exploit.
Your comment doesn't quite make sense: building from source lets you (and everyone else) inspect the source, while a provided tarball will inherently differ from the source if you compare them, since the autoconf process changes the files.
If you're downloading and executing a binary from GitHub releases, then you're completely at the mercy of the maintainer (Nix only does that with closed-source packages).
So the argument hinges on the fact that the XZ maintainer hid malicious code in the tarballs that were not checked into Git.
The author demonstrates that Nix can be configured to generate the tarballs from git that go into building the binaries.
What I don't see, however, is how is this a feature that requires Nix or NixOS?
Any build system out there (including the stuff that goes into RPMs and Debs) can be configured to generate tarballs as an intermediate step.
In fact making reproducible builds is a major thing that Debian has been working on for some time now.
https://wiki.debian.org/ReproducibleBuilds
The article does in fact cite the reproducible-builds project, in the section on "Leveraging bitwise reproducibility". From your comment I am not convinced you understood the point of the article, which is:
* the NixOS build process was unable to perform a full-source build of xz because xz is required too early in the bootstrap;
* it proposes an adjustment to nixpkgs to automatically detect compromises of nixpkgs dependencies that are required early in the bootstrap.
Other ecosystems can of course also attempt full-source builds and discover the discrepancy; the entire point of the article is that nixpkgs currently cannot.
I see.
[dead]
[flagged]
[dead]
[dead]
[flagged]
If we want to focus on a thing that NixOS could have prevented, we should focus on the CrowdStrike incident. Being able to boot to yesterday's config because today's config isn't working would've mitigated most of the problems.
Except that's a Windows thing where you don't have boot flexibility.
Ubuntu on ZFS can do this as well.
My point is that the lack of boot flexibility caused a lot of problems. If we want to be able to rely on people to get the job done even on days when something is wonky in the bits, then we should give them boot flexibility. NixOS just happens to do it especially well.
As for ZFS... Dealing in filesystem snapshots is comparatively a bit awkward. If you want to recreate that config elsewhere you have to move the whole snapshot rather than just the recipe for building it, and even then it'll break if the system architecture is different on the target machine. If you've got two of them (perhaps labeled "good" and "bad"), you're not going to get anything friendly when you try to diff them, nor is there an obvious way to use things like `git bisect` to reason about where the problem occurred.
None of these things are show stoppers, but working with code that defines some state is just so much easier than anything you get out of a filesystem which happens to remember that state, but can't tell you why it should be the way it is.
> As for ZFS... Dealing in filesystem snapshots is comparatively a bit awkward. If you want to recreate that config elsewhere you have to move the whole snapshot rather than just the recipe for building it
That wasn't the main point. The point was that you can restore a known-good version of the operating system, effortlessly, regardless of what the operating system is. It could be Ubuntu, Nix, or FreeBSD. Broken OS? Select an older snapshot in the boot loader, and you're golden again.
I think that IT departments are going to disallow their users that kind of freedom unless they have more information/control about just which configs their users are allowed to boot to. It has to be more descriptive/composable than a pile of bits and a timestamp or they won't go for it at all.
As much as I want to put all the power in the hands of the user, I'm sympathetic to the plight of the IT guy whose team always boots to a "last known good" config that's several years old because they just don't trust updates in general.
Yes, if you use a trusted framework then you are safe from things until that framework is attacked. The xz backdoor might have been detected, but the xz backdoor wasn't crafted with the goal of working against the Nix ecosystem. When a nix core developer ends up being a spy or whatever then there will end up being an attack against the nix ecosystem. Don't reply to this with some claim that Nix is inherently secure unless you want me to track you down and make you admit you were wrong when Nix ends up getting successfully exploited in a year or two.
The standard never has been and never will be absolute security. That's an impossible threshold nothing would ever meet, even though it's objectively true that software today is generally more secure than software 30 years ago. The steelman claim being made is "Nix is harder and more expensive to exploit than traditional build systems". So sure, if you find a cheap way to exploit Nix, track me down. But until then, it remains at least plausible & in practice very likely that Nix is harder to exploit than alternate systems on a technical level.
> But until then, it remains at least plausible & in practice very likely that Nix is harder to exploit than alternate systems on a technical level.
Do people even read package derivations? Feels like it'd be easy to check in a derivation with an exploit.
The backdoor was not targeting Nix, but to avoid being exposed it still had to not raise any suspicion during a build on Nix.
The backdoor build script specifically checked for things indicating that it was being built for Debian, and if not, it did not insert the backdoor; so it was only ever non-reproducible in situations where reproducibility wasn't expected. It's not hard to make sure a backdoor with control over the build environment doesn't raise suspicions in non-targeted places.
Also, wasn't the backdoor reproducible to begin with? If it had targeted every system, you'd just get reproducible backdoored binaries.
The infected version was only the tarball, which was part of the obfuscation (i.e., people may look at git commits, but who individually checks the autogenerated code in the tarball of every release?).
Building from the git commit the release claimed to be from would result in a different binary than building from the tarball if the environment check passed.
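In other words, a bitwise comparison of the two builds is the detector: build once from the tagged commit and once from the tarball, then compare digests. A trivial sketch (the helper names and file paths are hypothetical):

```python
import hashlib

def artifact_digest(path: str) -> str:
    """SHA-256 of a build artifact, streamed so large binaries are fine."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 16), b""):
            h.update(chunk)
    return h.hexdigest()

def same_build(artifact_a: str, artifact_b: str) -> bool:
    """True only when the two artifacts are bitwise identical."""
    return artifact_digest(artifact_a) == artifact_digest(artifact_b)
```

On a Debian-targeted build host this check would have failed between the tarball-built and git-built liblzma; everywhere else the backdoor stayed dormant and the digests would match, which is the point being made above.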
While NixOS goes a bit further with it, most other distributions also compile everything from source, cryptographically verify that the sources they use have not been tampered with, and have versioned dependencies between packages. Debian also has reproducible builds.
The problem is just that the build systems did not strip pre-compiled object files before building from source. Even with that fixed, if nobody checks the source code then you can add all the backdoors you want, and there is nothing in NixOS or any other distro that would protect against that.
Excellent descriptive analysis. Wrong, misleading title, perhaps "technically correct," but at best with a "backdoored" meaning.
It points out the need and use for build-manager tools that go a step beyond union file system layers, and track and then enforce that e.g. tests cannot pollute build artifacts. Take a causal trace graph of files affecting files in the build process, make that trace graph explicit, and then build a way to enforce that graph, or report on deviations from previous trace graphs.
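The "report on deviations" half is mechanically simple; assuming some tracer has already captured each build as a mapping from a file to the set of files it influenced, a sketch of the diff might look like:

```python
def graph_deviations(previous: dict, current: dict):
    """Diff two build trace graphs (file -> set of files it influenced).
    Returns (added_edges, removed_edges) between the two builds."""
    added, removed = [], []
    for src in sorted(set(previous) | set(current)):
        before = previous.get(src, set())
        after = current.get(src, set())
        added += [(src, dst) for dst in sorted(after - before)]
        removed += [(src, dst) for dst in sorted(before - after)]
    return added, removed
```

In the xz case, a test fixture suddenly growing an edge into the library's build artifacts is precisely the deviation such a report would surface. (Capturing the trace graph in the first place is the hard part, of course.)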
In defense of the author: nobody reads your article if the title is boring (that is my experience at least), which it would've been if they had titled it more accurately. That gives authors an incentive to use clickbait titles.
In defense of the bank robber: no clerk simply gives you money if you aren't threatening them (that is my experience at least), which it would have been if they acted like a respectable citizen. This gives people the incentive to become bank robbers.
First of all: an exaggerated title is in no way comparable to threatening someone's life.
Secondly: your comparison does not even make sense: "which it would have been" what?? Try harder next time.
Yeah it certainly would have made hiding the backdoor more difficult. But far from impossible. You can always hide backdoors in source code if you want, it just takes more effort to make a plausible bug, and probably has a higher chance of detection.
If Jia Tan's PR had been approved, malicious artifacts could have gone into GitHub releases just as easily as into a tarball. I'm struggling to understand the point made about GitHub releases being a security mitigation.
When I did my first github "release" I was shocked that it involved picking files to upload...
And how many of these files have I downloaded over the years? :sob:
The real answer here is downloading a tarball generated from a tag - those are generated by the GitHub backend.
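That only helps if the generation is deterministic, so anyone can regenerate the tag's archive byte-for-byte and compare. A minimal sketch of what "deterministic" means here, assuming Python and pinning the metadata that normally varies between runs (real tools like `git archive` handle this more carefully):

```python
import io
import tarfile
from pathlib import Path

def deterministic_tarball(src_dir: str) -> bytes:
    """Build a tar archive whose bytes depend only on file names and contents:
    entries are sorted, and mtime/uid/gid/uname/gname/mode are fixed."""
    buf = io.BytesIO()
    root = Path(src_dir)
    with tarfile.open(fileobj=buf, mode="w") as tar:
        for path in sorted(p for p in root.rglob("*") if p.is_file()):
            info = tarfile.TarInfo(name=str(path.relative_to(root)))
            data = path.read_bytes()
            info.size = len(data)
            info.mtime = 0            # fixed timestamp
            info.uid = info.gid = 0   # fixed ownership
            info.uname = info.gname = ""
            info.mode = 0o644
            tar.addfile(info, io.BytesIO(data))
    return buf.getvalue()
```

With that property, two independent parties regenerating the archive from the same tag get identical bytes, so a maintainer-supplied tarball that differs is immediately suspect.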
"Could have" means unproven here, and actually... They shipped it
?
the article demonstrates how to improve NixOS (or Guix) to automatically catch any such discrepancy in the future.
> the release tarball being different than the source is
> the maintainer provided tarball was honestly generated from the original source code.
How, then? What about differing versions, etc. or has it been mentioned and I just missed it?
Just make sure the generated tarball can be generated from the source code itself: do not exclude anything; git add and commit everything. Can't we do that? We would still have to look at the commit history in this case, I believe, and again, he said it himself: it was harmless to the naked eye, so even then, how could we verify? Maybe I don't understand what he meant by verification, but if maintainer-provided tarballs are generated from the owner's source code and are not on GitHub (or anywhere else, just in a git repo), that is a problem in itself.
Of course there was more to it than just pushing poisoned test files, but still. I do not see how Nix would have prevented it, if the git repo has those test files and with seemingly harmless code (and is reproducible).
Perhaps what we can do is: if an (in)famous project has changed its main lead, then pay closer attention to the commits and check who it is? I don't know, TBH.
Did I misunderstand the article, or am I missing something?
Why is nobody questioning this:
> To build xz from sources, we need autoconf to generate the configure script. But autoconf has a dependency on xz!
Both directions of this seem crazy to me.
1. Why the heck should a build configuration tool like autoconf be unable to function without a compression tool like xz? That makes no sense on its face.
2. For that matter, why the heck should xz, a tool that is supposedly so fundamental, have a hard dependency on a boilerplate generator like autoconf?
At the end of the day, all autoconf is doing is telling you how to invoke your compiler. You ought to have a way to do that without the tool, even if it produces a suboptimal binary. If you care about security, instead of taking a giant tarball you don't understand and then running another tool in it, shouldn't you just generate that command line somehow (even in an untrusted fashion), review it, and then use that human-verified script to bootstrap?
And if you need a (de)compressor that low in the dependency tree, such that literally the entire world might one day rest on it, surely you can isolate the actual computation for bootstrapping purposes and expose it with just the open/read/write/close syscalls as dependencies? Why do you need all the bells and whistles?
> Why the heck should a build configuration tool like autoconf be unable to function without a compression tool like xz? That makes no sense on its face.
At face value, both autoconf and its cousin pkg-config are overly complex dogshit software - both with circular dependencies - that should have been retired long ago in favor of something else. I scream with joy when I use software that uses its own bootstrapper or cmake.
Before you think "but I've never had this problem, you must be bonkers" - try building software on a fresh Solaris box with no GNU anything installed and you need to install one of these monstrosities with their circular dependencies. Your hair will fall out before you're done.
>> Why the heck should a build configuration tool like autoconf be unable to function without a compression tool like xz? That makes no sense on its face.
> At face value, both autoconf and its cousin pkg-config are overly complex dogshit software - both with circular dependencies - that should have been retired long ago in favor of something else. I scream with joy when I use software that uses its own bootstrapper or cmake. Before you think "but I've never had this problem, you must be bonkers" - try building software on a fresh Solaris box with no GNU anything installed and you need to install one of these monstrosities with their circular dependencies. Your hair will fall out before you're done.
I've used & seen plenty of the mess of autoconf, thank you. It's a hell I don't want to go back to, and it's a hell a lot of people successfully avoid. But even then, I've also never noticed it requiring compression or decompression, which is partly what boggled my mind at the statement.
In any case, the question was: why should autoconf have a hard dependency on xz? Your response to that was autoconf is complicated and has circular dependencies? How is that a response? That was the premise of the question, not the answer.
Because its dependencies are distributed in a compressed tarball.
autoconf itself doesn't need xz, but in Nixpkgs xz is part of the stdenv, meaning essentially every package has a build-time dependency on xz.
For the case of xz, not using the upstream-generated configure would probably be doable with some effort, but doing the same for glibc, gcc, gnumake, etc. would be much more difficult.
I'm fairly sure xz _isn't_ a general dependency of autoconf etc. Some projects might use xz in their tests, but that's a general bootstrapping problem for xz, not autoconf.
(Autoconf is a pain and I would try to avoid for new projects, but for detecting all kinds of crazy old unices I'm not sure what is better)
Autoconf releases are distributed as .tar.xz archives.
If you know of a reason for autoconf and xz to be hard dependencies of one another, you can just say it.
I think it's a vuln to think of OSS in terms of 'a community'. It's an abstract thought construct that does not represent reality (though it helps to make sense of it in a rather specific manner). xz happened because of the absence of community. It could happen inside this abstract notion of community as well, but here it did not.
xz targeted deb and rpm. The vast majority of what is facing the world.
Nix did not stop it.
I believe this article feeds the possible vuln rather than prevent it.
could have / should have => being smart retrospectively
This article suggests that Nix could have prevented the xz backdoor, only to conclude that the backdoor could have been avoided by building from a git tag rather than from a source tarball.
This is true for every distro and it grinds me that Nix is even mentioned.
What's with all the baseless claims about NixOS lately? Is Sam Altman invested in it or something?
This kind of argument is like how Gentoo used to be better than anything else because everyone knew how every little piece was built; and here we are two decades later, and who is still wasting nights compiling everything?
https://news.ycombinator.com/item?id=41486565 -> possible targets due to high impact and few maintainers
Learned a lot reading this article!
it's somehow immensely funny to me that some state probably had an entire project to land this backdoor in xz and spent literal years to make it happen. And then it was immediately detected and all that effort was for nothing.
Or they have N other such projects in flight. The xz backdoor wasn’t that much work, just playing the long game. The person doing the xz project could easily do several other projects at the same time.
A lot of issues do go undetected. E.g. the Debian OpenSSL security accident was only detected when a gazillion servers had predictable SSH keys.
AFAIK Debian OpenSSL was detected when two people had the same GitHub key, and since GitHub identifies people by their key, it conflicted.
Most likely yes. That doesn't really impact the funny factor for me.
Yeah, without the latency regression, it probably would have gone undetected much longer. Using a secondary thread and spreading the CPU load over a few seconds would have made it not even register as a spike in CPU usage.
Or do cheap ECDSA instead of expensive RSA. Even if the backdoor is hidden inside RSA decryption and the rest of the system thinks the thing being decrypted should be encrypted with RSA, you don't have to use it for the back door.
Is the massive number of spam messages on this thread an attempt to suppress the article / discussion around it?
I've not seen this many from multiple but evidently related green accounts before. Given the implications about nation state actors in play, it's tempting to jump to conclusions here.
It's multiple threads, not just this one, and it's been on and off the past few days.
You can add these kinds of lines:
to any uBlock or AdBlockPlus type extension with compatible manual custom filters that you might have added to your browser.
> It's quite amazing that these new "green" accounts can post as many messages as they want while I'm being restricted
That's not what is happening .. look at the account names and the number of comments made by each.
Still leaves me curious about the "why". Someone mad at HN about something? Just having a laugh?
No, this has been happening for a few days now. Browse threads from the past week with showdead enabled and you'll see them.
> showdead enabled
Thanks, I didn't know this was something we could turn off. Although it does feel like they could just default to collapsed too.
I don't like how this site causes my headphones to crackle...
there's a disappointing amount of firm conviction here by people who are clueless about NixOS and/or the xz attack...
Still better than vague rants that add nothing concrete to the discussion.
There are plenty of Nix users who are skeptical of such claims (e.g. me included).
I generally believe that NixOS is less secure than some other systems. No secure boot by default, no SELinux, too few maintainers for a huge package set, and relatively easy to gain commit access.
But it does have a lot of other benefits.
llm commit scanning might be an interesting approach to the oss supply chain security problem.
Commit scanning probably wouldn't have caught this, since the backdoor happened outside of any commit.
Comparing the tarball's contents against the VCS repository would've likely made this easier to catch, but at that point you might as well just use the VCS repository directly.
i wonder if an llm could have spotted the malicious patches to autotools in the dist tarball...
a public, deterministic and assured build facility for oss would be cool. also maybe a deprecation of autotools.
Maybe, but good luck getting an LLM (one which does not include analysis of this particular attack in its training data) to spot this attack with a prompt that doesn't also create thousands of false positives when focused on the millions of non-malicious commits out there. I think we're decades away from them being that good.