Woah, very neat! I may have to add this to the examples in Ansible for DevOps. Great idea, looks like for many cases it will help move hand config into automation.
Here's a video of JinjaTurtle, the companion tool that converts configs to Jinja2 templates and Ansible vars:
https://asciinema.org/a/765293
Enroll will automatically make use of jinjaturtle if it's on the $PATH, to generate templates.
This looks like a great way to learn Ansible too. Instead of learning alongside random examples, you can setup your server and see how it would look like in Ansible.
Awesome stuff!
My thoughts exactly. As someone who has generally learned better and faster through labs or real world work, this is exactly how I intend to teach myself Ansible while also migrating some stuff to containers: throw at my current VMs, identify configs, and then migrate or enroll accordingly.
I have quite a few machines that were constructed using Ansible ... When I get a chance, I'll reverse then and compare the results to the IaC that created them
That's a really cute-looking tool. I ran it without installing via:
It generated almost a thousand roles, and at a quick glance it identified many changes which I expected and some that I didn't.
Yup - it can be pretty overwhelming; it depends on what it detected on your system! The state.json will usually explain why it 'harvested' something (perhaps it was because it found a running systemd service, perhaps it was due to detecting a package having been manually installed, etc)
There is the --exclude option which might help (also keep in mind you can define an enroll.ini file to manage the flags so it's less cumbersome). Otherwise, you can always prune the roles from the ansible dir/playbook.
I'm going to continue to work on easy options to skip stuff. In particular I do think many of the 'lib' packages could be omitted if they are just dependencies of other packages already detected as part of the harvest. (Need to basically build a dependency graph)
Thanks for trying it out!
I've just run it against my desktop PC.
I had documented everything up to a point on this beastie, and then things got out of hand. I now have all the changes from after I went off-piste.
What a great tool.
Thank you.
Can you create a baseline system to create the ignores?
What I mean is in some large companies you are given a host that already has lots of config changes, possibly by ansible. Then your team has to configure on top of those changes, maybe ansible again. I'd like to run on the baseline system given to create a baseline, then on a production host to see how it drifted.
Sorry if this is in the docs, cool tool!
An incredible undertaking! How much testing have you done with regards to harvesting a manual configuration into Ansible, creating a new machine and then applying that to see whether the machine is a functional representation of the old machine?
The reason I'm asking is because I'm interested in how much confidence could be lent to this tool with regards to more old and obscure machines that have been running for years.
I've been looking for something like this, awesome!
Is it expected that it does not allocate a TTY for sudo password prompts when connecting to a remote machine via SSH? How would I use it otherwise?
This makes me think of the now defunct https://github.com/SUSE/machinery
Indeed! I'm showing my age, but I do remember using this with Puppet and it was one of my inspirations :D (no commits in nearly 13 years, ouch) https://github.com/devstructure/blueprint
Yes! I always thought that was a very clever project, and was sad when it ceased development. Very excited to try this out, and glad to have stayed on Debian all these years.
I wonder if Nix has similar tools, as it is famous for declarative system management, which is quite suitable for server provisioning.
The other comment already answers part of it: there is no real need for this on a NixOS system, as you can usually either consult the store on the machine (and recursively build a graph of all transitive dependencies of a generation), or have a system that stores the config along with the generation (the `system.copySystemConfiguration` option, or a flake-based system, will store the config in the store itself).
A system that has neither a store nor the config (e.g. a container image) is not easily reconstructable, as you're missing too much metadata.
It's hard with Nix to end up with a system without first having a config for that system.
Bravo, I will play with it. I haven't played with Ansible till now, but I know that it's related to automation.
If something like this tool can make Ansible easier for me to try out while being pragmatic, I will give it a try someday. Thank you!
How accurate does this tool end up being, though? Like, can I just run a bunch of commands to set up a server and then use this with Ansible?
Would this end up being a good use for it, or would I just re-invent something similar to cloud-init at the wrong abstraction level? (In all fairness, one thing troubling me about cloud-init is that I would probably need a list of all the commands I want to run and all the changes I make, which can get messy if the shell history has gaps or if you end up writing files by hand, etc.)
I haven't played that much with either cloud-init or Ansible, but I am super interested to learn more about Enroll and the others, as I found it really cool!
Great questions! OP here, let me answer them below:
> How accurate does this tool end up being, though? Like, can I just run a bunch of commands to set up a server and then use this with Ansible?
Yes, exactly: let's say you provision a VPS and then install some packages on it, tweak some configs, create a crontab, create a user account. Running 'enroll harvest' on it will detect all of that, and 'enroll manifest' will then convert that 'harvest' into Ansible roles/playbooks.
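For example, if you can run it directly on that VPS, a single-shot run along the lines of the quick start (paths are just illustrative) does both steps in one go:

  uvx enroll single-shot --harvest ./harvest --out ./ansible

i.e. the harvest data lands in ./harvest and the generated roles/playbooks land in ./ansible.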
> Would this end up being a good use for it, or would I just re-invent something similar to cloud-init at the wrong abstraction level? (In all fairness, one thing troubling me about cloud-init is that I would probably need a list of all the commands I want to run and all the changes I make, which can get messy if the shell history has gaps or if you end up writing files by hand, etc.)
Yeah, your instinct is right on the latter point. Ansible and Cloud-init are similar in that they are both 'declarative' systems to say what should exist on the machine. Ansible has some advantages in that it compares with the current setup to see if it needs to change anything. Cloud-init (in my experience) is a bit more crude: 'just run this stuff the first time the machine is booted'.
I'm sure there are different ways of using it, but in my experience, cloud-init is really designed to 'run once' (first-time setup). For example, if you provision a machine with Terraform or OpenTofu and pass in cloud-init, then later if you change the cloud-init data, it wants to destroy the machine and rebuild it (unless you explicitly configure it not to, by telling it to 'ignore' changes to the cloud-init data).
Whereas with Ansible, you're at least equipped with a solid foundation that you can extend over time - you'll no doubt eventually need to make changes to your server post the initial setup.
If you're new to Ansible, Enroll will be a quick way to get up and running with working Ansible configuration and you can adapt it from there as you learn it.
Admittedly, to satisfy a lot of corner cases (or support different Linux distros), the Ansible code that Enroll generates is a bit more complex to read than a 'bespoke' home-grown playbook. On the other hand, it's perfectly correct code, and you'd be immediately exposed to good-practice syntax.
Let me know if you get to try it!
Very cool.
I just saved the state of my WSL2 instance and pushed it to GitHub. Amazingly simple.
FWIW, I was required to add the --harvest flag, which your quick start seems to be missing?
i.e. I used:
uvx enroll single-shot --harvest ./harvest --out ./ansible
Whoops, thanks, I'll adjust that example!
Indeed, when using single-shot, unless you're using the --remote modes (in which case the harvest is pulled down to a machine-generated path locally), you need to supply the path to the harvest so that the 'manifest' part under the hood knows what to use.
(By contrast, if you are using just the 'enroll harvest' command by itself, and omit the --out option, it will by default store the harvest in a random directory in ~/.cache/enroll/harvest/xxxxxxx)
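In other words, roughly:

  enroll harvest --out ./harvest    # harvest lands exactly where you asked
  enroll harvest                    # harvest lands under ~/.cache/enroll/harvest/<random>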
Thanks for trying it out!
This is a great idea. I have done this manually, and it was a lot of work.
Even with a tool, people will still have to understand the output, enough that they can spot situations like "this part doesn't make sense at all", "that bit isn't static", "holy crud, there's an unsecured secret", "this part suggests a dependency on this other server we didn't know was involved, and which the tool doesn't investigate".
I agree! It's always a 'best effort' tool. There's going to be corner cases where something that might end up in the 'logrotate' role could arguably be better placed in a more specific app's role.
It does an okay job at this sort of thing, but definitely human eyes are needed :)
Genuinely the thing I've been dreaming about for a while. Nice work.
This is a fantastic idea. I can imagine using this to pull in any manual changes I might have made to the server because I’m not the most disciplined person.
Haha, same! I ran it on a server I've been shepherding along since 2008 and wow, it was insightful, there were even cron jobs it found that I had forgotten about :)
If you are using a Debian-like or Fedora-like workstation, it's also really useful to 'ansibilize' your desktop OS in case you need to reinstall :)
I’ve been looking for something exactly like this, thank you! Now I just need to find the same thing for Windows and macOS…
Really cool! I’m in a similar situation and I’m going to try this tool out right away to see if it can speed up Ansible onboarding for some particularly crusty old servers. Thanks for sharing!
Very cool idea, and kudos for building it and making the idea a reality.
Wonderful! I wish this tool had existed a few years ago, when I had no experience with Ansible. Anyway, I will try it and compare the outcome of Enroll with my current playbooks.
Can it do Oracle? That would be a gamechanger.
Does the playbook generation have support for some totally custom/one-off application? (Eg, not just system/well-known stuff). If so, that would be insane!
It does! There are several sorts of 'catch-alls' in place:
1) stuff in /etc that doesn't belong to any obvious package ends up in an 'etc_custom' role
2) stuff in /usr/local ends up in a 'usr_local_custom' role
3) Anything you include with --include will end up in a special 'extra_paths' role.
Here's a demo (which is good: it helped me spot a small bug where the role is included twice in the playbook :) I'll get that fixed!): https://asciinema.org/a/765385
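To give a rough idea of the shape, a generated playbook pulling in those catch-all roles looks conceptually like this (a hand-written illustration using the role names above, not Enroll's literal output):

  - hosts: myserver
    become: true
    roles:
      - etc_custom
      - usr_local_custom
      - extra_paths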
Thanks for your interest!
Nuts, I'm going to have to try this out. Pretty sure nothing like this exists, at least not for Ansible (?). This would certainly help convert Chef cookbooks (we have a ton of custom applications along with system stuff, of course) to Ansible (I guess it's not really converting in this scenario, just scanning the host(s), super neat!). We are still using Chef, and use Ansible for one-off jobs/playbooks/remediations etc, but would like to pivot to Ansible for config mgmt of everything for deployments at some point. This definitely looks useful in that effort.
Could it also detect changed package files; if there are per-package-file checksums like with `debsums` and `rpm -V`?
Does it check extended filesystem labels with e.g. getfacl for SELinux support?
I've also done this more than a few times and not written a tool.
At least once I've scripted something better than regex to convert a configuration file to a Jinja2-templated configuration file (from the current package's default commented config file with the latest options). And then the need is to diff: non-executable and executable guidelines, the package default config (on each platform), and our config.
Sometimes it's better not to re-specify a default config param and value, but only if the defaults are sane on every platform. Cipher lists, for example.
P2V (physical to virtual) workflows don't result in auditable system policy like this.
Most of the OS and Userspace packages backed up in full system images (as with typical P2V workflows) are exploitably out of date in weeks or months.
To do immutable upgrades with rollback, rpm-ostree distros install the RPM packages atop the latest signed immutable rootfs image, and then layer /etc on top (and mounts in /var, which hosts flatpaks and /var/home). It keeps a list of packages to reinstall and it does a smart merge of /etc. Unfortunately etckeeper (which auto-git-commits /etc before and after package upgrades) doesn't yet work with rpm-ostree distros.
Ansible does not yet work with rpm-ostree distros. IIRC the primary challenge is that Ansible wants to run each `dnf install` individually, and that takes forever with rpm-ostree. Installing one long list of packages may or may not be the same as installing multiple groups of packages in the same sequence: it should be equivalent if the package install and post-install scripts are idempotent, but it isn't if e.g. `useradd` is called multiple times without an explicit UID in package scripts (which also run as root).
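On plain dnf (non-ostree) systems, the usual way to avoid one transaction per package is to pass the whole list to a single task rather than looping; a minimal sketch, with a made-up package list variable:

  # one transaction per package (slow)
  - name: Install packages one at a time
    ansible.builtin.dnf:
      name: "{{ item }}"
      state: present
    loop: "{{ my_packages }}"

  # one transaction for the whole list
  - name: Install packages in a single transaction
    ansible.builtin.dnf:
      name: "{{ my_packages }}"
      state: present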
I wrote a PR to get structured output (JSON) from `dnf history`, but it was for dnf4.
From https://news.ycombinator.com/item?id=43617363 :
> upgrading the layered firefox RPM without a reboot requires -A/--apply-live (which runs twice) and upgrading the firefox flatpak doesn't require a reboot, but SELinux policies don't apply to flatpaks which run unconfined FWIU.
Does it log a list of running processes and their contexts; with `ps -Z`?
There are also VM-level diff'ing utilities for forensic-level differencing.
Hi westurner!
> Could it also detect changed package files; if there are per-package-file checksums like with debsums and `rpm -V`?
Yes, that's exactly what it does. See https://git.mig5.net/mig5/enroll/src/branch/main/enroll/plat... and https://git.mig5.net/mig5/enroll/src/branch/main/enroll/rpm....
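For anyone unfamiliar, the manual equivalents of that per-package-file verification are roughly these (Enroll's exact invocations may differ):

  debsums --changed    # Debian-likes: list files whose checksums differ from the package
  rpm -Va              # RPM-likes: verify all installed packages against their metadata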
It also tries to ignore packages that came with the distro automatically, i.e. focusing on stuff that was explicitly installed (based on 'apt-mark showmanual' for Debian, and 'dnf -q repoquery --userinstalled' (and related commands, like 'dnf -q history userinstalled') for RH-likes).
> Does it check extended filesystem labels with e.g. getfacl for SELinux support?
Not yet, but that's interesting, I'll look into it.
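If I do add it, it would presumably come down to capturing per-path output along these lines (illustrative path only):

  getfacl /etc/myapp/config.ini    # POSIX ACLs
  ls -Z /etc/myapp/config.ini      # SELinux context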
> At least once I've scripted something better than regex to convert a configuration file to a Jinja2-templated configuration file (from the current package's default commented config file with the latest options).
Yep, that was the inspiration for my companion tool https://git.mig5.net/mig5/jinjaturtle (which enroll will automatically try and use if it finds it on the $PATH - if it can't find it, it will just use 'copy' mode for Ansible tasks, and the original files).
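Conceptually, the conversion looks like this (a made-up sshd_config example; the exact variable names jinjaturtle chooses may differ):

  # original line in /etc/ssh/sshd_config
  PermitRootLogin no

  # templates/sshd_config.j2
  PermitRootLogin {{ sshd_permit_root_login }}

  # role vars
  sshd_permit_root_login: "no"

so the value moves into Ansible vars and the file becomes a reusable template.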
Note that running the `enroll manifest` command against multiple separate 'harvests' (e.g. harvested from separate machines), while storing the output in the same common manifest location, will 'merge' the Ansible manifests, thereby 'growing' the Ansible manifest as needed. Each host then 'feature flips' on/off which files/templates should be deployed to it, based on what was 'harvested' from that host.
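Conceptually, that per-host 'feature flip' works like guarding roles with host-level booleans, something like this (an illustration of the idea, not Enroll's literal variable names or layout):

  # host_vars/web01.yml
  deploy_etc_custom: true

  # site.yml
  - hosts: all
    roles:
      - role: etc_custom
        when: deploy_etc_custom | default(false)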
> Does it log a list of running processes and their contexts; with `ps -Z`?
It doesn't use ps, but it examines systemctl to get a list of running services and also timers. Have a look at https://git.mig5.net/mig5/enroll/src/branch/main/enroll/syst...
Thanks for the other ideas! I'll look into them.
Thanks for your reply. Some further thoughts and questions:
Does it already indirectly diff the output of `systemd-analyze security`?
Would there be value to it knowing the precedence order of systemd config files? (`man systemd.unit`)
How would you transform the generated playbooks to use a role from ansible-galaxy (instead of Ansible builtins) to create users, for example?
How would you generate tests or stub tests (or a HEALTHCHECK command/script, or k8s Liveness/Readiness/Startup probes, and/or a Nagios or Prometheus monitoring config), given the Ansible inventory and/or just enroll?
Ansible Molecule used to default to pytest-testinfra for the verify step, but the docs now mention an Ansible-native way that works with normal inventory, which can presumably still run testinfra tests as a verify step. https://docs.ansible.com/projects/molecule/configuration/?h=...
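e.g. a molecule.yml verifier stanza along these lines (from memory; double-check against the linked docs):

  verifier:
    name: ansible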
macOS: homebrew_tap_module, homebrew_module, homebrew_cask_module, osx_defaults_module
Conda (Win/Mac/Lin, AMD64, ARM64, PPC64, RISC-V 64 (*), WASM)
CycloneDX/cyclonedx-python generates SBOMs from venv, conda, pip requirements.txt, pipenv, poetry, pdm, uv: https://github.com/CycloneDX/cyclonedx-python
Container config: /var, $DOCKER_HOST, Podman, Docker, $KUBECONFIG defaults to ~/.kube/config (kubectl config view), Podman rootless containers
Re: vm live migration, memory forensics, and diff'ing whole servers:
Live migration and replication solutions already have tested bit-level ~diffing that would also be useful to compare total machine state between 2 or more instances. At >2 nodes, what's anomalous? And how and why do the costs of convergence-based configuration management differ from golden-image-based configuration management?
E.g. vmdiff diffs VMs. The README says it only diffs RAM on Windows. E.g. AVML and linpmem and volatility3 work with Linux.
/? volatility avml inurl:awesome https://www.google.com/search?q=volatiloty+avml+inurl%3Aawes...
Very cool! Managing one's boxes as cattle and not pets almost always seems like a better idea in retrospect, but historically it is easier said than done. Moreover, I like the idea of being able to diff a box's actual state against its current Ansible configuration to verify that it actually is as configured, for further parity between deployed and planned.
Definitely! It's all too easy to make a direct change and later forget to 'fold it in' to Ansible and run a playbook. My hope is that `enroll diff` serves as a good reminder if nothing else.
I'm pondering adding some sort of `--enforce` argument to make it re-apply a 'golden' harvest state if you really want to be strictly against drift. For now, it's notifications only though.
poor man’s NixOS
If NixOS was this easy to onboard, we'd have an easier time of it.
- Sent from my NixOS daily-driver, which only cost me a small number of grey hairs