I hope this is a signal that a third cloud option, BYOC (build your own cloud), is finally becoming practical. Yes, the physical management of racks is a massive part of running a cloud, but the software stack is honestly why AWS and the like are winning much of the time, at least for the small use cases I have been a part of. I priced out some medium servers, and the cost of buying enough for the load plus extras for failover, and hosting them, was -way- under AWS and other cloud vendors (these were GPU loads), but managing them was the issue. 'Just spin up an instance...' is such a massive enabler for ideas. Something that gives me a viable software stack to easily build my own cloud on is a huge win for abandoning the major cloud vendors. Keep it coming!
What about OpenStack, or even CloudStack?
I think the main selling point for SMEs (with a small IT team) is that Proxmox is very easy to set up (download the ISO, install the Debian-based system, ready to go). CloudStack seems to require a lot of work just to get it running: https://docs.cloudstack.apache.org/en/latest/quickinstallati...
Maybe I'm wrong, but where I'm from, companies with fewer than 500 employees make up something like 95% of the country's workforce. That's big enough for a small cluster (in-house/colocation), but too small for anything bigger.
Yeah. The keys here are 'easy' and 'I can play with it at home first'. Let's be honest, being able to throw together a bunch of old dead boxes and put Proxmox on them in a weekend is a game changer for the learning curve.
The main reason I never tried OpenStack was that the official requirements were more than I had in my home VM host, and I couldn't figure out whether the hardware requirements were hard minimums or just suggestions.
Proxmox has very little overhead. I've since moved to Incus. There are some really decent options out there, although Incus still has some gaps in the functionality Proxmox fills out of the box.
PLEASE DON'T DOWNVOTE ME TO HELL. THIS IS A DISCLAIMER: I AM JUST SHARING WHAT I'VE READ, I AM NOT CLAIMING THEM AS FACTS.
...ahem...
When I was researching this a few years ago, I read some really long, in-depth, scathing posts about OpenStack. One of them explicitly called it a childish set of glued-together Python scripts that falls apart very quickly when you get off the happy path.
OTOH, opinions on Proxmox were very measured.
> When I was researching this a few years ago, I read some really long, in-depth, scathing posts about OpenStack. One of them explicitly called it a childish set of glued-together Python scripts that falls apart very quickly when you get off the happy path.
And according to every ex-Amazonian I've met, the core of AWS is a bunch of Perl scripts glued together.
This looks like exactly what everyone wanted before VMware decided to release that bloated pig named vCloud Director.
If it scales and the Proxmox team can grow their support organization, they'll have a real shot at capturing significant VMware market share.
This seems to be more of a vCenter counterpart; vCloud Director was more about multi-tenancy (and multi-cloud).
But a great step nonetheless! Hope they grow too.
A vCenter runs one or more datacentres, but only for one organisation or org umbrella. A PDM can connect to and control multiple "trusting" parties.
I (we) have several customers with PVE deployments and VPNs etc. to manage them. PDM allows me to use a single pane of glass to manage the lot, with no loss of security. My PDM does need to be properly secured, and I need to ensure that each customer is properly separated from the others (minimal IPsec Phase 2 selectors, plus firewall ingress and egress rules at all ends for good measure).
I should also point out that a vCenter is a Linux box with two Tomcat deployments and 15 virtual discs. One Tomcat is the management and monitoring system for the actual vCenter effort. Each one is a monster. Then you slap on all the other bits and pieces - their SDN efforts have probably improved since I laughed at them 10+ years ago. VMware encourages you to run a separate management cluster, which is a bit crap for any org under, say, 5000 users.
PDM is just a controller of controllers and that's all you need. Small, fast and a bit lovely.
Wouldn't this be more like
Proxmox Datacenter Manager = VMware vCenter
Proxmox VE = VMware ESXi
Just migrated from XCP-ng 7 to Proxmox 9.1 for a client this week.
Honestly, the whole process was incredibly smooth - loving the web management and native ZFS. Wouldn't consider anything else as a type 1 hypervisor at this stage, and really, unless I needed live VM migrations, I can't see a future where I'd need anything else.
Managed to get rid of a few docker cloud VPS servers and my TrueNAS box at the same time.
I'd prefer if it was BSD based, but I'm just getting picky now.
Why did you leave XCP-ng? It also seems really nice.
Budget-sensitive client that didn't want to pay for the XCP-ng tools needed in version 8, and the server needed a hardware upgrade from SSDs to NVMe drives anyway, so we just ripped the bandaid off at the same time.
Finally, what we have all been waiting for!
Though I don't quite get the requirement for a hardware server - wouldn't it make much more sense to run this in a VM? Or is this just worded poorly?
You can absolutely run it in a VM. I spun up an instance the other day in a VM and have had no problems.
I assume you want to run it outside the clusters it manages.
I love Proxmox as a virtual server manager - I can't imagine running anything else as a base for a homelab. Free, powerful, VMs or CTs operating quickly, graphical shell for administration, well documented and used, ZFS is a first class citizen.
I've kind of wanted to build a three node cluster with some low end stuff to expand my knowledge of it. Now they have a datacenter controller. I'd need to build twice the nodes.
Question: Does anyone know large businesses that utilize proxmox for datacenter operations?
Yes! In this great video from Level1Techs, Wendell walks around a brand new AI GPU datacenter, and an engineer explains what they use for all the normal stuff :-)
Inside the Modern Data Center! SuperClusters at Applied Digital https://youtu.be/zcwqTkbaZ0o?si=V2uPScjyN_sJcIh7&t=696
The company I work for is migrating a few hundred VMware hosts to Proxmox due to licensing and cost considerations. In our case, since most of the hosts are not clustered, the migration process is quite straightforward. The built-in migration tool proves to be exceptionally effective.
Both my current org and my previous (large) org have mentioned it many times as an option, but both ended up choosing other commercial alternatives: Hyper-V and XenServer.
I think the missing datacenter manager was causing a lot of hesitation for those that don't manage via automation.
We run Proxmox on a bunch of hardware servers, but for the "homelab" we use Ubuntu on ZFS + an Incus cluster. What I'm watching is IncusOS: a radically new approach to a base cluster OS - no SSH, no configuration. So far it looks too radical, but eventually I see it as the only way to go for somebody who has a "zoo" of servers behind Tailscale: just a base OS which upgrades safely, immutable and encrypted, without any unique configuration. The vision looks beautiful.
> I've kind of wanted to build a three node cluster with some low end stuff to expand my knowledge of it. Now they have a datacenter controller.
You can set up a cluster to play with multiple nodes without the just-announced PDM 1.0. Or you can use PDM to manage three standalone nodes.
If you want to do both, perhaps a 3-node cluster plus a 1-node standalone with a PDM 'overlay'. So just a +1 versus a 2x.
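For what it's worth, clustering the nodes themselves is only a couple of commands; a minimal sketch (cluster name and IP are made-up example values):

    # on the first node: create the cluster
    pvecm create homelab

    # on each additional node: join it to the cluster
    pvecm add 192.168.1.10

    # verify quorum and membership
    pvecm status

PDM then sits on top of that and just needs to be able to reach the nodes' API.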
I run roughly 30 PVE hosts across several customers (all ex-VMware). A few more to migrate.
You can migrate a three node cluster from VMware to PVE using the same hardware if you have a proper n+1 cluster.
iSCSI SANs don't (yet) do snapshots on PVE. I did take a three-node Dell + flash SAN setup, plus an additional temporary box with rather a lot of RAM and disc (ZFS), pulled the SSDs out of the SAN, and whistled up a Ceph cluster on the hosts.
For another customer, I simply migrated their elderly VMware-based cluster (a bit of a mess with an EqualLogic) to a smart new set of HPEs with flash on board - a Ceph cluster. That was about two years ago. I patched it today, as it turns out. Zero downtime.
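For anyone wondering what "whistling up" a Ceph cluster on PVE hosts roughly involves, it's along these lines (network, device and pool names are example values; run per node where it makes sense):

    # install the Ceph packages on each node
    pveceph install

    # initialise Ceph, pointing it at the storage network
    pveceph init --network 10.10.10.0/24

    # create a monitor and manager (typically on three nodes)
    pveceph mon create
    pveceph mgr create

    # create one OSD per flash device
    pveceph osd create /dev/nvme0n1

    # create a pool to back the VM discs
    pveceph pool create vmpool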
PVE's high availability will auto-evacuate a box when you put it into maintenance mode, so you get something akin to VMware's DRS out of the box, for free.
PDM is rather handy for the likes of me that have loads of disparate systems down the end of VPNs. You do have to take security rather seriously and it has things like MFA built in out of the box, as does PVE itself.
PVE and PDM support ACME too and have done for years. VMware ... doesn't.
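To give a flavour of the ACME bit: ordering a Let's Encrypt certificate for a node is roughly this (account name, email and domain are placeholders):

    # register an ACME account (Let's Encrypt is the default directory)
    pvenode acme account register default admin@example.com

    # tell the node which domain the certificate is for
    pvenode config set --acme domains=pve1.example.com

    # order and deploy the certificate
    pvenode acme cert order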
I could go on somewhat about what I think of "Enterprise" with a capital E software. I won't but I was a VMware fanboi for over 20 years. I put up with it now. I also look after quite a bit of Hyper-V (I was clearly a very bad boy in a former life).
> I'd need to build twice the nodes.
Why twice the nodes? The manager is optional -- but do you need multiples?
Also, when I looked into clusters (which I haven't implemented), I did see QDevices. It's a way to have a cheap, weak third node just to break ties.
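A rough sketch of how a QDevice gets wired in, assuming the tie-breaker is some small Debian box at an example IP:

    # on the tie-breaker box (not a cluster member)
    apt install corosync-qnetd

    # on every cluster node
    apt install corosync-qdevice

    # from one cluster node, register the tie-breaker with the cluster
    pvecm qdevice setup 192.168.1.50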
I set it up for my small company 5 years ago. Couldn't be happier with it honestly.
Some screenshots would be nice.
Sadly, I'd hoped they would add:
> Off-site replication of guests for manual recovery in case of datacenter failure.
which would've been an actual killer feature
You can use ZFS to replicate your VMs. IIRC each VM disk has its own ZFS dataset. There's also a config file per VM that you'd need to replicate.
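A minimal sketch of doing that by hand with ZFS send/receive (pool, dataset, VM ID and remote host are made-up example values; the config path is the usual PVE location for QEMU VMs):

    # snapshot the VM's disk dataset
    zfs snapshot rpool/data/vm-100-disk-0@offsite-2024-01-01

    # full send of the first snapshot to the off-site box
    zfs send rpool/data/vm-100-disk-0@offsite-2024-01-01 | \
        ssh backup-host zfs receive backuppool/vm-100-disk-0

    # later runs only ship the incremental delta
    zfs send -i @offsite-2024-01-01 rpool/data/vm-100-disk-0@offsite-2024-01-02 | \
        ssh backup-host zfs receive backuppool/vm-100-disk-0

    # the VM definition itself is a small text file worth copying too
    scp /etc/pve/qemu-server/100.conf backup-host:/root/pve-configs/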
Unreadable webpage on mobile. Text goes off the screen, and if you zoom out, the overflowing text is on a white background.
I use this bookmarklet on my phone when I encounter a page like that, and it usually makes things better.
javascript:(function(){document.head.insertAdjacentHTML('beforeend','<meta name="viewport" content="width=device-width"/><style>body{word-break:break-word;-webkit-text-size-adjust:none;text-size-adjust:none;}</style>');})();
It does three things: it adds a viewport meta tag for proper mobile scaling, prevents long words/URLs from breaking the page layout, and disables automatic font size adjustment in Safari in landscape mode.
Another VM platform I've heard good things about (but not used personally) is XCP-ng:
* https://en.wikipedia.org/wiki/XCP-ng
(There's also OpenStack.)
I've heard good things about XCP-ng as well and tried it out at home, and Proxmox seems much easier to use out of the box. Not saying XCP-ng is bad, just that it wasn't as intuitive to me as Proxmox was when we were moving away from VMware.
Ex XCP-ng user here. The web management portal requires Xen Orchestra, which needs to be installed as a separate VM (which can be irritating) with a separate paid license. Proxmox has a web GUI natively on install, which is super convenient and pretty much free for 90% of use cases.
And there's also Triton[0] and vanilla SmartOS[1] it's based on
[0] https://github.com/TritonDataCenter/triton
[1] https://docs.smartos.org/
So why not Kubernetes?
K8S doesn't scale nearly as well due to etcd and latency sensitivity. Multi-site K8S is messy. The whole K8S model is overly complex for what almost any org actually needs. Proxmox, Incus, and Nomad are much better designed for ease of use and large scale.
That said, I still run K8S in my homelab. It's an (unfortunately) important skill to maintain, and operators for Ceph and databases are worth the up-front trouble for ease of management and consumption.
Multi-site k8s is also very "interesting" if you encounter anything like variable latency in your network paths. etcd is definitely not designed for use across large distances (more than a 10km single-mode fiber path).