HCI works well in small "edge" deployments where you need a couple of server VMs running close to the users. The more you scale up, the more it falls apart.
So you get a 1U/2U Dell box with Hyper-V or ESXi Free or ROBO (RIP) and local storage.
@Whittey right, I think a 3-node cluster is their minimum size, but even there, three nodes can run a pretty decent workload. Mind saying what those 3 nodes roughly cost? Last I priced it out, it was getting too close to six figures for us to really look at it seriously when we could easily get away with an existing SAN and two traditional hosts.
3 nodes of: 1x Xeon Gold 6326 CPU, 256GB RAM, 4x 3.84TB SSD, 2x 25Gb Mellanox NIC and optics
6 VM ROBO licensing
3 years production (not Mission Critical) support
~50k USD
Then add in some Windows licensing for our use case. It certainly isn't cheap, but I don't see it as all that bad.
I forget the licensing option we went with, but I don't think it was the full AHV HCI stack. 50k for that hardware and all isn't bad, but at 6 VMs it's a bit pricey on a per-VM basis for general-use VMs; I get that it's not too bad if it's dedicated to LoB applications.
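To put that per-VM observation in rough numbers, here is a quick back-of-the-envelope sketch using only the figures quoted above (~50k USD for the 3 nodes, ROBO licensing, and 3 years of support, spread over 6 VMs); the Windows licensing mentioned is left out since no price was given for it.

```python
# Rough per-VM cost for the quoted 3-node Nutanix ROBO setup.
# Inputs are only the figures from the comment above; Windows licensing is
# excluded because no price was quoted for it.
bundle_cost_usd = 50_000   # ~50k USD: 3 nodes, 6 VM ROBO licensing, 3 years of support
vm_count = 6
support_years = 3

per_vm_total = bundle_cost_usd / vm_count
per_vm_monthly = per_vm_total / (support_years * 12)

print(f"Cost per VM over {support_years} years: ${per_vm_total:,.0f}")
print(f"Equivalent monthly cost per VM:        ${per_vm_monthly:,.0f}")
# -> roughly $8,333 per VM, or about $230/VM/month, before Windows licensing
```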
VRTX went EOL 1/31 2022, and I don't know of a good floor-standing, quiet-enough-in-an-office platform that is similar. There are some Supermicro rackmount chassis (look up the Twin series) that will host 4 blades in a single chassis with integrated storage, but I don't know whether that storage is shared or dedicated to each blade.
Supermicros are for copy-paste nodes with their own storage; I haven't seen any with embedded shared SAS storage.
Thanks, I kind of suspected that. Yeah, VRTX was pretty unique. I actually had a demo unit here at my house a couple years ago. Unfortunately it had definitely seen better days. The management card wouldn't boot up, and the box didn't have a redundant one.
The VRTX is unique in that it includes two weird "shared" SAS cards, so each node is redundantly connected to the mini MD disk array inside. The storage isn't completely dumb either: you define your RAID1/5/whatever pools there and carve out virtual disks to serve as VMFS or cluster-attached volumes.
I think you could even attach external MD1200s and use them the same way: two special-sauce cards serving all 4 nodes, instead of the usual 2 SAS cards per node (and 8 cables) for redundancy.
Pity that.
How painful did you find Ceph? I've not done it, but I understand it needs feeding and watering. I'm (seriously) home-labbing Triton SDC for a variety of reasons and it's very much in this mould; pros are arguably a better security model, cons being that it's idiosyncratic.
Back in 2018 or so we switched from Dell+NetApp hardware with VMware over to Proxmox hyperconverged on Supermicro whiteboxes.
A typical whitebox: 1U, maxed-out CPU (e.g. 2x EPYC 96-core), maxed-out RAM (~2TB), a couple of NVMe drives (e.g. 2x 3.6TB), and a 40Gbps interconnect.
You need at least three boxes for a cluster; with default Ceph RBD you get 3x replication, so usable space is one third of raw.
You can get three such boxes for ~$100k capex, with no software licensing costs.
Our bigger cluster is now up to 12 such boxes, hosting generic Linux and Windows VMs. $0 licensing cost over the last few years.
If you want to test it out, set up any 3 spare machines of any spec. With HDD storage over a 1GbE network the storage won't be very usable, but it's good enough for a POC.
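As a rough illustration of the replication math above, here is a small sketch using only the numbers quoted in this thread (2x 3.6TB NVMe per node, default 3x RBD replication, ~$100k for three boxes); it ignores Ceph's own overhead and the free-space headroom you would keep for rebalancing.

```python
# Back-of-the-envelope Ceph capacity math for the whitebox cluster described above.
# Assumes 2x 3.6TB NVMe per node and the default 3x RBD replication; ignores
# Ceph overhead and the headroom you'd leave for rebalancing and failures.
nvme_per_node_tb = 2 * 3.6
replication_factor = 3
three_node_capex_usd = 100_000  # quoted ~$100k for three such boxes

def usable_tb(nodes: int) -> float:
    """Usable capacity in TB for a given node count."""
    raw_tb = nodes * nvme_per_node_tb
    return raw_tb / replication_factor

print(f"3 nodes:  {3 * nvme_per_node_tb:.1f} TB raw -> {usable_tb(3):.1f} TB usable")
print(f"12 nodes: {12 * nvme_per_node_tb:.1f} TB raw -> {usable_tb(12):.1f} TB usable")
print(f"Capex per usable TB (3 nodes): ${three_node_capex_usd / usable_tb(3):,.0f}")
# -> 21.6 TB raw / 7.2 TB usable at 3 nodes, roughly $13.9k of capex per usable TB
```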
Looking at your mention of Triton SDC, I think that's the Joyent SmartOS stuff, so that's going to be way more complicated, IMHO, since proxmox is just Debian Linux. But I guess if you're not that familiar with Linux or Ceph, then it's not any easier than any other system.
Joyent was based on Illumos, so it was never that interesting to me as a Linux sysadmin. As far as I can tell, all that stuff is obsolete now, maybe like VMware will be in another 5 years if Broadcom locks it all down to high-end enterprise only.
The feeling I have in me bones is that there's going to be a big swing away from the public cloud in the next year or two.
That may be, but I would stick with all-Linux stuff. For example, even most of the FreeBSD-based commercial and semi-commercial products are moving to Linux underneath (e.g. FreeNAS).
Now this I also didn't know. So what's their plan there? TrueNAS is the 'free' offering (still BSD-based) but TrueNAS SCALE is a commercial Linux offering? Honestly this is news to me, but yeah, they seem to be squaring up to Proxmox, no?
No, close: IIRC TrueNAS SCALE is Linux-based, and there is a community version and a commercial enterprise version. And then they (iXsystems) also have a hardware offering, a dual-controller HA filer that runs that same software stack.
It's been a couple of years, but our last POC with Proxmox was lacking even the polish of Hyper-V.
If you want shiny and enterprise-grade support, Nutanix is the only real alternative.
Running Harvester at home. It works pretty well, but the guests aren't getting full 10GbE performance on the network, only about 5Gb.
Interesting! I'm familiar with Rancher but not this (given it's like 4 years old, I feel very out of the loop). It seems to do a lot of things right (immutable, looks like a slick KubeVirt integration), but... Longhorn.
Yeah, I hear ya there. So far, for home at least, it's working well enough.
I've been through several Red Hat hands-on labs with OpenShift (which is just their k8s distro) and the OpenShift Virtualization feature (k8s + Knative + KubeVirt), and it's clear that it's way more complicated than you need if you just want to run some VMs, unless you really are a whole enterprise development team doing a multi-year rewrite of all your apps into a "cloud-native" architecture. And obviously, the smaller the environment, the less sense it makes.
OTOH, I don't know that proxmox has a lot of 100+ node clusters deployed out there. So there is definitely a limit where your needs can be "too big" for proxmox.
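To make "more complicated than you need" concrete, here is a sketch (mine, not from the thread) of roughly what a minimal KubeVirt VirtualMachine definition involves, written as a Python dict purely to show the nesting; the field names are from memory and should be checked against the KubeVirt docs before relying on them.

```python
# A minimal KubeVirt VirtualMachine object, sketched as a Python dict to show how
# much structure is involved before a single VM boots. Field names are from memory;
# treat this as illustrative rather than a tested manifest.
import json

vm = {
    "apiVersion": "kubevirt.io/v1",
    "kind": "VirtualMachine",
    "metadata": {"name": "testvm", "namespace": "default"},
    "spec": {
        "running": True,                      # start the VM when the object is created
        "template": {                         # pod-like template for the VM instance
            "spec": {
                "domain": {
                    "resources": {"requests": {"memory": "2Gi"}},
                    "devices": {
                        "disks": [{"name": "rootdisk", "disk": {"bus": "virtio"}}],
                        "interfaces": [{"name": "default", "masquerade": {}}],
                    },
                },
                "networks": [{"name": "default", "pod": {}}],
                "volumes": [
                    {"name": "rootdisk",
                     "containerDisk": {"image": "quay.io/containerdisks/fedora:latest"}},
                ],
            },
        },
    },
}

print(json.dumps(vm, indent=2))  # kubectl/oc accept JSON manifests as well as YAML
```

By comparison, on Proxmox or plain libvirt the equivalent is a single qm create or virt-install command, which is roughly the gap the comment above is describing.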
Similar story on OpenStack, which one of the Nova developers admitted they'd failed at the mission.
Yup, we went through that song and dance twice in around 5 or 6 years, and the problem got worse from the first deployment attempt to the second. They have a huge problem of project scope creep/sprawl and a real issue of understanding who they are targeting. They don't make it clear at all that OpenStack is really aimed at huge enterprise deployments where you talk in terms of thousands of hosts, not tens. Basically it seems to be designed around the assumption that you need so much hardware to support your anticipated load that you don't care how much of it gets consumed by the bloated hypervisor, integration, and other code needed to string it all together. It's weird. On a fresh deployment hand-crafted by Ubuntu's own in-house people, we had 9 hosts, 3 management hosts, and 6 storage nodes, and almost all of them were at around 40% CPU and RAM load before a single virtual machine or container was up and running.
It can be fine at scale with multiple people managing it full time, but I've seen one guy throw it up as a side project and then abandon it, and everyone is stuck with the house of cards.
I wouldn't even blink an eye at building a 1-, 10-, or 100-node VMware cluster. It's not so simple anywhere else.