What are you replacing vSphere with? Or: Broadcom gets absurd on pricing.

Whittey

Ars Tribunus Militum
1,849
Redundancy is important in some edge scenarios. We ended up rolling out a gaggle of 3-node Nutanix clusters to our medical centers to host an EMR downtime box, a DC or two, an SCCM box, and a couple of must-be-local-type servers that require higher availability. That said, all the other sites large enough to want/need a DC and an SCCM box just get a single host.

But in the datacenter? I just don't see the ROI, and I seriously question the storage performance. Also, last I checked, Nutanix can't do stretched clusters without VMware? That doesn't sound ideal.

For me, the SAN I've got is pretty much a requirement for any possible VMware replacement. At the moment I believe that leaves only Hyper-V as a possible alternative, but good lord do Microsoft try to make things difficult.
 

oikjn

Ars Scholae Palatinae
969
Subscriptor++
@Whittey right, I think a 3-node cluster is their minimum size, but even there, 3 nodes can run a pretty decent workload. Mind saying roughly what those 3 nodes cost? Last I priced it out, it was getting too close to six figures for us to look at it too seriously when we could easily get away with an existing SAN and two traditional hosts. I fear we will also go back to Hyper-V, but I really don't want to go through that VM migration again like we did going from Hyper-V to vCenter.
 

Whittey

Ars Tribunus Militum
1,849
@Whittey right, I think a 3-node cluster is their minimum size, but even there, 3 nodes can run a pretty decent workload. Mind saying roughly what those 3 nodes cost? Last I priced it out, it was getting too close to six figures for us to look at it too seriously when we could easily get away with an existing SAN and two traditional hosts.
3 nodes of: 1x 6326 CPU, 256GB RAM, 4x 3.84TB SSD, 2x 25Gb Mellanox card and optics
6 VM ROBO licensing
3 years production (not MC) support
~50k USD

Then add in some Windows licensing for our use case. It certainly isn't cheap, but I don't see it as all that bad.
 

oikjn

Ars Scholae Palatinae
969
Subscriptor++
3 nodes of: 1x 6326 CPU, 256GB RAM, 4x 3.84TB SSD, 2x 25Gb Mellanox card and optics
6 VM ROBO licensing
3 years production (not MC) support
~50k USD

Then add in some Windows licensing for our use case. It certainly isn't cheap, but I don't see it as all that bad.
I forget the licensing option we went with, but I don't think it was the full AHV HCI stack. 50k for that hardware and all isn't bad, but at 6 VMs it's a bit $$ on a per-VM basis for general-use VMs. I get it, though; it's not too bad if it's dedicated to LoB applications.
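For anyone doing the same napkin math, the per-VM number works out roughly like this (the Windows licensing figure is a made-up placeholder, not from the quote above):

Code:
# Rough per-VM cost for the 3-node ROBO quote above.
# The Windows licensing number is a hypothetical placeholder for illustration.
cluster_cost_usd = 50_000       # hardware + 6-VM ROBO licensing + 3yr support (from the post)
windows_licensing_usd = 6_000   # placeholder; depends entirely on your agreement
vm_count = 6

per_vm = (cluster_cost_usd + windows_licensing_usd) / vm_count
print(f"~${per_vm:,.0f} per VM over the 3-year term")   # ~$9,333 per VM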
 

r0twhylr

Ars Tribunus Militum
2,131
Subscriptor++
I think it goes EOL later this year or early next.
VRTX went EOL 1/31 2022, and I don't know of a good floor-standing, quiet-enough-in-an-office platform that is similar. There are some Supermicro rackmount chassis (look up the Twin series) that will host 4 blades in a single chassis with integrated storage, but I don't know whether that storage is shared or dedicated to each blade.
 

w00key

Ars Praefectus
5,908
Subscriptor
VRTX went EOL 1/31 2022, and I don't know of a good floor-standing, quiet-enough-in-an-office platform that is similar. There are some Supermicro rackmount chassis (look up the Twin series) that will host 4 blades in a single chassis with integrated storage, but I don't know whether that storage is shared or dedicated to each blade.
Supermicros are for copy-paste nodes with their own storage; I haven't seen any with embedded shared SAS storage.

The VRTX is unique in that it includes two weird "shared" SAS cards, so each node is redundantly connected to the mini MD disk array inside. The storage isn't completely dumb: you define your RAID1/5/whatever pools there and carve out virtual disks to serve as VMFS / cluster-attached volumes.

I think you could even attach external MD1200s and use them the same way: two special-sauce cards for 4 nodes instead of the usual 2 SAS cards per node (and 8 cables) for redundancy.
 

r0twhylr

Ars Tribunus Militum
2,131
Subscriptor++
Supermicros are for copy-paste nodes with their own storage; I haven't seen any with embedded shared SAS storage.

The VRTX is unique in that it includes two weird "shared" SAS cards, so each node is redundantly connected to the mini MD disk array inside. The storage isn't completely dumb: you define your RAID1/5/whatever pools there and carve out virtual disks to serve as VMFS / cluster-attached volumes.

I think you could even attach external MD1200s and use them the same way: two special-sauce cards for 4 nodes instead of the usual 2 SAS cards per node (and 8 cables) for redundancy.
Thanks, I kind of suspected that. Yeah, VRTX was pretty unique. I actually had a demo unit here at my house a couple years ago. Unfortunately it had definitely seen better days. The management card wouldn't boot up, and the box didn't have a redundant one.

Even with the non-shared storage on a Supermicro though, you could put Proxmox on that and use it as HCI.
 

SandyTech

Ars Legatus Legionis
13,235
Subscriptor++
VRTX went EOL 1/31 2022, and I don't know of a good floor-standing, quiet-enough-in-an-office platform that is similar. There are some Supermicro rackmount chassis (look up the Twin series) that will host 4 blades in a single chassis with integrated storage, but I don't know whether that storage is shared or dedicated to each blade.
Pity that.

Honestly I don't think there is (or possibly ever will be) an equivalent replacement. The deployments where we would have used a VRTX for a client are probably going to be replaced with a couple of T3xx or T5xx hosts with one of the gruntier Synology boxes for storage.
 
  • Like
Reactions: r0twhylr

Demento

Ars Legatus Legionis
13,754
Subscriptor
We're on VXRail, because the contractors that set it up had to assume the worst in who was getting hired to replace them. :p
And we've just updated with the same because other issues meant we were too close to end of support on the existing ones to really re-engineer a solution.

I was assuming Nutanix would provide a good non-VMware approach, but I hadn't realised they can't do a stretch cluster without VMW. That's a bit of a bastard, since running dual-site normally, but being able to run everything single-site, is a required function. Which pretty much only leaves Proxmox as our next choice, I think. We're going to re-engineer for 5 years from now to avoid the current mess where we had to go Dell/VMW whether we wanted to or not.
 

tremere98

Smack-Fu Master, in training
29
A PSA for any of you in Citrix’s sphere - Citrix will give you up to 10,000 socket licenses of XenServer AND co-term them to your current Citrix licenses:


I’ve never personally used it, but if our AWS migration continues to drag closer to our VMware renewal I’m going to take a pretty hard look at it for my farm.
 
  • Like
Reactions: Demento

chalex

Ars Legatus Legionis
11,286
Subscriptor++
Back in 2018 or so we switched from Dell+NetApp hardware with VMware over to Proxmox hyperconverged on Supermicro whiteboxes.

Typical whitebox: 1U, max CPU (e.g. 2x 96-core Epyc), max RAM (~2TB), and a couple of NVMe drives (e.g. 2x 3.6TB), with a 40Gbps interconnect.
You need at least three boxes for a cluster, and with default Ceph RBD you get 3x replication, so usable space is one third of raw.
You can get three such boxes for ~$100k capex, with no software licensing costs.

Our bigger cluster is now up to 12 such boxes, hosting generic Linux and Windows VMs. $0 licensing cost over the last few years.

If you want to test it out, set up any 3 spare machines of any spec. With HDD storage over a 1GbE network the storage won't be very usable, but it's good enough for a POC.
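Rough napkin math on the capacity side, using the whitebox spec above (the ~20% free-space headroom is just my rule of thumb, not a Ceph default):

Code:
# Back-of-the-envelope usable capacity for a 3-node Proxmox+Ceph cluster
# with 3x replication. Numbers mirror the whitebox spec above; the 80%
# "keep some free space" factor is my own rule of thumb, not a Ceph setting.
nodes = 3
nvme_per_node = 2
nvme_tb = 3.6
replication = 3        # default RBD pool size
headroom = 0.8         # leave ~20% free so recovery/rebalance has room

raw_tb = nodes * nvme_per_node * nvme_tb
usable_tb = raw_tb / replication * headroom
print(f"raw: {raw_tb:.1f} TB, usable: ~{usable_tb:.1f} TB")
# raw: 21.6 TB, usable: ~5.8 TB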
 
Back in 2018 or so we switched from Dell+NetApp hardware with VMware over to Proxmox hyperconverged on Supermicro whiteboxes.

Typical whitebox: 1U, max CPU (e.g. 2x 96-core Epyc), max RAM (~2TB), and a couple of NVMe drives (e.g. 2x 3.6TB), with a 40Gbps interconnect.
You need at least three boxes for a cluster, and with default Ceph RBD you get 3x replication, so usable space is one third of raw.
You can get three such boxes for ~$100k capex, with no software licensing costs.

Our bigger cluster is now up to 12 such boxes, hosting generic Linux and Windows VMs. $0 licensing cost over the last few years.

If you want to test it out, set up any 3 spare machines of any spec. With HDD storage over a 1GbE network the storage won't be very usable, but it's good enough for a POC.
How painful did you find Ceph? I've not done it, but I understand it needs feeding and watering. I'm (seriously) home-labbing Triton SDC for a variety of reasons and it's very much in this mould; the pros are arguably a better security model, the cons being that it's idiosyncratic.
 

chalex

Ars Legatus Legionis
11,286
Subscriptor++
My background is distributed storage so I found Ceph super-easy, but anyway this is Proxmox-managed Ceph, so you really just let Proxmox handle it for you in the Proxmox GUI, following the Proxmox manual; there's no need to mess with Ceph directly. I don't have any site-specific configuration. You just need suitable hardware, enough disks, and enough interconnect. The default block-only Ceph with 3x replication means you probably want at least two disks per host and at least 3 hosts. And I only use it for VM OS disks, so it doesn't have to be big.

https://pve.proxmox.com/wiki/Deploy_Hyper-Converged_Ceph_Cluster (ignore the CephFS stuff, you don't need that)
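And if you'd rather watch it from a script than the GUI, something like this should work against the stock PVE REST API (hostname, node name and credentials are placeholders; treat it as a sketch to verify rather than gospel):

Code:
# Minimal sketch: poll Proxmox-managed Ceph health over the PVE REST API.
# "pve1.example.com" / "pve1" and the credentials are placeholders;
# verify=False only because homelabs tend to have self-signed certs.
import requests

PVE = "https://pve1.example.com:8006"
AUTH = {"username": "root@pam", "password": "changeme"}

s = requests.Session()
s.verify = False

ticket = s.post(f"{PVE}/api2/json/access/ticket", data=AUTH).json()["data"]
s.cookies.set("PVEAuthCookie", ticket["ticket"])
s.headers["CSRFPreventionToken"] = ticket["CSRFPreventionToken"]

status = s.get(f"{PVE}/api2/json/nodes/pve1/ceph/status").json()["data"]
print(status["health"]["status"])   # e.g. HEALTH_OK / HEALTH_WARN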
 

chalex

Ars Legatus Legionis
11,286
Subscriptor++
Looking at your mention of Triton SDC, I think that's the Joyent SmartOS stuff, so that's going to be way more complicated, IMHO, since Proxmox is just Debian Linux. But I guess if you're not that familiar with Linux or Ceph, then it's not any easier than any other system.

Joyent was based on Illumos, so it was never that interesting to me as a Linux sysadmin. As far as I can tell, all that stuff is obsolete now, maybe like VMware will be in another 5 years if Broadcom locks it all down to high-end enterprise only.
 
Looking at your mention of Triton SDC, I think that's the Joyent SmartOS stuff, so that's going to be way more complicated, IMHO, since Proxmox is just Debian Linux. But I guess if you're not that familiar with Linux or Ceph, then it's not any easier than any other system.

Joyent was based on Illumos, so it was never that interesting to me as a Linux sysadmin. As far as I can tell, all that stuff is obsolete now, maybe like VMware will be in another 5 years if Broadcom locks it all down to high-end enterprise only.

Yeah, so Joyent got bought by Samsung and a bunch of prominent folks left, but at some point Samsung offloaded the Triton stuff to a hosting provider in Michigan (MNX) who seem to employ engineers working on it and offer support for private cloud. I haven't looked at Solaris seriously since about 2008 (the job at the time still had Sun boxen), so catching up on OpenSolaris, which begat Illumos and a handful of distributions, was a bit of a revelation. Anyway, their flavour (SmartOS) lives on, as does Triton, and both are actively maintained.

To say that Triton is idiosyncratic would be an understatement, but what particularly attracts me to it is principally the API-drivenness, the tight bhyve/ZFS/zones integration, and the state of storage solutions for bare-metal k8s.

Anyway, I didn't come here to be a shill for that: all of the above is also spurred on by the feeling I have in me bones that there's going to be a big swing away from the public cloud in the next year or two. I think the Broadcom VMware stuff muddies the water for mid-sized folks (at best), and smaller shops will be running for the hills. I can only hope that means a bit more innovation and competition.
 

chalex

Ars Legatus Legionis
11,286
Subscriptor++
the feeling I have in me bones that there's going to be a big swing away from the public cloud in the next year or two
That may be, but I would stick with all-Linux stuff.
For example, even most of the FreeBSD-based commercial and semi-commercial products are moving to Linux underneath (e.g. FreeNAS).
 

Arbelac

Ars Tribunus Angusticlavius
7,449
I'm swapping my home systems over to Proxmox from vSphere. Long overdue anyways, but just using Proxmox for hosting (no Ceph, I have 2 iSCSI arrays).

So far I'm validating it and it seems good. Had to dick around with the Windows VirtIO drivers a bit.

Will be trying a Linux VM this week (hopefully), and if it pans out I'll start migrating VMs over and reinstalling Prox on my old VMware hosts.
 
  • Like
Reactions: IncrHulk

Paladin

Ars Legatus Legionis
32,552
Subscriptor
I tried Proxmox a couple of years back at work as a proof of concept. We already use KVM regularly, but I wanted to try the improved management, integration and features of Proxmox. We use VMware on a few very small setups as well, so there is some motivation to look around again. We also use Hyper-V on a couple of setups. Multiple personality disorder around here. We have a Virtuozzo deployment too that we just got up and running a few months back; it is not very productive yet.

I really liked Proxmox initially, but we ran into a really weird performance issue using it on Dell M620 and M630 blade servers for compute (with iSCSI to EqualLogic storage). The general networking was fine, but for some reason the iSCSI storage networking was very unreliable. At times it would be fine for minutes at least and then just crawl, and we could never figure it out. Plain CentOS or Debian live installs on the same hardware with KVM, VMware or Windows worked great, but something about the config or driver packages that came with Proxmox made it freak out occasionally. It would drop iSCSI traffic almost to a standstill for 10 to 30 seconds, sometimes enough to let the session drop and the storage volume go read-only unless we extended the timeout.

Even after weeks of research, testing, and fiddling with the hardware and the configurations, we just gave up. Tried a few community support posts and such, but no one ever came up with anything useful. The fact that things worked fine on basic Linux installs, Windows and VMware seems to indicate the hardware and basic config are fine, but something about Proxmox is tweaked differently somehow. We could never find it, though.

I would guess we'll try it again sometime soonish just to see if anything changed or maybe we made a mistake of some kind. Personally, I would rather use rackmounts with local, fast storage pooled for replication or something similar but we'll see.
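(For reference, the usual knob for that read-only flip is the open-iscsi replacement timeout. A rough sketch of bumping it, assuming the stock Debian /etc/iscsi/iscsid.conf with the setting present and uncommented; the value is just an example:)

Code:
# Minimal sketch: raise the open-iscsi replacement timeout so a short stall
# doesn't error the device out and flip the volume read-only.
# Assumes /etc/iscsi/iscsid.conf already has the line uncommented (default is 120).
# Existing sessions need a re-login (or iscsiadm --op update) to pick it up.
import re
from pathlib import Path

CONF = Path("/etc/iscsi/iscsid.conf")
NEW_TIMEOUT = 600  # example value, in seconds

text = CONF.read_text()
text = re.sub(r"^node\.session\.timeo\.replacement_timeout\s*=.*$",
              f"node.session.timeo.replacement_timeout = {NEW_TIMEOUT}",
              text, flags=re.M)
CONF.write_text(text)
print("updated replacement_timeout; re-login iSCSI sessions to apply")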
 
That may be, but I would stick with all-Linux stuff.
For example, even most of the FreeBSD-based commercial and semi-commercial products are moving to Linux underneath (e.g. FreeNAS).
Now this I also didn't know. So what's their plan there? TrueNAS is the 'free' offering (still BSD-based) but TrueNAS SCALE is a commercial Linux offering? Honestly this is news to me, but yeah, they seem to be squaring up to Proxmox, no?
 

chalex

Ars Legatus Legionis
11,286
Subscriptor++
Now this I also didn't know. So what's their plan there? TrueNAS is the 'free' offering (still BSD-based) but TrueNAS SCALE is a commercial Linux offering? Honestly this is news to me, but yeah, they seem to be squaring up to Proxmox, no?
No, close: IIRC TrueNAS SCALE is Linux-based, and then there is a community version and a commercial enterprise version. And then they (iXsystems) also have a hardware offering that is a dual-controller HA filer running that same software stack.
I have the new TrueNAS SCALE (community version) on a couple of filers (~45-disk Supermicro boxes) and I don't really like it because it's too appliance-y, but at the same time it's fine. It's like Synology: you get what you get for the OS and you can't change the config besides the disk layout and the share settings. But in the long run, because it's Linux, you can have all kinds of "apps" on top, like Docker and VMs and anything else.

So to bring it back to this thread: if you're more familiar with ZFS than Ceph, you could do a TrueNAS SCALE box and NFS-share the storage to your Proxmox boxes, and that would be a $0-license-cost alternative to something like a NetApp filer + VMware vSphere.
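Attaching that share to Proxmox is then a single storage definition, run once on any cluster node; something like this (storage ID, server IP and export path are placeholders for your own setup):

Code:
# Minimal sketch: add a TrueNAS NFS export as shared Proxmox storage.
# Storage ID, server address and export path are placeholders.
import subprocess

subprocess.run([
    "pvesm", "add", "nfs", "truenas-nfs",   # hypothetical storage ID
    "--server", "192.168.10.50",            # the TrueNAS box
    "--export", "/mnt/tank/proxmox",        # NFS export on the filer
    "--content", "images,iso",              # VM disks + ISO images
], check=True)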
 

Demento

Ars Legatus Legionis
13,754
Subscriptor
It's been a couple of years, but at our last POC w/ Proxmox it was lacking even the polish of Hyper-V.
If you want shiny and enterprise-grade support, Nutanix is the only real alternative.
Proxmox is brilliant, but I wouldn't use it as the foundation for a sizeable company's live environment. I am totally going to have one in the lab, though; getting ready for when it is shiny enough for my corporate masters.
 
  • Like
Reactions: r0twhylr
We're going to be doing a PoC of OpenShift. OpenShift AI looks interesting as well.

Running Harvester at home. Works pretty good but the guests aren't getting full 10GbE performance on the network, only about 5Gb. Might be that the guests need more vCPU to get up there but I haven't really found anything on it in the forums and haven't tried it yet as it's not a huge deal at home. Otherwise Plex and OPV have been solid on it.
 
  • Like
Reactions: wobblytickle
Running Harvester at home. Works pretty good but the guests aren't getting full 10GbE performance on the network, only about 5Gb.
Interesting! I'm familiar with Rancher but not this (given it's like 4 years old, I feel very out of the loop). It seems to do a lot of things right (immutable, looks like a slick KubeVirt integration) but... Longhorn.
 
  • Like
Reactions: WingMan

chalex

Ars Legatus Legionis
11,286
Subscriptor++
I've been through several Red Hat hands-on labs with OpenShift (which is just their k8s distro) and the OpenShift Virtualization feature (which is k8s + Knative + KubeVirt), and it's clear that it's way more complicated than you need if you just want to run some VMs, unless you really are a whole enterprise development team doing a multi-year rewrite of all your apps into a "cloud-native" architecture. And obviously the smaller the environment, the less sense it makes.

OTOH, I don't know that Proxmox has a lot of 100+ node clusters deployed out there. So there is definitely a limit where your needs can be "too big" for Proxmox.
 
  • Like
Reactions: sryan2k1

sryan2k1

Ars Legatus Legionis
44,493
Subscriptor++
I've been through several Red Hat hands-on labs with OpenShift (which is just their k8s distro) and the OpenShift Virtualization feature (which is k8s + Knative + KubeVirt), and it's clear that it's way more complicated than you need if you just want to run some VMs, unless you really are a whole enterprise development team doing a multi-year rewrite of all your apps into a "cloud-native" architecture. And obviously the smaller the environment, the less sense it makes.

OTOH, I don't know that Proxmox has a lot of 100+ node clusters deployed out there. So there is definitely a limit where your needs can be "too big" for Proxmox.

Similar story with OpenStack, where one of the Nova developers admitted they'd failed at the mission.

It can be fine at scale with multiple people managing it full time, but I've seen one guy throw it up as a side project and abandon it, and then everyone is stuck with the house of cards.


I wouldn't even blink an eye at building a 1-, 10-, or 100-node VMware cluster. It's not so simple anywhere else.
 

Paladin

Ars Legatus Legionis
32,552
Subscriptor
Similar story with OpenStack, where one of the Nova developers admitted they'd failed at the mission.

It can be fine at scale with multiple people managing it full time, but I've seen one guy throw it up as a side project and abandon it, and then everyone is stuck with the house of cards.


I wouldn't even blink an eye at building a 1-, 10-, or 100-node VMware cluster. It's not so simple anywhere else.
Yup, we went through that song and dance twice in around 5 or 6 years, and the problem got worse from the first deployment attempt to the second. They have a huge problem with project scope creep/sprawl and a real issue understanding who they are targeting. They don't make it clear at all that OpenStack is really aimed at huge enterprise deployments where you talk in terms of thousands of hosts, not tens. Basically it seems to be designed around the assumption that you need so much hardware to support your anticipated load that you don't care how much of it gets consumed by the overbloated hypervisor, integration, and other code to string it all together. It's weird. On a fresh deployment hand-crafted by Ubuntu's own in-house people, we had 9 hosts, 3 management hosts, and 6 storage nodes, and almost all of them were at around 40% CPU and RAM load before a single virtual machine or container was up and running. D:


Virtuozzo is basically OpenStack with a lot of the bloat removed. It still takes a lot more host resources than you might like, and it is harder to maintain and manage than you would hope, but it is a lot closer to a viable vSphere replacement than straight OpenStack is.

Sometimes I think OpenStack exists solely to make it easy for the developers to deploy test instances for developing OpenStack. :sneaky: