Plex with Intel Arc 380

Xelas

Ars Praefectus
5,444
Subscriptor++
I just received an Intel Arc A380 GPU today (bought for ~$115) that I intend to use exclusively as a HW transcoder for my Plex system, since my current host (an Intel Xeon E3-1245v3) can't transcode 4K video at all and struggles with more than 2-3 1080p transcodes. I think it's the cheapest way to get a hardware transcoder, and for on-the-fly transcoding, Quick Sync quality is decent enough. Numerous reports indicate that it performs well in that role even without ReBAR, which my motherboard doesn't support and which doesn't work via PCI pass-through in any case. We'll see. I plan to pass the video card through to my Plex OS, which runs as a VM in ESXi - I haven't found anyone running it that way, so I thought I'd take the plunge and report back.

The stupid hiccup is that my motherboard is a SuperMicro X10SL7-F with an x16 and an x4 slot, and I just discovered that they are only one slot apart, so I can't fit both the GPU and my HBA (the video card is two PCI slots wide because of the bulky cooler). I just ordered a vertical GPU re-mounting kit for my PC case that should come in this weekend. Meh.

Hopefully I'll have time to do some testing and I'll update this thread.
 
Last edited:
  • Like
Reactions: Nevarre

Xelas

Ars Praefectus
5,444
Subscriptor++
Supposedly the A40 or A310 are single-slot, but I can't find any sign of them at any retailer, unfortunately. They would be very handy as transcode horsepower for fairly low power and price.
Yep. The Arc A380 is fairly cheap, though, at $115, so it was either get one now or wait for those to turn up.

Installing the PCI-E flexible riser cable and slot turned into a bit of an ordeal. Amazon didn't deliver the damned thing until after 8pm on Sunday ("Next Day" Prime - LOL), and installing it cascaded into me rerouting half the cables in the system, so sorry for the wait.

So, no dice on the A380 via passthrough. It's detected by the host (ESXi), lets me toggle passthrough, and then shows up in the VM, but consistently with the frustratingly vague "Code 43" error in Device Manager within the Windows 10 VM, and it just stubbornly doesn't work. The Intel Arc drivers don't see or find it. If I connect a monitor to it and reboot (or shut down and restart), the BIOS boot screens show up when the host hardware boots, but the output goes black when ESXi boots and stays black when the VM boots. The card shows up as two devices in the PCI list - a video card and a sound source - and I tried passing through just the video as well as both video and sound. My system doesn't support SR-IOV, but it does have proper VT-d, and other devices have worked in passthrough without issues, so I think the base system itself is sound.

I had no picture at all until I turned on "Above 4G Decoding" in the BIOS. I never realized that my VM was installed with "BIOS" instead of "UEFI" firmware, and I'm wondering if that is having a detrimental effect.¹ I found some chatter on other forums from people trying to virtualize Intel Arc GPUs, and it's mostly failures interspersed with a few successes, but there's very little info on what it needs to work properly (99% of the posts are "it works for me, don't know what your problem is"). Most people complain that even when it does work, it's not entirely stable: some have issues with the VM hanging during shutdown, or need to reboot the VM once after a cold start to get it to work, or rely on other hackish workarounds.
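For the record, these are the VMX advanced-config tweaks that come up most often in those threads. I'm listing them as candidates to try, not a known-good recipe for Arc - the 64GB MMIO size in particular is just the commonly suggested value for an 8GB-class card:

Code:
# Added via "Edit Settings > VM Options > Advanced > Configuration Parameters"
# (or directly in the .vmx). Values are the commonly suggested ones, not verified for Arc:
pciPassthru.use64bitMMIO = "TRUE"       # map the card's BARs above 4G
pciPassthru.64bitMMIOSizeGB = "64"      # reserve enough 64-bit MMIO space
hypervisor.cpuid.v0 = "FALSE"           # hide the hypervisor bit from the guest driver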

I'll try spinning up a Linux guest and maybe a Win11 guest to try for luck as well.

¹ Nope - changing the VM to UEFI and also "upgrading" the VM hardware compatibility from ESXi 6.5 to ESXi 7.x didn't help.
 
Last edited:

Xelas

Ars Praefectus
5,444
Subscriptor++
"X10 Uniprocessor PCI-E Graphics does not support SR-IOV"
Boooo!!! The x16 PCI-E slot hangs off the CPU, so I'm not sure why there's a limitation, but the relevant bit is in the FAQ answer quoted below. The FAQ mentions a compatibility chart for SR-IOV, but I have not been able to find it anywhere.
The CPU definitely supports VT-d, so it looks like I'm held back by the motherboard in some way.
There is no "SR-IOV Enable/Disable" option anywhere in the BIOS. VT-d is enabled and verified working.
Question
For SR-IOV and device pass-through support on Supermicro X10/X11 single-processor, dual-processor, and multi-processor motherboards, are there any limitations or requirements to support both features?
Answer
SR-IOV requires a CPU with IOMMU (VT-d or AMD-Vi) support. Users can initially check the CPU feature support list; for Intel CPUs, check Intel ARK.

Second, the system BIOS is also required to support:
  1. Intel Virtualization Technology (VT), enabled.
  2. VT-d, enabled.
  3. ACS support and ARI forwarding (for some particular cards, such as the NVIDIA T4).
In addition, the network adapter, controller, or LOM must support SR-IOV. For Intel NICs, users can check the article with the supported matrix.

Supermicro X10/X11 dual-processor and multi-processor motherboards support SR-IOV on the supported Windows and Linux OS matrix. Users can check the Supermicro compatibility chart for best practice.

Trying to deploy SR-IOV on X10 UP boards may fail at device assignment due to ACS support inside Windows; this is due to a limitation of the PEG and C220-series PCH.
For X10 UP boards such as the X10SL7-F/X10SLE-DF/X10SDD-16C-F/X10SDD-F/X10SLA-F/X10SLD-F/X10SLD-HF/X10SLE-F/X10SLE-HF/X10SLH-F/X10SLL+-F/X10SLL-S/X10SLL-SF/X10SLM+-F/X10SLM+-LN4F/X10SLM-F, the default LOM/MicroLP network controller is mostly the Intel i210, which does not support SR-IOV. The X10 UP PEG does not support SR-IOV, either.
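For anyone else checking their own board, the PCIe capability list is easy to dump from any Linux live USB; the 03:00.0 address below is just an example, so substitute whatever lspci reports for your card:

Code:
# Find the GPU's PCI address, then dump its capabilities.
lspci | grep -i vga
sudo lspci -vvv -s 03:00.0 | grep -iE 'single root|access control'
# No "Single Root I/O Virtualization (SR-IOV)" line means plain VT-d
# passthrough is the only option on that slot.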
 

Xelas

Ars Praefectus
5,444
Subscriptor++
I run my home server on Proxmox, and have Plex in a container. The nice thing about that is it's super easy to share host hardware with containers. GPU passthrough is fraught with headaches.
Just Plex in a docker in Proxmox? How does it access the graphics card? Intel recently released Linux drivers for the Arc cards, so this may be a viable option.

I would have to pretty much tear down and rebuild my whole setup, though. I'm running TrueNAS, and I guess if I were to move to Proxmox I should be able to move the drive pools over? I'd need to re-create all of my shares though, right?
I have Plex in a Windows VM and pretty much the full ***arr stack set up in another Windows VM (plus a couple of Linux VMs running some other stuff, but those should be easy enough to move), and it's all been running flawlessly for almost a decade. Because the paths would change for pretty much everything, I'd need to rebuild lists and metadata from scratch. Ugh.
 

Xelas

Ars Praefectus
5,444
Subscriptor++
Progress! Kinda.

I can only work on this for a few hours per weekend, so progress has been in lurches.

I finally got the GPU to work in passthrough in ESXi to an Ubuntu guest, after a ton of time figuring out the magic combo of tweaks and what felt like hundreds of host reboots. Intel only officially supports Ubuntu 20.04 (Focal) or 22.04 (Jammy), so I went with Jammy.
I get video out. In fact, you have to remove the ESXi SVGA device from the VM to get this to work (not uncommon with other GPUs as well), which kills the GUI console, which meant I had to get USB device passthrough working (another ordeal in itself) and dedicate a monitor to the effort. The drivers load and run.

The bad news is that the VM is REALLY unstable. The GUI (GNOME? Wayland?) in the VM dies occasionally (though daemons and applications continue to run - my test Plex server install was still working and serving video), the interface is REALLY laggy and lurchy, and the whole VM is prone to completely shutting down at random intervals. I haven't had time to look at log files yet.

The disheartening part is that, even after all the effort to get through the crashes and slowness to get Plex installed and working, it does NOT use hardware transcoding in my test runs (yes, I have Plex Pass and I checked all the right boxes in Settings). I'll try to look at the Plex logs later this week to see why it falls back to software - whether the drivers are crashing or whether Plex just can't find the GPU in this setup for some reason.
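Before digging into the Plex logs, the sanity check I plan to run inside the Ubuntu guest is roughly this (package names are the stock Jammy ones; the renderD128 path is an assumption, so check the ls output first):

Code:
# Confirm the Arc card exposes a DRM render node to the guest.
ls -l /dev/dri

# Install the VA-API userspace pieces and query the device directly.
sudo apt install -y vainfo intel-media-va-driver-non-free intel-gpu-tools
vainfo --display drm --device /dev/dri/renderD128

# The official Plex package runs as the "plex" user; it needs access to the render node.
sudo usermod -aG render,video plex

# Watch the Video/Render engines while forcing a transcode in Plex;
# no activity here means it fell back to software.
sudo intel_gpu_top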

Replicating the exact same ESXi settings and tweaks for the Windows VM didn't help - I still get the Code 43 error. If I have to rebuild the Plex library to move it from Windows to Linux (the paths change when changing OSes, even when the files don't move), then I might as well just jump into Proxmox wholesale, containerize everything, and bail on ESXi.
 
Last edited:

Cool Modine

Ars Tribunus Angusticlavius
8,539
Subscriptor
Just Plex in a docker in Proxmox? How does it access the graphics card? Intel recently released Linux drivers for the Arc cards, so this may be a viable option.

I would have to pretty much tear down and rebuild my whole setup, though. I'm running TrueNAS, and I guess if I were to move to Proxmox I should be able to move the drive pools over? I'd need to re-create all of my shares though, right?
I have Plex in a Windows VM and pretty much the full ***arr stack set up in another Windows VM (plus a couple of Linux VMs running some other stuff, but those should be easy enough to move), and it's all been running flawlessly for almost a decade. Because the paths would change for pretty much everything, I'd need to rebuild lists and metadata from scratch. Ugh.
Yeah, it would be a bit of work to change OS. You could probably migrate the Windows VMs, but there's a bit of work in setting up all the new stuff, rebuilding configuration, etc. Another option could be a dedicated transcoding PC. A used Dell desktop goes pretty cheap, doesn't draw much power, and an i3 or Pentium of a new enough generation can handle a lot of streams.

But if you're leaning towards Proxmox, here are the notes I took on setting up video passthrough a few years ago. I've had my Plex server running this way for a while without issue. Containers run under the host's OS kernel, unlike VMs, so all you're really doing is taking the /dev entries for the GPU and mapping them into the container.

Code:
=========================
Plex with Intel video passthrough:
https://forums.plex.tv/t/pms-installation-guide-when-using-a-proxmox-5-1-lxc-container/219728
https://www.reddit.com/r/Proxmox/comments/glog5j/lxc_gpu_passthrough/


QSV requires the fb0 device.  VAAPI does not.
fb0 does not populate on the host unless a monitor is connected.  However, QSV does not work in a container if we pass through fb0.
QSV only works if a monitor is not connected, no fb0 is passed through, and we use the mknod method in the container to create the device.


On the host:
apt-get install i965-va-driver vainfo -y


Get the device IDs:

ls -l /dev/dri
total 0
drwxr-xr-x 2 root root         80 Jul 17 15:02 by-path
crw-rw---- 1 root video  226,   0 Jul 17 15:02 card0
crw-rw---- 1 root render 226, 128 Jul 17 15:02 renderD128


Create a container, then edit the conf file.  Device IDs go into the conf file:

vi /etc/pve/lxc/103.conf
Add the following lines:
    lxc.cgroup.devices.allow = c 226:0 rwm
    lxc.cgroup.devices.allow = c 226:128 rwm
    lxc.mount.entry: /dev/dri/card0 dev/dri/card0 none bind,optional,create=file,perms=666
    lxc.mount.entry: /dev/dri/renderD128 dev/dri/renderD128 none bind,optional,create=file,perms=666

If using Jellyfin, select VAAPI HW acceleration.


For QSV:
vi /etc/pve/lxc/103.conf

lxc.cgroup.devices.allow = c 226:0 rwm
lxc.cgroup.devices.allow = c 226:128 rwm
lxc.cgroup.devices.allow = c 29:0 rwm
lxc.autodev: 1
lxc.hook.autodev: /var/lib/lxc/103/mount_hook.sh

vi /var/lib/lxc/103/mount_hook.sh

#!/bin/bash
# Create the GPU (and framebuffer) device nodes inside the container rootfs at start.
mkdir -p ${LXC_ROOTFS_MOUNT}/dev/dri
mknod -m 666 ${LXC_ROOTFS_MOUNT}/dev/dri/card0 c 226 0
mknod -m 666 ${LXC_ROOTFS_MOUNT}/dev/dri/renderD128 c 226 128
mknod -m 666 ${LXC_ROOTFS_MOUNT}/dev/fb0 c 29 0

chmod 755 /var/lib/lxc/103/mount_hook.sh




=========================
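One caveat if you're on a newer Proxmox than when I took these notes: 7.x and later default to cgroup v2, so the device-allow keys need the cgroup2 prefix (same example container ID 103 as above):

Code:
# /etc/pve/lxc/103.conf on Proxmox 7.x+ (cgroup v2):
lxc.cgroup2.devices.allow: c 226:0 rwm
lxc.cgroup2.devices.allow: c 226:128 rwm
lxc.mount.entry: /dev/dri/card0 dev/dri/card0 none bind,optional,create=file,perms=666
lxc.mount.entry: /dev/dri/renderD128 dev/dri/renderD128 none bind,optional,create=file,perms=666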
 

Xelas

Ars Praefectus
5,444
Subscriptor++
Cool Modine,
Great writeup (seriously, thank you). The problem is that the Intel Arc GPUs seem to need a different driver than the one the integrated GPUs use, and Intel released drivers that ONLY seem to work in Ubuntu, so it would not work in Proxmox for use in a container.
That said, I'm seeing some success stories from users getting passthrough to work under Proxmox to a full-VM Ubuntu guest with Arc GPUs, and that setup is viable in my case, but it would mean a complete rebuild of, well, everything.
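If I do go the Proxmox route, the first thing I'd check on the host is whether its kernel even binds the card - the container approach above only works if the host's i915/Xe driver claims the Arc and creates the /dev/dri nodes. A quick check (the PCI address output will vary):

Code:
# On the Proxmox host: is the A380 claimed by a kernel driver?
lspci -nnk | grep -iA3 'vga\|display'
dmesg | grep -i -e i915 -e xe
ls -l /dev/dri    # card*/renderD* showing up here is what a container would need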

Yes, I can also buy a dedicated Plex-only host, but I travel a ton (over 100 hotel nights in 2022), and the ability to 100% remotely manage my system via VPN is a big benefit. I lose that with a stand-alone host.

EDIT: I just realized that I can probably migrate my 3 VMs (TrueNAS, a Windows VM, and an Ubuntu VM) as-is from ESXi to Proxmox (even if I can't find a VM conversion utility, I can probably do a backup/restore within the VM itself), then peel services out of the VMs into containers one by one, shutting each down in the VM as I go, and kill the VMs once I have all of the services containerized. That's a lot less daunting than trying to rebuild everything over a single crazy 3-day sprint on a long weekend. I even have a new 256GB SSD I can use to stand up Proxmox on without wiping ESXi out, so I can always fall back if I screw Proxmox up. In any case, my ESXi install is still USB-based (continuously upgraded from ESXi 5.0 to 7.0U3!), and I would have needed to migrate it to an SSD soon anyway - that's why I bought the new SSD in the first place.

Hmm.
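For my own notes, the rough path for moving a VM over looks something like this - qm importdisk is the stock Proxmox tool, and the VM ID, storage name, and vmdk path below are placeholders (a Windows guest would also want VirtIO drivers installed before the switch):

Code:
# Copy the ESXi disk (descriptor + -flat file) to the Proxmox host, then:
qm create 200 --name truenas --memory 16384 --cores 4 --net0 virtio,bridge=vmbr0
qm importdisk 200 /mnt/migration/truenas.vmdk local-lvm
qm set 200 --scsihw virtio-scsi-pci --scsi0 local-lvm:vm-200-disk-0 --boot order=scsi0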
 
Last edited:

GeneralFailureDriveA

Ars Scholae Palatinae
1,185
Subscriptor
Wouldn't a USB external multi-TB drive with media you care to watch be easier for hotel trips? Or, alternately, Plex sync of media to your device before travel, transcoded as needed?

GPU vendors seem oddly set on breaking any sort of virtualized use of GPUs that would otherwise fully support it, as a way to drive people to far more costly variants whose only difference is a driver that doesn't actively try to foil VM use.

I am interested in results, though!
 

Xelas

Ars Praefectus
5,444
Subscriptor++
That'll have to wait for a few more weeks. I want to wait for a long weekend to stand up Proxmox and port the VMs over in case it takes more than a few hours, and long weekends with free time are a very rare thing lately. I am definitely interested in doing this, so the project is NOT DEAD.

Plex sync is great, but it still requires transcoding, because there is no way I'll get a 4K movie onto my phone via hotel WiFi, and it takes FOREVER to transcode via CPU. I don't need more than 720p when I'm traveling, and 720p is significantly smaller than even 1080p. There are many more reasons, including general laziness, so I'd rather work on getting HW transcoding working so that I never have to worry about file versions or incompatible subtitles again.

Yes, I can also carry around an external drive with stuff, but that's a hassle to deal with, especially on a plane.
 

Xelas

Ars Praefectus
5,444
Subscriptor++
No, I actually haven't. Life and work caught up with me, so the card is just gathering dust. That said, my setup has been based on the free VMware ESXi for the last 10 years, and Broadcom just killed that, so I'm no longer able to get updates or downloads, or possibly even reactivate the ESXi license if I suddenly have an issue. It's all been rock-steady for over a decade, but it's time to move before I have a problem, probably to Proxmox. I'll have some time to deal with this towards June/July, and I'll definitely try to get the A380 working in passthrough in Proxmox.
 

gryerse

Smack-Fu Master, in training
2
Sorry to hear that it's been sidelined - and it sucks having to migrate off ESXi due to policy changes.
One of my servers is fairly similar to what you are trying to accomplish.
I have a Dell 3930 that I spec'd an E-2288G into (for Quick Sync) and added a P2000 for AI and transcoding on separate VMs. I'm much more familiar and comfortable with XCP-ng, but I ended up having to use Proxmox for the hypervisor due to XCP-ng's lack of Thunderbolt support (used by the iGPU for monitor out, and thus for enabling QSV transcoding).
I'm sure I'm going to miss some key points since it was years ago that I set it all up, but here are the VM configs currently:

VM1.jpg

VM2.jpg

Items of note (a rough command sketch follows the list):
  • Use the same boot method for VMs as the host (in my case UEFI)
  • Use a Q35 machine type
  • Dump the entire PCI address to the VM
  • You may need to install PVE headers prior to splitting off the PCI address
  • Exclude your GPU from being used by Dom0 prior to VM creation
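Here's roughly what that boils down to on the Proxmox side; treat the VM ID, PCI address, and device ID as placeholders and pull the real ones from lspci -nn on your host:

Code:
# 1. Enable the IOMMU and load the vfio modules on the host.
#    /etc/default/grub:  GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on iommu=pt"
#    /etc/modules:       vfio, vfio_iommu_type1, vfio_pci (one per line)
update-grub && update-initramfs -u && reboot

# 2. Bind the GPU to vfio-pci so Dom0 leaves it alone (ID from lspci -nn).
echo "options vfio-pci ids=8086:56a5" > /etc/modprobe.d/vfio.conf

# 3. Attach the whole PCI device to a UEFI/Q35 VM.
qm set 100 --machine q35 --bios ovmf --hostpci0 0000:01:00,pcie=1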

There are many great write-ups of Proxmox with GPU passthrough already online. I had to cobble quite a few together for my particular setup (iGPU with Thunderbolt out plus a dGPU - highly do not recommend), but a single dGPU should be comparatively easy.
CraftComputing on YT is very easy to follow and has some unique use cases if you want to share GPU resources (Nvidia-only).

Best of luck!!
 
  • Like
Reactions: Xelas