For a while, I’ve been wanting to put together a little homelab server. Right now, I run a lot of services as a few stacks of containers on a laptop. These are things like:

  • A media stack, with Plex + Ombi + Prowlarr/Sonarr/Radarr + Transmission/OpenVPN
  • A pastebin
  • Dedicated servers for various games
  • This blog, a container to build it, and a live preview
  • An Nginx instance
  • A homepage for me, with quick links to my services
  • Nextcloud
  • Wireguard
  • Portainer
  • A VM running home-assistant and more…

The laptop actually runs all of these reasonably well, though the game servers stress it, and transcoding 4K content (especially Dolby Vision) with Plex is a little painful. Most importantly though, I feel like running all my services off a laptop in a closet is bad for my street cred.

Additionally, I do a lot of parallel + distributed computing in my research, usually on HPC clusters provided by universities or various national labs. I really enjoy working on these, and I find making code/algorithms scalable really satisfying. So, it would be nice to have a local testbed for my work there.

The server

I found a deal (I think?) on Craigslist for a SuperMicro X8QB6-F quad-socket motherboard. I couldn’t really find much on the board itself, so I frankly have no idea if the price I paid was a good one… But it was relatively cheap, so I’m not too worried. And my nerd cred couldn’t pass up a quad-socket board!

It came with:

  • 4x Xeon E7-4807 (6 cores, 12 threads, 1.86GHz)
  • 16GB (4x 4GB) of ECC DDR3-1333 (This was supposed to be 32GB… Still waiting on the seller to hook me up with the remaining 16GB, but hopefully that’ll happen this weekend!)

Eventually, I’d like to get a rack going – for now, though, it’s just sitting on my 3D printer enclosure.

I also picked up:

  • 2x 1TB 10k SAS HDDs just to have some storage to get going.

First impressions

Holy crap, the fan array in this thing is LOUD! I’m not sure what I was expecting, but the jet engine on boot is a little funny. First thing I did was set the fan profile in BIOS to “Energy-Saver”, which lets the fans idle at about 30%.

Speaking of “energy saving”… With the 4807s, this thing idles at around 300W! Seeing as my office gets a little cold in the mornings, we’ll call this a feature.

This server has an onboard SATA controller but no SAS controller, so I put in a SATA SSD from an old laptop. I had gotten the part number wrong when researching the motherboard and thought it had a SAS controller, so I also picked up a RocketRaid 2720 HBA (also on Craigslist) to run the SAS drives. More on that in another post, maybe.

I went ahead and installed Proxmox, set up a simple Ubuntu VM, and was able to load up the desktop and interact with it! Success!

I also went ahead and made a template for an LXC container with Docker installed, so I could hit the ground running launching Docker containers. While Proxmox supports LXC containers, there are a few extra steps to get Docker running inside one, so having a template container with Docker pre-installed will save me some time in the future. I’m not sure if I’m doing something non-idiomatic by using Docker on Proxmox in the first place, but I want compatibility with some of the other stuff I use Docker for.
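
For reference, those extra steps mostly boil down to enabling a couple of container features and then installing Docker inside the container as usual. A minimal sketch of the relevant config, assuming a container with ID 200 (the ID is just a placeholder):

    # /etc/pve/lxc/200.conf on the Proxmox host (also settable in the GUI under Options -> Features)
    # nesting lets containers run inside the container; keyctl is typically
    # needed for Docker in unprivileged containers
    features: keyctl=1,nesting=1

With that baked into the template, new containers should come up ready for Docker.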

Running ray for parallel work

I use the Python library ray for a lot of parallelization tasks in my research. With ray, you set up a pool of workers and submit tasks to it, and ray takes care of spreading the work across the pool. What’s nice about ray is that it tries to make it easy to start with a pool of workers on your local machine and then add workers from remote nodes, building up a little cluster.
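
As a rough sketch of what that pattern looks like (the work function here is just a CPU-bound placeholder, not anything from my research):

    import ray

    # With no arguments, this starts a local "cluster" using this machine's cores
    ray.init()

    @ray.remote
    def crunch(seed):
        # Stand-in for a real CPU-bound task
        total = 0
        for i in range(2_000_000):
            total += (i * seed) % 7
        return total

    # Fan the tasks out over the worker pool, then gather the results
    futures = [crunch.remote(i) for i in range(48)]
    results = ray.get(futures)

The nice part is that the same code runs unchanged once the pool is backed by a remote cluster instead of the local machine; only the ray.init() call changes.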

In practice, building up that cluster ends up being sort of a pain in the ass for a lot of reasons that are outside the scope of this particular blog post. But I wanted to just do some simple benchmarking, so I:

  1. Created a container in Proxmox from my Docker template
  2. Set up miniconda and created an environment with scipy and ray
  3. Launched a Jupyter notebook on my local machine
  4. Launched a ray cluster on the server, with ray start --head --num-cpus 48
  5. Connected to the ray cluster from Jupyter with ray.init('ray://server_ip:10001'), and was able to start submitting work! Gotta admit, it’s pretty fun launching 48 tasks and having them all complete simultaneously.
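
Putting steps 4 and 5 together, the notebook side looks roughly like this (server_ip stands in for the container’s address, and the task is a throwaway CPU burner rather than my actual workload):

    import time
    import ray

    # 10001 is the default port for ray's client server on the head node
    ray.init('ray://server_ip:10001')

    @ray.remote
    def burn(n):
        # Placeholder CPU-bound task
        total = 0
        for i in range(n):
            total += i * i
        return total

    # One task per thread on the quad E7-4807s
    start = time.time()
    ray.get([burn.remote(5_000_000) for _ in range(48)])
    print(f'48 tasks finished in {time.time() - start:.1f}s')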

I’ll give more detailed benchmarks in another post, but here’s the quick takeaway for the setup as currently described. The catch with this whole gig is that my desktop’s Ryzen 3800x absolutely SMOKES an E7-4807 in single-core performance. (This is expected, of course – server chips trade off single-core performance for lots of cores.) But that means that in order to actually capture the performance boost from the multicore server, I have to run heavily multithreaded workloads.

Now, frankly, even with a 48-thread workload, these 4807s are a little anemic and don’t beat out my 3800x by much. Single-core, my 3800x is about 4x faster, and even at max utilization with all 48 threads, the quad 4807s only manage a ~25% improvement over it.

So, seems like we’ll need to upgrade! Stay tuned…