ur-tardis
Back in 2013 I was able to cram an Ivy Bridge i5 and six 3TB hard drives into a Mini-ITX Lian Li case. I called the system tardis as it was intended to be both a NAS and a hypervisor, and it has served me well since. The only drawback with the system was the limited memory support: Intel's Ivy Bridge processors support 32GB of RAM, but the Intel H61 Express chipset used on the ECS H61H2-MV motherboard I chose only supports 16GB, and that turned out to be the main limiting factor as memory was the first resource I ran out of.
Shortly after building tardis I ended up building a VMware lab that I used for the next couple of years to do demos for work. Once I moved on from that position I no longer needed those systems (more mini-ITX systems with Ivy Bridge processors), so I set them aside. Eventually both systems returned to service as virt02 and archive01: virt02 was added because I was running low on RAM, and archive01 was added because I started running low on storage. I have been running all 3 systems for a year and a bit now, and that has been enough to make me finally decide it's time to upgrade tardis and collapse back down to one system again.



Component | Description |
---|---|
Case | Lian Li PC-Q08B Mini-ITX |
Power Supply | Antec EarthWatts Green EA-380D 380W |
Motherboard | ECS H61H2-MV |
CPU | Intel Core i5 3470S (BX80637I53470S) |
RAM | AMD Radeon™ RE1600 Entertainment Series 16GB (2x8GB) DDR3 1600 |
Video | Intel HD Graphics 2500 |
RAID Controller | Areca ARC-1220 |
Storage | 6x Western Digital Caviar Green 3TB HDD |
How I design computers
I've been building my own PCs for a very long time and over the years have gotten fairly opinionated about how to go about it. There are a lot of guides and YouTube videos online about building computers, but most of them just tell you which parts to buy as if everyone is after the same thing. They tend to elide the why, which is the most important part. If you think first about what you are designing for, there are essentially 4 resources you need to plan for: compute, memory, storage, and power. These all drive the choices you make for each component. More power consumption necessarily means more heat, which means different cooling, enclosure, and power supply options than a lower power budget would require. In some cases compute is so important that everything else can be compromised on. Since I have a known workload, I have a starting point to work from. I built some custom graphs from my monitoring data so that I could look at the 3 hosts I'm trying to combine. I knew I was constrained by RAM, so I looked at that first.
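If you want to pull the same per-host memory numbers yourself, something along these lines works; this is just a sketch that assumes Prometheus with node_exporter sitting behind the Grafana dashboards, and the URL is a placeholder rather than my actual setup.

```python
# Sketch of pulling per-host memory usage out of a metrics store.
# Assumption: Prometheus + node_exporter behind the Grafana dashboards;
# the URL below is a placeholder, not my real server.
import requests

PROM_URL = "http://prometheus.example.com:9090/api/v1/query"
QUERY = "sum by (instance) (node_memory_MemTotal_bytes - node_memory_MemAvailable_bytes)"

resp = requests.get(PROM_URL, params={"query": QUERY}, timeout=10)
resp.raise_for_status()

# Instant-query results come back as one series per host (instance).
for series in resp.json()["data"]["result"]:
    host = series["metric"]["instance"]
    used_gib = float(series["value"][1]) / 2**30
    print(f"{host}: {used_gib:.1f} GiB in use")
```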
As you can pretty plainly see I am using around 20GB of RAM. Since I want to get 5+ years out of this hardware, preferably more, the old rule of thumb of 'double it and round up' made sense, so 64GB of RAM became my target. Now, lots of people have opinions on things like memory speed and CAS timings. Honestly, I don't care; I tend to buy whatever gets me a reasonable price for the quantity I want. Why? Understand that a clock cycle on a modern processor takes something like 250 picoseconds (at a 4.0GHz clock), and if it has to wait around for data from just about anywhere it ends up wasting time. At the moment DDR4-3600 is pretty state of the art, with CAS latencies of around 16. The CAS latency is the number of memory clock cycles between the controller issuing a column read and data starting to come out. DDR memory is named for the fact that data is read/written on both the rising and falling edges of the clock (Double Data Rate), which means the actual clock of the RAM module is half of the 'speed rating'. So DDR4-3600 runs at 1.8GHz, and with a CAS latency of 16 cycles that means you wait 8.89 nanoseconds, or around 36 CPU clock cycles, to fetch data from RAM. If you drop down to DDR4-2400 at the same CAS latency you go up to 13.33 nanoseconds, or about 53 CPU clock cycles. Not really a huge difference in the grand scheme of things, especially when you figure it's the fastest thing outside of CPU cache in your computer, so looking at the timings it should be clear that more is better than faster.
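If you want to play with the arithmetic yourself, here it is as a quick sketch; the 250ps CPU cycle and CL16 figures are the same ones used above.

```python
# The CAS latency arithmetic from above: DDR transfers twice per clock,
# so the memory clock is half the transfer rate, and the CAS delay is
# simply that many memory clocks. CPU cycle time assumes a 4.0GHz core.
CPU_CYCLE_NS = 0.25

def cas_latency_ns(transfer_rate_mts: int, cas_cycles: int) -> float:
    memory_clock_hz = transfer_rate_mts * 1e6 / 2
    return cas_cycles / memory_clock_hz * 1e9

for rate in (3600, 2400):
    ns = cas_latency_ns(rate, 16)
    print(f"DDR4-{rate} CL16: {ns:.2f} ns, ~{ns / CPU_CYCLE_NS:.0f} CPU cycles")
```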
Next up is compute. I know I don't need a GPU as I'm not doing anything that requires one. I have one small MPEG-4 live encoding workload, but that is not enough to justify putting a GPU in a server. On the CPU side I can't say I've felt constrained much; most of the compiling and traditionally compute-intensive work I do tends to run on my laptop. Just to be sure I wasn't missing something, a quick look in Grafana confirmed my suspicion.
CPU selection started the same way as RAM. Today I am looking at 8 cores sitting at around 90% idle on average. I also looked at the cpufreq metric a bit, and these processors spend most of their time throttled down to around 1.9GHz, further pointing to CPU being a non-issue. This is where I became comfortable stepping up to a Ryzen 5 3400G, as its 4 newer cores would very likely replace the 8 older ones easily. Tardis and virt02 both run VMs and I have 16 vCPUs provisioned, so in my case core count matters more than the raw single-core performance you might care about with single-threaded workloads (like video gaming and streaming applications). I'm not entirely convinced that SMT helps a lot with virtual server workloads; the general observation is that a 'thread' is worth around 30% of a core in raw performance in many cases. I suppose that remains to be seen in my case, but I decided to base my decision on core count, ignoring the thread count completely. Some video game engines may be able to take advantage of threads, as they are more likely than general purpose workloads to have been optimized for it, but honestly I find most video games end up being alarmingly single-threaded, so stick with core counts. In my case, the Zen+ powered Ryzen 5 3400G, with its built-in video, 4 cores, and 65 watt power footprint, was the CPU of choice. A quick word on Intel vs AMD: I spent many years being an AMD fanboy, and in fact I still have several systems running AMD Athlon II X2 CPUs from 2009 humming happily away in production. I switched to Intel in my last generation of systems because the price vs performance tradeoff was just a no-brainer. Now, as with much in the technology world, the trophy has gone back to AMD, so this time around I was focused on them.
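Putting rough numbers on that comfort level, 8 cores averaging 90% idle is well under one core of sustained work; the sketch below is illustrative arithmetic from my graphs, not a sizing formula.

```python
# Rough CPU sizing: 8 older cores averaging ~90% idle is well under one
# core's worth of sustained work, which is why 4 newer cores felt safe.
# (Illustrative numbers pulled off my graphs, not a sizing formula.)
old_cores = 8
avg_idle = 0.90

busy_core_equivalents = old_cores * (1 - avg_idle)
print(f"Sustained load across the old hosts: ~{busy_core_equivalents:.1f} cores")
```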
Next up is storage. Tardis is bulk storage. I have been happily running it on 5400RPM drives in a RAID-6 array for years, and other than capacity I have never had problems. I am not putting SSDs in here; the price versus reliability versus capacity tradeoff is a no-brainer. You can listen to the people who pooh-pooh 'spinning rust' all you want, but at the time of writing, for around $1500 you could build an 8x10TB RAID-6 array, which would net you around 54.57TiB of usable space, or an 8x1TB array out of SSDs, netting you around 5.46TiB. Obviously if you are doing something that requires low-latency, high-volume, small, random accesses then an SSD may make sense, but you probably aren't, and I know I'm not. Also, I know there are a million fancy filesystems and volume managers, some even with RAID built in. Don't do that. I have tinkered with most of the combinations out there, and at the end of the day I have found a reasonable RAID card is more performant than the onboard SATA of your motherboard, and more reliable than even the soft RAID stack in Linux. I have found that most of the more exotic filesystems really don't provide anything above good old LVM with ext4 or XFS on top. I plan on experimenting with btrfs this time around, but honestly I don't expect it to matter much for my use case. Say what you will about ext4 and XFS being old, but I bet you every line of code has seen eyes at some point, and when we're talking about storing your data there is nothing more important than a proven history of not getting it wrong. Be suspicious if your filesystem can't drink, be really suspicious if it can't even get a learner's permit (21 and 16 in the US, if you aren't).
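Here is that capacity comparison worked out, in case you want to plug in your own drive counts and sizes; nothing exotic, just RAID-6 parity overhead plus the decimal-TB to binary-TiB conversion.

```python
# RAID-6 usable space: two drives' worth of parity, plus the conversion
# from the decimal TB drives are sold in to the binary TiB you see.
def raid6_usable_tib(drives: int, drive_tb: float) -> float:
    return (drives - 2) * drive_tb * 1e12 / 2**40

print(f"8x10TB HDD RAID-6: {raid6_usable_tib(8, 10):.2f} TiB usable")
print(f"8x1TB SSD RAID-6:  {raid6_usable_tib(8, 1):.2f} TiB usable")
```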
Finally I'll talk about power. For me this isn't something I think about explicitly but rather something I try to balance throughout. If I know I need a GPU then I know I'm going to have to accommodate the power consumption; conversely, if I'm building a server I aim for lower power in general, as it is on all the time. Over the years I have replaced many power supplies in many systems and what I can tell you is this: don't spend money on fancy features, they do nothing for you. I would rather spend more on a 'basic' power supply than pay for higher rated efficiency, modular cables, or RGB LEDs. The power supply is literally the heart of your computer. It is connected to everything, and if it fails in an unsafe manner it can easily destroy your whole PC. It can also cause any number of insanely difficult to troubleshoot problems, or premature failures. Long, long ago, I remember a winter where I had to keep my computer outside just to keep it stable. I tried everything I could think of (and afford at the time): I replaced voltage regulators, fans, and heatsinks, but it just kept crashing when it got hot. I suspected the wacky Cyrix 486DX4/100 I had was overheating, but it turned out that under load the power supply was getting hot, which caused the 5V rail to sag, which would brown out the VRM for the CPU. Long story short: buy good brand-name supplies rated for more than you need, and spend what you would have spent on a fancy blinkenlight unit on a basic one instead.
With the general parts list in mind, mid-last year I started hunting around for parts, trying to optimize cost while keeping everything as compact as possible. I wanted to re-use the really nice Lian Li case that ur-tardis currently lives in, but it just wasn't to be. The problem with staying mini-ITX turned out to be a complete lack of Zen+ APUs. AMD doesn't seem to be shipping any of the Zen 2 APUs to retail, so I had picked out the Ryzen 5 3400G, which is a Zen+ APU, and those were basically unobtainium unless I wanted to pay way too much on eBay. Losing the on-package video means I need an external video card, and that rules out mini-ITX since I need at least 2 decent PCIe slots (an x16 for graphics and at least an x4 for the RAID adapter). Luckily, going up to micro-ATX got me the additional PCIe slot I needed, and as a bonus it also gave me space in the case for two additional hard drives. That means I can use the 3TB drives I have in stock instead of buying new, larger drives to get the capacity I need, saving me essentially the cost of the entire system.
na-tardis
Below is the build I ended up with. This motherboard supports 128GB of RAM, so upgrading in the future is easy enough. The RAID card supports 4 external channels and an additional 4 channels internally, so I can always add more drives, or use in-place expansion to move to larger drives if I decide I need to go that way. Ideally this system will last as long as ur-tardis has. I ordered everything except the CPU and video card, hoping that the 3400G might show back up in stock, but as I was pretty happy with the way things were coming together I gave up waiting and ordered a generic video card and the Ryzen 5 3600.






Component | Description |
---|---|
Case | Fractal Design Node 804 |
Power Supply | Seasonic SSR-550GB3 80+ Bronze 550W |
Motherboard | Gigabyte B550M DS3H |
CPU | AMD Ryzen 5 3600 (100-100000031BOX) |
RAM | Corsair Vengeance LPX 64GB DDR4 2400 |
Video | MSI GeForce GT 710 1GB |
RAID Controller | Areca ARC-1880iXL-12 |
NIC 2 | Intel EXPI9301CTBLK 10/100/1000 PCIe NIC |
Storage | 8x Hitachi Ultrastar HUA723030ALA640 3TB HDD |
Assembly was as easy as you might expect these days, PCs being giant Legos and all. The only real gripe I had was with the Fractal Design case: the indicator LEDs are poorly placed for my use case. The power LED is a blue line on the front of the case adjacent to the side-mounted power switch, and the HDD activity indicator is underneath the lip on the front of the bezel, pointing downwards. I guess it might look neat on a desk, where it makes a little puddle of light in front of your PC, but it is not at all useful in a rack when you want at-a-glance indication of a server's status. Thankfully it was an easy fix: I just reversed the wiring so the blue power LED is now HDD activity.
If you are going to build a system with more than a couple of drives in it, I can't recommend a storage controller with SFF-8087 connectors strongly enough. There are only two cables running from the RAID card back to the HDDs in this system, which is so much easier to wrangle than 8 SATA cables.
It took 14 hours to initialize the RAID-6 array (basically writing 0s to all 8 drives), which is around what I expected. It then took the Debian installer around 4 hours to write random data over the entire resultant array (as part of the encrypted filesystem setup), which was actually a little faster than I expected.
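Taking those times at face value, the back-of-the-envelope throughput works out as below; this is just a rough sanity check that assumes the random fill really did stream over the whole usable array.

```python
# Sanity check on those times. Initialization touches every drive in
# full, in parallel; the random fill streams through the whole usable
# array (6 data drives' worth of space). Rough numbers only.
DRIVES = 8
DRIVE_TB = 3
USABLE_TB = (DRIVES - 2) * DRIVE_TB

init_hours = 14
fill_hours = 4

init_mb_s_per_drive = DRIVE_TB * 1e12 / (init_hours * 3600) / 1e6
fill_mb_s_array = USABLE_TB * 1e12 / (fill_hours * 3600) / 1e6

print(f"Init: ~{init_mb_s_per_drive:.0f} MB/s per drive")
print(f"Random fill: ~{fill_mb_s_array:.0f} MB/s across the array")
```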
I'm reserving judgment until it has been up and in production for a bit, but so far so good. A mini-ITX version of this with a reasonable GPU would make a pretty nice gaming system; if I didn't already have a Coffee Lake system with a GTX 1080 in it I could see upgrading.