Gentoo Forums :: Kernel & Hardware
Supermicro ZFS Workstation

Spargeltarzan
Guru

Joined: 23 Jul 2017
Posts: 300

PostPosted: Mon Dec 18, 2017 7:57 pm    Post subject: Supermicro ZFS Workstation

Hello dear Community,

I am planning a high-end workstation with a lot of RAM (128 GB, maybe 256 GB later) to serve as my ZFS workstation for engineering and virtualization purposes (with Gentoo of course, what else?).

So far I have found the Supermicro X11DAi-N (205 W TDP, 7.1 HD audio, Thunderbolt AOC support, up to 2 TB RAM, 4x PCIe x16, 2x PCIe x8, 2 UPI links at 10.4 GT/s) - € 532

Intel Xeon Silver 4110, 8x 2.10GHz, tray (CD8067303561400) - € 510

2x KINGSTON 64GB 2666MHz DDR4 ECC Reg CL19 DIMM 2Rx8 Hynix A IDT

VGA passthrough to Windows (the rough KVM/VFIO recipe is sketched below)

The rest of the components (hard disks, SSDs, case, etc.) I already have.
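
For the passthrough part, the rough KVM/VFIO recipe I have in mind looks like this; it is only a sketch, and the PCI IDs are placeholders for whatever GPU I end up buying:

Code:
# Kernel command line: enable the IOMMU and claim the GPU for vfio-pci.
# The 10de:xxxx IDs are placeholders - use the real ones from `lspci -nn`.
# (vfio-pci.ids= works when vfio-pci is built into the kernel; as a module,
# set "options vfio-pci ids=..." in /etc/modprobe.d/ instead.)
intel_iommu=on iommu=pt vfio-pci.ids=10de:1b81,10de:10f0

# After a reboot, check that the GPU sits in a sane IOMMU group:
find /sys/kernel/iommu_groups/ -type l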

The price should not get much higher; ideally it would even be lower, but I couldn't find anything cheaper that takes at least 128 GB of RAM. What do you think about it? Any good alternatives?

EDIT: Manufacturers other than Supermicro are also welcome.
_________________
___________________
Regards

Spargeltarzan

Notebook: Lenovo YOGA 900-13ISK: Gentoo stable amd64, GNOME systemd, KVM/QEMU
Desktop-PC: Intel Core i7-4770K, 8GB Ram, AMD Radeon R9 280X, ZFS Storage, GNOME openrc, Dantrell, Xen

1clue
Advocate

Joined: 05 Feb 2006
Posts: 2562

PostPosted: Wed Dec 20, 2017 12:02 am

Responding to your request from another thread:

Observations that may be pertinent:

  1. Your board uses the Aspeed 2500 VGA controller. Like the Aspeed 2400 on my system, it is not a high-performance graphics card; it's designed to facilitate remote, desktop-style console access. You'll want another video card on top of this.
  2. I strongly recommend that you get at least one PCIe-based SSD; they can outperform any SATA-based device (a quick benchmark sketch follows this list).
  3. You'll want to research this yourself, but you may want to avoid hardware RAID. Searching 'hardware vs software raid' turns up several well-written arguments covering performance (GOOD hardware RAID is faster, but mediocre hardware may not be) and, more importantly to me, portability: software RAID moves easily between different hardware, which hardware RAID does not.
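
To put numbers behind point 2, a quick fio read test works on any block device. The device paths below are examples, and I'd stick to read-only tests (or a file) so no data gets destroyed:

Code:
# Sequential read throughput on a PCIe/NVMe SSD:
fio --name=seqread --filename=/dev/nvme0n1 --rw=read --bs=1M \
    --ioengine=libaio --iodepth=32 --direct=1 --runtime=30 --time_based

# Same test against a SATA device for comparison:
fio --name=seqread --filename=/dev/sda --rw=read --bs=1M \
    --ioengine=libaio --iodepth=32 --direct=1 --runtime=30 --time_based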


I'm not sure how much help I can be here. I love Supermicro hardware; they don't take shortcuts for the sake of price.

However, I barely know what ZFS is, and I have zero experience with GPU passthrough. My Supermicro hardware uses an Aspeed video chip (similar to the one on your board) which is designed to facilitate console-over-LAN, which is pretty much the opposite of the direction you're taking.

My virtualization experience is limited to two or three scenarios:

  1. Mostly headless VMs for enterprise servers
  2. Desktop VMs to facilitate software testing on Windows/Linux systems similar to those used by customers.
  3. I also made a few Windows Server VMs for my employer, for things like Microsoft SQL Server and emulating Windows-based app servers used by customers.


In none of those cases is rapid GUI performance a priority. Mostly I'm using straightforward minimal installations, mainstream filesystems and no frills. I have a few special-purpose setups, but nothing even close to what you're describing.

Jaglover
Watchman

Joined: 29 May 2005
Posts: 7328
Location: Saint Amant, Acadiana

PostPosted: Wed Dec 20, 2017 12:10 am

According to this, hardware RAID has an advantage when more than 10 disks are attached.
_________________
Please learn how to denote units correctly!

Spargeltarzan
Guru

Joined: 23 Jul 2017
Posts: 300

PostPosted: Wed Dec 20, 2017 12:21 am

For ZFS, neither of them is necessary; the best approach is to use an HBA or the motherboard's internal ports and simply create a ZFS pool from the whole raw devices. ZFS does all the work. I did some intense academic research on ZFS, so I know a lot about it, but I know very little about Supermicro.
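
To illustrate (the device names are made up, and raidz2 is just one possible layout):

Code:
# Whole raw disks on an HBA, addressed by stable by-id paths:
zpool create tank raidz2 \
    /dev/disk/by-id/ata-disk1 \
    /dev/disk/by-id/ata-disk2 \
    /dev/disk/by-id/ata-disk3 \
    /dev/disk/by-id/ata-disk4

# ZFS partitions the disks and handles redundancy and checksumming itself,
# with no RAID controller in between:
zpool status tank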

I will use a dedicated graphics card for sure and will also consider a PCIe SSD. Thanks for your input :) What I am unsure about is which motherboard to buy, because Supermicro is not very transparent about it and there are so many different versions available.

mike155
Veteran

Joined: 17 Sep 2010
Posts: 1718
Location: Frankfurt, Germany

PostPosted: Wed Dec 20, 2017 1:38 am

Quote:
8x 2.10GHz

My personal experience is that a CPU with fewer cores but a higher clock (e.g. 4 cores at 4 GHz) nearly always outperforms a CPU with more cores at a lower clock. Of course, it depends on the programs you want to run on your system... But if you want a high-end workstation, you should definitely buy a CPU clocked well above 2.1 GHz.

1clue
Advocate

Joined: 05 Feb 2006
Posts: 2562

PostPosted: Wed Dec 20, 2017 1:46 am

I almost never use RAID, and when I do it's usually raid1 and there are never 10 disks attached. My use cases tend to favor speed over "hot" redundancy, and a good backup plan keeps me fairly safe with respect to device failures and more.

ZFS, the more I hear about it, the more it seems like black magic. At some point I'll read up on it, but not today. :)

Supermicro, here's what I know:

  1. I have direct experience with two boxes: one from my employer which runs ESXi, and the Atom board here in my home office: http://www.supermicro.com/products/motherboard/Atom/X10/A1SRM-LN7F-2758.cfm
  2. The bigger, older E3 box from work has been the most reliable box I've ever administered.
  3. The Aspeed controller and IPMI 2.0 are incredible if you want to manage a server remotely. My Atom box has never had a keyboard or monitor attached; I can watch the boot process, get into the BIOS, whatever I need.
  4. That same Aspeed controller is a closed-source lump of code that can't really be reached by your CPU, yet it can give somebody 1000 miles away hands-on control of your box. Some people have a problem with that; I call it a feature. I also take care that the IPMI-attached NIC has no access to the outside world: it's attached directly to my workstation via a separate NIC on a network with no default route (see the sketch after this list).
  5. Supermicro makes hardware for data centers.
  6. They sell no junk, but their choices might not be yours if you're building a desktop box.
  7. All their tested operating systems, RAM, etc. reflect the sort of testing you'd want if you run a data center. That means Linux probably runs fine on it, but you'll have to ask around to be sure.
  8. I've fooled around, and it's very easy to spend a six-digit number of dollars building a single box there.
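
Concretely, on my Gentoo box that isolation is just a static address with no routes. A netifrc sketch, with a made-up interface name and subnet:

Code:
# /etc/conf.d/net - the workstation NIC dedicated to the IPMI LAN.
# Static address and deliberately NO routes_eth1 line, so this network
# has no default route and cannot reach the outside world.
config_eth1="192.168.77.1/24"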


Personal observations about boards in general:

  1. PCIe 3.0 is mature.
  2. USB 3 is mature.
  3. With hardware like you're describing, consider 10GBase-T networking for at least one NIC, preferably one that will also sync down to gigabit (ethtool sketch after this list).
  4. M.2 is perhaps not yet mature, but I think it's likely the devices that need work, not the interfaces on the motherboards.
  5. M.2 PCIe seems a better arrangement than M.2 SATA.
  6. For a small office or home office, 1U is noisier, more expensive and harder to cool than 4U; 4U compares the same way to a desktop, and a desktop likewise to a tower.
  7. Server hardware tends to assume a controlled environment for cooling and humidity, and often sits in a cleaner room than you'd find at a small site. It's best to go a little overboard on cooling in the box for home/small-office use.
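
For point 3, ethtool will show what a port supports and what it actually negotiated; eth0 is a placeholder:

Code:
# Supported and currently negotiated link speeds:
ethtool eth0

# Pin a 10GBase-T port down to gigabit if the switch can't keep up:
ethtool -s eth0 speed 1000 duplex full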


Based on that, I'd personally go for hardware that supports PCIe 3.0, USB 3 and M.2 PCIe. I'd get a roomy case with external, washable air filters, put a supersized cooler on the CPU(s), take extra care to choose nice big quiet fans, and use internal cables that improve airflow. I'd also plan to open the box and suck out the dust bunnies on a regular basis.

Ant P.
Watchman

Joined: 18 Apr 2009
Posts: 6013

PostPosted: Wed Dec 20, 2017 5:06 am

I'd also add that Intel scalps customers who want to buy a CPU and motherboard with the ECC-disabling bit turned off. You should research both sides, especially now that Threadripper is on sale. All AMD CPUs support ECC (but not all motherboards do, though you should be safe with Supermicro).

Akkara
Administrator

Joined: 28 Mar 2006
Posts: 6702
Location: &akkara

PostPosted: Wed Dec 20, 2017 7:57 am

Quote:
X11DAi-N

I don't know about this motherboard specifically, but I have some experience with Supermicro hardware: generally excellent. However, there are a few things to watch for that caught me by surprise:

Do you need suspend-to-RAM? Apparently, not all of their motherboards support it, and it isn't clear from their site which ones do and which don't. I had set up an X10DRi which doesn't. No idea why. I tried "everything": updated the BIOS, played with settings, tried several different flavors of Linux, and no luck. The kernel goes through all the right motions, all the right things appear in dmesg, and the screen goes blank, but the hardware never powers down. The fans keep spinning, and the CPUs seem to be idle but still draw power, 90-100 watts for the whole system. Push a key on the keyboard to "wake" it and all the right things happen again: the screen comes on, dmesg shows the devices being re-initialized, and so on. Repeated attempts to contact them were met with silence.
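
For reference, "all the right motions" means the standard userspace test, roughly this on any reasonably recent kernel (run as root):

Code:
# Which sleep states the firmware advertises ("mem" = suspend-to-RAM):
cat /sys/power/state

# Trigger suspend-to-RAM:
echo mem > /sys/power/state

# After waking, see what the kernel thought it was doing:
dmesg | grep -i -e suspend -e resume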

Regarding M.2 PCIe: the X10DRi cannot boot from it. It is a trivial driver that's missing from the EFI. It is easy to load via USB using the EFI command prompt, and then suddenly the M.2 storage is "seen" and can be booted (sketch below). Unfortunately, there doesn't seem to be a way of permanently adding it to the EFI BIOS. I emailed them and was told my query was being forwarded to their BIOS specialist, and that's the last I heard from them. Follow-up "pings" went unanswered.
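
The workaround looks roughly like this from the EFI shell, assuming the NVMe DXE driver (commonly named NvmExpressDxe.efi) sits on a USB stick mapped as fs0:, and with a made-up boot loader path:

Code:
Shell> fs0:
FS0:\> load NvmExpressDxe.efi
FS0:\> map -r
FS0:\> fs1:
FS1:\> EFI\gentoo\grubx64.efi

map -r rescans the mappings; once the driver is loaded, the M.2 device shows up as a new fsN: and its boot loader can be started by hand.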

There's a USB socket on the motherboard itself where you can plug in a small memory stick and boot from that, to get around the M.2 problem.

And, finally, it takes an inordinately long time for the BIOS to do its thing before it starts to boot. Normally this would not be an issue, but when the machine can't suspend, one ends up booting a lot more than usual.

Other than all this, it works great.
_________________
Many think that Dilbert is a comic. Unfortunately it is a documentary.

1clue
Advocate

Joined: 05 Feb 2006
Posts: 2562

PostPosted: Wed Dec 20, 2017 11:50 am

Interesting.

I can't say whether either of the boards I've touched supports suspend; they're both server hardware used as servers. My Gentoo Atom box has suspend disabled in the kernel. It does handle low power states, however.

Also, I've never had to open a trouble ticket with Supermicro, so I can't say whether your experience is unique or typical.

1clue
Advocate

Joined: 05 Feb 2006
Posts: 2562

PostPosted: Thu Dec 21, 2017 6:01 pm

I experimented with my Atom box in light of Akkara's comments.


  1. My system does not support sleep states.
  2. It DOES support a ton of power management, C-states, etc., for pretty much every subsystem.
  3. It DOES support wake-on-LAN, controllable for each of the 7 NICs (quick checks after this list).
  4. My system boots in about 2 minutes from power-on to the Gentoo login screen.
  5. My system supports a fast-boot mode that skips some steps; mine is using "slow" boot.
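
The quick checks behind those observations, if anyone wants to compare (eth0 is a placeholder; cpupower is sys-power/cpupower on Gentoo):

Code:
# Sleep states exposed by the firmware/kernel (no "mem" = no suspend-to-RAM):
cat /sys/power/state

# Available C-states and their details:
cpupower idle-info

# Wake-on-LAN capability and current setting for one NIC:
ethtool eth0 | grep -i wake

# Enable magic-packet wake-on-LAN:
ethtool -s eth0 wol g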


It has always been my observation that server hardware takes a long time to boot compared to a workstation or laptop. I'm accustomed to having server hardware take anywhere from 5 minutes to 45 minutes to boot, depending on what kind of hardware and how much junk is packed into the box. The 45 minutes is not an exaggeration. It was an AS/400 I worked with some 15-20 years ago. I was told that the IBM mainframes took even longer.

My BIOS is far more complicated than that of any desktop motherboard I've owned. I have never sat down and gone through it step by step like I have with desktop systems, because I could never dedicate enough time to understand all the options. That said, mine is configured for UEFI-only, which of course means UEFI boot, and the first boot option is the boot loader.

I'm using UEFI+grub2, so that adds a bit of time to the process.

szatox
Veteran

Joined: 27 Aug 2013
Posts: 1776

PostPosted: Thu Dec 21, 2017 8:08 pm

Quote:
The 45 minutes is not an exaggeration. It was an AS/400 I worked with some 15-20 years ago. I was told that the IBM mainframes took even longer.
I've been working with Cisco blades fairly recently: half a TB of RAM, roughly 30 minutes spent in the BIOS. I have seen blades with 1 TB of RAM too; I imagine those would take more time to test all the hardware, though perhaps not twice as long, since part of that time went to firmware compatibility tests (so the manager could force an upgrade or downgrade if it found a difference).