Let's get the most out of this beast of an SC846 mod and create a silent home server with massive storage potential. Okay, so I will give you a rundown now that this is in production. I also took the time to read all the posts up to this point and wow. Just wow. This is an epic thread. I won't rehash too much of what was covered unless I saw an open question that I found an answer to. So here is the agenda:

I. Tear-down
II. Build-out
III. OS Selections
IV. Use Cases
V. Additional Mods
VI. Power Breakdown
VII. Conclusions

So grab a drink and let's get this show on the road.

N.B. I got my server from here > http://www.ebay.com/itm/Supermicro-24-Bay-Chassis-SATA-SAS846TQ-Server-AMD-QC-1-80GHz-16GB-H8DME-2

N.B.B. I started this over on AVSFORUMS.COM but the images wouldn’t update. Massive Thread. Epic Info.

I. Tear-down

This came very well packed. The UPS guy handled it like it was nothing, but at nearly 70 lbs in a single block it is an accident waiting to happen if you fumble it. Caution. The molded packaging is one of the best server shipping cases I have seen aside from what a new server arrives in. Awesome. Unboxing.

IMG_0162

It still had the protective tape on and only some very minor scratches around it. I did a quick POST check to ensure I was not dreaming. Plugged in, grabbed my ear protection, and hit the power. 😮

 

It booted up! OMG LOUD. I have spent more than a bit of time around servers, even though I am not an admin. This is louder than a room full of 2950s.

I then proceeded to flash my BIOS with the Rufus utility and the 3.5a release from Supermicro so I could use my 2425 HE pair.

IMG_0171

Something I noticed that you may want to look out for: I found a fine, graphite-looking material appearing behind the internal fan wall. Bad, I thought, yet the drive bays were clean and showed no trace of the material. I tracked it down to the rubber liners for the cable pass-throughs. Here is a pic.

server smut

I ended up removing them, as it turns out they had been covering edges that were already rounded. The build quality on this case is very, very high. Worth well over the price paid. Plus, as an added benefit, rubbing things down with 99% alcohol to clean them is about my favorite thing in the world….:rolleyes:

IMG_0174

I decided to also make sure the entire case was clean, and rough-fit the PSU.

IMG_0176

All in all, the cleanup of the sooty plastic dust took a good deal of time. An hour or so, but at least it won't happen again. I also removed everything as per the first post; those are great directions. On to the build-out!

II. Build-out

I like to workbench an item before I case it up, just to look for any problem spots and also to run my cheap thermal gun over it and check for hot spots. The Southbridge is the hottest item in the case, clocking in at about 55C. That's within range, so it is fine. I used just two of the cheapo AM3 CPU fans I had around from when I was doing lots of Litecoin mining. Very quiet, and they do a good job cooling the chips. The HE and EE parts don't get hot, so it's pretty easy.
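If you want a software cross-check of the thermal-gun readings once an OS is on the box, the kernel exposes whatever hardware monitoring driver is loaded under /sys/class/hwmon. Here is a minimal sketch, assuming a Linux install with a working sensor driver for this board; the chip names and channel labels it prints depend entirely on that driver.

    import glob, os

    # Walk every hwmon device the kernel exposes and print its temperature inputs.
    # sysfs reports temperatures in millidegrees Celsius.
    for dev in sorted(glob.glob('/sys/class/hwmon/hwmon*')):
        name_path = os.path.join(dev, 'name')
        chip = open(name_path).read().strip() if os.path.exists(name_path) else 'unknown'
        for temp in sorted(glob.glob(os.path.join(dev, 'temp*_input'))):
            try:
                millic = int(open(temp).read().strip())
            except (IOError, ValueError):
                continue  # some inputs are present but not readable
            print('%s %s: %.1f C' % (chip, os.path.basename(temp), millic / 1000.0))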

IMG_0173

Time for more Arctic 7 it seems.

Next I replaced the fans in the back with two SilenX 80mm ones. Very quiet 3-pin fans. They work great.

IMG_0182

Next I reused the double-sided sticky pads (the green items in pic 2 above) that came with my garage door insulation kit (they gave me a lot extra and I saved them for some unknown reason… until now). On the outer rim I added window molding along the back and the side that sits flat against the wall. That way the PSU can vibrate on its own if it ever needs to crank up, without shaking the case. This turns out to be way more than is needed; I don't think the case is ever going to hit a high-fan situation.

IMG_0172

 

I then made a fan wall with three 120mm fans and a piece of wire mesh left over from an antenna project.

IMG_0185

Nifty.

IMG_0207

And the finished fan wall, now with some minor attempt at bundling the SATA cable horde.

IMG_0210

It's a bunch of cables, no doubt about that.

IMG_0190

Everything went back in and yeah, that is a build. Tested again and it worked properly. I did a light check on the backplane at this point. Here is how I did that.

I took out every caddy. Then I used the reset switch several times to reboot to POST and watch the lights in the back flash red. Then I swapped groups of three drives in caddies in and out each time to check that the SAT2-MV8 cards functioned. All good.

The cards do work with drives larger than 2TB as well; they won't show past that at POST, but Windows and *nix will see their true size. The Additional Mods section below will have more info and pics, with cool stuff, so read on.
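If you want to confirm from a Linux boot that every bay is populated and that the large drives are being presented at their full capacity (rather than the truncated size the option ROM shows at POST), a quick pass over /sys/block is enough. A minimal sketch, assuming the drives show up as standard sd* devices:

    import glob

    SECTOR = 512  # /sys/block sizes are always reported in 512-byte sectors

    disks = sorted(glob.glob('/sys/block/sd*'))
    print('%d disks detected' % len(disks))
    for disk in disks:
        with open(disk + '/size') as f:
            sectors = int(f.read().strip())
        tb = sectors * SECTOR / 1e12
        flag = '  <-- over 2TB, full size visible' if tb > 2.0 else ''
        print('%s: %.2f TB%s' % (disk.split('/')[-1], tb, flag))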

III. OS Selections

The original intent was to run the rig as a bare-metal hypervisor and do everything I needed inside that. I decided to do a few deploys to test functionality and things like pass-through capabilities before committing to anything. Good thing, too!

I started with Windows Hyper-V Server 2008 R2.

I have used this often in the past, and knew pass-through would be essentially non-existent. I was able to load the SONNET 4xi drivers from the command line and get the machine up and running. Performance, however, was bad. I loaded in a few standard Hyper-V images from Veeam: a Windows 7 machine and also an Ubuntu 14 running a complete Atlassian stack. On a reference bare-metal 2008 R2 box, performance was good with a single X5650 and 24GB of RAM. I deployed to my SSD, and that was when I noticed something. The on-board SATA runs through an MCP55 Pro… well, our boards actually run the nForce Pro 3600 chipset; NVIDIA called several chipsets by the MCP name. It has only generic, standard drivers, and they suck. I was bummed about this, but glad I tested. I could use another controller card for the SSDs, and indeed that is where I ended up, but that's getting ahead of myself.

Hyper-V 2012 R2

IMG_0199

New to me, but it might have better-supported drivers. Indeed, performance was better; however, I would need to make a one-way conversion of all my images to migrate them up. There is no reverse on that, and while I could work from a backup, I dev on these, so not ideal. I/O was much faster in 2012; why, I do not know. I decided that I may come back to 2012 R2, but that I would try XenServer next.

XenServer 6.2

Why not 6.5 should be the first question. The answer is that the "SAT2-MV8" is also the Marvell MV88SX6081 8-port SATA II PCI-X controller, and that is supported in 6.2, at least according to the HCL. After an install I did a bit of digging into why I couldn't find my disks. The implementation of this controller seems to have many issues. This was when I decided to take a closer look at the IOMMU capabilities we have in our SC846 server, and what it led me to was shocking. Kind of. More on that in a bit.
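For anyone debugging the same "where are my disks" situation, it helps to first confirm the SAT2-MV8 is even enumerating on the PCI bus before blaming the driver. Here is a minimal sketch that scans sysfs for Marvell's vendor ID (0x11ab); the 0x6081 device ID for the 88SX6081 is my assumption based on the Linux sata_mv driver's ID table, so treat it as a hint rather than gospel.

    import glob

    MARVELL = '0x11ab'      # Marvell vendor ID
    MV88SX6081 = '0x6081'   # assumed device ID for the 88SX6081 (per sata_mv)

    for dev in sorted(glob.glob('/sys/bus/pci/devices/*')):
        vendor = open(dev + '/vendor').read().strip()
        device = open(dev + '/device').read().strip()
        if vendor == MARVELL:
            tag = ' (looks like the SAT2-MV8)' if device == MV88SX6081 else ''
            print('%s  vendor=%s device=%s%s' % (dev.split('/')[-1], vendor, device, tag))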

Windows 7 x64

Works GREAT, except for the graphics adapter and sleep! It is snappy, the controllers with the SONNET drivers are fast, and even with Fernando's modified, repacked chipset drivers it works great. If you run Windows 7 on this machine, it is a great choice. No doubt paired with FlexRAID etc. it can be a fine option. I am even going to show you how to get a full-sized PCIe x16 card working in this board. It does have great potential along these lines of use. I then spun up a VirtualBox VM and again was like… whoa, that is bad. It was just a straight LEMP image, but performance was, let's say, very not good. Throwing resources like RAM and cores at it made no difference. In the end this wouldn't be a good choice, as I need to be able to run a few test VMs with a CentOS 6.5 image to do some specific things very fast. VirtualBox was a no-go. At this point I am hitting enough constraints to begin to seriously question any possible dual usage for the machine.

Sleep does not work in Windows 7, however, again due to the MCP55 chipset. S3 support is not possible, and hibernation is about the only soft-off you can get. In Windows, look into the powercfg options to explore more on your own here.
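As a starting point, powercfg /a is the command that lists which sleep states the installed drivers and firmware will actually allow. Running it straight from an elevated prompt is the normal way; the tiny wrapper below (just a sketch, assuming Python happens to be installed on the box) does the same thing if you want to fold it into a larger check script.

    import subprocess

    # Ask Windows which sleep states (S1-S3, hibernate, hybrid sleep) are
    # available and which are blocked by the current drivers/firmware.
    output = subprocess.check_output(['powercfg', '/a'])
    print(output.decode(errors='replace'))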

Quick side note… Just be sure to read about the usage, and specifically the node-lock and hardware-lock parts, of FlexRAID before you buy it. You can't just go jumping around between machines and with various drives in it; you will need new license(s). Plus I personally lost some data with FlexRAID in the past, and since I am already thinking about a new mobo, tossing money in that direction doesn't solve anything. Plus I need one VM that can run at full speed.

CentOS 6.7 amd64

I could rave on about it, but trust me, it is awesome. The devices work like a charm, with amazing dd and UnixBench scores considering the age. Totally able to crush my database operations. MariaDB sings with RAM > 64GB. So I knew that, at its heart, this was going to work for that alone. Then I got curious, installed KVM (qemu-kvm), copied the exact same image over to it, and hit it again. It was the same performance. I couldn't see a difference in my testing, which pointed me to something.
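For anyone who wants to reproduce the kind of sequential-write number I was eyeballing with dd, here is a minimal sketch of the same test in Python: write a few gigabytes, fsync, and divide by wall time. The target path and size are placeholders; point it at the array or SSD you actually care about.

    import os, time

    TARGET = '/mnt/test/ddtest.bin'   # placeholder: a path on the filesystem under test
    BLOCK = 1024 * 1024               # 1 MiB writes
    COUNT = 4096                      # 4 GiB total

    buf = b'\0' * BLOCK
    start = time.time()
    with open(TARGET, 'wb') as f:
        for _ in range(COUNT):
            f.write(buf)
        f.flush()
        os.fsync(f.fileno())          # make sure the data actually hit the disk
    elapsed = time.time() - start
    print('%.1f MB/s' % (BLOCK * COUNT / elapsed / 1e6))
    os.remove(TARGET)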

The *nix implementation of virtualization on this SM board is drastically better than Windows, when it's supported upstream. I didn't try any other distros, but I am guessing that Ubuntu/SUSE would be fine with this chipset/controller as well.

I needed to chart out some stuff and try to figure out my optimum solution with this board. It had options, but it had constraints. Plus I had been screwing around at this point until like 4AM so I crashed.

IV. Use Cases Evolved

I knew I needed to rethink my idealized use cases now that I had better knowledge of the board, its capabilities and its limitations. One option that I had yet to consider, but have used in the past with some nice results, was container tech such as Docker.

Right off the bat, UnRaid 6 jumped into my mind. This would allow for both situations, provided I accept that I will not be able to pass PCI bus devices through to the VMs. This is specifically due to the MCP55 chipset, again the nForce Pro 3600 variant, that we are running on this mobo. A little background is needed here, I think. AMD-V and, specifically, IOMMU are what we are looking at. An IOMMU is what lets device I/O (DMA and interrupts) be safely remapped so a PCI device can be handed directly to a guest VM. Here is some good reference material:

http://developer.amd.com/wordpress/media/2012/10/48882.pdf

https://www.microway.com/download/whitepaper/AMD_Opteron_Istanbul_Architecture_Whitepaper.pdf

The second one in particular directly mentions the issue: IOMMU support for interrupts and memory is implemented in the AMD SR5690/SP5100 chipset. That does offer some options for a Socket F (1207) Opteron 2000-series processor, and a few boards even have that chipset plus PCI-X interfaces, but it is not cost-effective. You may as well upgrade to a more recent platform and everything else at that point.
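For what it's worth, you can check both halves of this from a Linux shell: the CPU side (AMD-V, the "svm" flag) and the platform side (whether the kernel actually found an IOMMU and built IOMMU groups). A minimal sketch; on this MCP55-based board, expect the first check to pass and the second to come up empty, which is exactly the limitation described above. The /sys/kernel/iommu_groups path assumes a reasonably recent kernel.

    import os

    # CPU side: AMD-V shows up as the 'svm' flag in /proc/cpuinfo.
    with open('/proc/cpuinfo') as f:
        has_svm = any('svm' in line.split() for line in f if line.startswith('flags'))
    print('AMD-V (svm) present: %s' % has_svm)

    # Platform side: if the chipset has a working, enabled IOMMU, the kernel
    # populates /sys/kernel/iommu_groups with one directory per group.
    groups_dir = '/sys/kernel/iommu_groups'
    groups = os.listdir(groups_dir) if os.path.isdir(groups_dir) else []
    print('IOMMU groups found: %d' % len(groups))
    if has_svm and not groups:
        print('CPU supports virtualization, but no IOMMU -> no PCI pass-through.')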

Given this info, I am now happy to report I am running UnRaid 6 and it is doing great. A KVM VM and some Docker containers have met the needs gap in a most effective package. While some hardware specs limit what it can do, I will cover those more in the mods section.

V. Additional Mods

So I wanted to share a few items that I did to get a littttle bit more out of my box. I love high utilization of a machine's available resources, and with the setup configured the way it currently comes from the folks at TAMS and Supermicro, we have a few limits.

Access to the 4th PCI-X slot.

IMG_0209

Modify your PCIe x8 slot to fit an x16 card.

IMG_0219

IMG_0216

VI. Power Breakdown

Suffice it to say, it is a power-hungry beast. Getting it to idle under 125W doesn't seem possible with any CPU combination.

VII. Conclusions

I have since replaced the motherboard in mine, and it works well as a file server. I wouldn't use this for an UnRaid server currently, with the legacy AMD board and Opterons. DDR2 is not power-efficient either. The case and power supply, however, are still awesome.
