
N.B. I got my server from here > http://www.ebay.com/itm/Supermicro-24-Bay-Chassis-SATA-SAS846TQ-Server-AMD-QC-1-80GHz-16GB-H8DME-2

N.B.B. I started this over on AVSFORUMS.COM but the images wouldn’t update. Massive Thread. Epic Info.

Okay, so I will give you some burn-down now that this is in production here. I also took time to read all the posts up to this point and wow. Just wow. This is an epic thread. I won't rehash too much of what was already covered unless I saw an open question that I found an answer to. So here is the agenda:
I. Tear-down
II. Build-out
III. OS Selections
IV. Use Cases
V. Additional Mods
VI. Power Breakdown
VII. Conclusions

So grab a drink and let's get this show on the road.

I. Tear-down

This came very well packed. The UPS guy handled it like it was nothing, but at nearly 70 lbs in a single block it's a disaster waiting to happen if you fumble it. Caution. The molded packaging is one of the best server shipping cases I have seen aside from what a new server arrives in. Awesome. Unboxing.

IMG_0162

It still had the protective tape on and only some very minor scratches around it. I did a quick POST check to ensure I was not dreaming. Plugged it in, grabbed my ear protection, and hit the power. 😮

 

It booted up! OMG LOUD. I have spent more than a bit of time around servers, even though I am not an admin. This is louder than a room of 2950s.

I then proceeded to flash my BIOS with the Rufus utility and the 3.5a release from SM so I could use my 2425 HE pair.

IMG_0171

Something I noticed that you may want to look out for: there was a fine graphite-looking material appearing behind the internal fan wall. Bad, I thought, yet the drive bays were clean and showed no trace of the material. I tracked it down to the rubber liners for the cable pass-throughs. Here is a pic.

server smut

I ended up removing them; as it turns out, they had been covering edges that were already rounded. The build quality on this case is very, very high. Worth well over the price paid. Plus, as an added benefit, rubbing things down with 99% alcohol to clean them is about my favorite thing in the world….:rolleyes:

IMG_0174

I also decided to make sure the entire case was clean, and rough-fit the PSU.

IMG_0176

In all, the cleanup of the sooty plastic dust took a good deal of time. An hour or so, but at least it won't happen again. I also removed everything as per the first post; those are great directions. On to the build-out!

II. Build-out

I like to workbench an item before I case it up, just to look for any problem spots and also to run my cheap thermal gun over it looking for hot spots. The southbridge is the hottest item, clocking in at about 55C. That's within range, so it is fine. I used just 2 of the cheapo AM3 CPU fans I had around from when I was doing lots of Litecoin mining. Very quiet, and they do a good job cooling the chips. The HE and EE don't get hot, so it's pretty easy.

IMG_0173

Time for more Arctic 7 it seems.

Next I replaced the rear fans with 2 SilenX 80mm ones. Very quiet 3-pin fans. They work great.

IMG_0182

Next I reused the double-sided sticky pads (green items in pic 2 above) that came with my garage door insulation kit (they gave me a lot extra and I saved them for some unknown reason….until now). On the outer rim I added window moulding along the back and the side that sits flat against the wall. That way the PSU can vibrate on its own if it needs to crank up at all without shaking the case. This turns out to be way more than is needed; I don't think the case is ever going to hit a high-fan situation.

IMG_0172

 

I then made a fan wall with 3 120mm fans and a piece of wire mesh from an antenna project I had left over.

IMG_0185

Nifty.

IMG_0207

And the finished fan wall, now with some minor attempt at bundling the SATA cable horde.

IMG_0210

It's a bunch of cables, no doubt about that.

IMG_0190

Everything went back in and yeah. That is a build. Tested again and it worked properly. I did a light check on the backplane at this point. Here is how I did that.

I took out every caddy. Then I used the reset switch several times to reboot to POST and watch the lights in the back flash red. Then I swapped groups of 3 drives in caddies in and out each time to check that the SAT2-MV8 cards functioned. All good.

The cards do work with drives larger than 2 TB as well; POST won't show more than that, but Windows and *nix will see their true size. The Additional Mods section below will have more info and pics, with cool stuff, so read on.
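If you want to double-check the real capacity from a running Linux box, here is a tiny sketch of how I'd do it; it just reads the sector count that sysfs exposes for a drive. The default device name "sdb" is a placeholder, not anything specific to this build.

```python
#!/usr/bin/env python3
# Sketch: confirm a drive's true capacity from Linux sysfs, since POST on the
# SAT2-MV8 won't display anything past 2 TB. "sdb" is just a placeholder;
# pass your own device name on the command line.
import sys

def drive_size_bytes(dev: str) -> int:
    # /sys/block/<dev>/size is the sector count in 512-byte units,
    # regardless of the drive's physical sector size.
    with open(f"/sys/block/{dev}/size") as f:
        return int(f.read().strip()) * 512

if __name__ == "__main__":
    dev = sys.argv[1] if len(sys.argv) > 1 else "sdb"
    size = drive_size_bytes(dev)
    print(f"{dev}: {size} bytes ({size / 10**12:.2f} TB)")
```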

III. OS Selections

The original intent was to run the rig as a bare-metal HV and do everything I needed inside that. I decided to do a few deploys to test functionality and things like pass-thru capabilities before committing to anything. Good thing also!

I started with Windows Hyper-V 2008 R2.

I have used this often in the past, and knew pass-through would be essentially non-existent. I was able to load the SONNET 4xi drivers from the command line and get the machine up and running. Performance, however, was bad. I loaded in a few standard Hyper-V images from Veeam: a Windows 7 machine and also an Ubuntu 14 box running a complete Atlassian stack. On a reference bare-metal 2008 R2 box, performance was good with a single X5650 and 24GB of RAM. I deployed to my SSD and that was when I noticed something. The onboard SATA runs through an MCP55 Pro… our boards actually run the nForce Pro 3600 chipset, but they used the MCP name across several chipsets. It has only generic standard drivers, and they suck. I was bummed about this, but glad I tested. I could use another controller card for the SSDs, and indeed that is where I ended up, but that's getting a bit ahead of myself.

Hyper-V 2012 R2

IMG_0199

New to me, but maybe it would have better-supported drivers. Indeed performance was better; however, I would need to do a one-way conversion of all my images to migrate them up. There is no reverse on that, and while I can use a backup, I dev on these, so not ideal. I/O was much faster in 2012, why I do not know. I decided that I might come back to 2012 R2, but that I would try XenServer next.

XenServer 6.2

Why not 6.5 should be the first question. The answer is that the SAT2-MV8 is really the Marvell MV88SX6081 8-port SATA II PCI-X controller, and that is supported in 6.2, at least according to the HCL. After an install I did a bit of digging into why I couldn't find my disks. The implementation of this controller seems to have many issues. This was when I decided to take a closer look at the IOMMU capabilities we have in our SC846 server, and what it led me to was shocking. Kind of. More on that in a bit.
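For what it's worth, before blaming the driver it helps to confirm the controller is even enumerated on the bus. A quick hedged sketch (Python, just grepping lspci output for Marvell entries):

```python
#!/usr/bin/env python3
# Sanity check: is the Marvell controller visible on the PCI bus at all
# before chasing driver issues? Sketch only; it just greps `lspci` output.
import subprocess

def find_marvell_controllers():
    out = subprocess.run(["lspci"], capture_output=True, text=True, check=True)
    return [line for line in out.stdout.splitlines() if "Marvell" in line]

if __name__ == "__main__":
    hits = find_marvell_controllers()
    if hits:
        print("Found Marvell controller(s):")
        for line in hits:
            print("  " + line)
    else:
        print("No Marvell controller visible on the PCI bus.")
```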

Windows 7 x64

Works GREAT, except for the graphics adapter and sleep! It is snappy, the controllers with the SONNET drivers are fast, and even with Fernando's modified, repacked chipset drivers it works great. If you are running Windows 7 on this machine, it is a great choice. No doubt paired with FlexRAID etc. it can be a fine option. I am even going to show you how to get a full-sized PCIe x16 card working in this board. It does have great potential along these lines of use. I then spun up a VirtualBox VM and again was like….whoa, that is bad. It was just a straight LEMP image, but performance was, let's say, very not good. Throwing resources like RAM and cores at it made no difference. In the end this wouldn't be a good choice, as I need to be able to run a few test VMs with a CentOS 6.5 image to do some specific things very fast. VirtualBox was a no-go. At this point I was hitting enough constraints to begin to seriously question any possible dual usage for the machine.

Sleep does not work in Windows 7, however, again due to the MCP55 chipset. S3-level support is not possible, and hibernation is about the only soft-off you can get. In Windows, look into the powercfg options to explore more on your own here.
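If you want to poke at this yourself, here's a minimal sketch (Python, assuming it runs on the Windows box) that just wraps `powercfg /a`, which reports which sleep states the platform thinks are available and why the others are not.

```python
#!/usr/bin/env python3
# Sketch: wrap `powercfg /a`, which lists the sleep states Windows reports as
# available (and which ones are not, with the reason). Run this on the Windows box.
import subprocess

def sleep_state_report() -> str:
    out = subprocess.run(["powercfg", "/a"], capture_output=True, text=True, check=True)
    return out.stdout

if __name__ == "__main__":
    print(sleep_state_report())
    # On this board, expect Standby (S3) to show up under the "not available" section.
```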

Quick sidenote… Just be sure to read about the usage, and specifically the node-lock and hardware-lock parts of FlexRAID, before you buy it. You can't just go jumping around between machines and various drives with it; you will need new license(s). Plus I personally lost some data with FR in the past, and since I am already thinking about a new mobo, tossing money in that direction doesn't solve anything. Plus I need one VM that can run at full speed.

CentOS 6.7 amd64

I could rave on about it, but trust me, it is awesome. The devices work like a charm, with amazing dd and UnixBench scores considering the age of the hardware. Totally able to crush my database operations. MariaDB sings with RAM > 64GB. So I knew that, at its heart, this was going to work for that alone. Then I got curious, installed KVM (qemu-kvm), copied the exact same image onto it, and hit it again. It was the same performance. I couldn't see a difference in my testing, which pointed me toward something.
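For context, here is a rough sketch of the kind of sequential-write check I ran on bare metal and then repeated inside the KVM guest. It is nowhere near a real dd or UnixBench run, and the target path is just a placeholder, but it is enough to spot a big gap between host and guest.

```python
#!/usr/bin/env python3
# Rough sequential-write throughput sketch: write a large file in 1 MiB chunks,
# fsync so the data actually hits the disk, and report MB/s. Placeholder path.
import os
import time

def seq_write_mb_per_s(path: str, total_mb: int = 1024, chunk_mb: int = 1) -> float:
    chunk = b"\0" * (chunk_mb * 1024 * 1024)
    start = time.time()
    with open(path, "wb") as f:
        for _ in range(total_mb // chunk_mb):
            f.write(chunk)
        f.flush()
        os.fsync(f.fileno())  # force the data out of the page cache
    elapsed = time.time() - start
    os.remove(path)
    return total_mb / elapsed

if __name__ == "__main__":
    print(f"{seq_write_mb_per_s('/tmp/throughput_test.bin'):.1f} MB/s")
```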

The *nix implementation of virtualization on this SM board is drastically better than Windows, when it is supported upstream. I didn't try any other distros, but I am guessing that Ubuntu/SUSE would be fine with this chipset/controller as well.

I needed to chart out some stuff and try to figure out my optimum solution with this board. It had options, but it had constraints. Plus, at this point I had been screwing around until like 4 AM, so I crashed.

IV. Use Cases Evolved

I knew I needed to rethink my idealized use cases now that I had better knowledge of the board, its capabilities, and its limitations. One option that I had yet to consider, but have used in the past with some nice results, is container tech such as Docker.

Right off the bat, UnRaid 6 jumped into my mind. This would allow for both situations, provided I accept that I will not be able to pass PCI bus devices through to a VM. This is specifically due to the MCP55 chipset, again the nForce Pro 3600 variant, that we are running on this mobo. A little background is needed here, I think. AMD-V and, specifically, the IOMMU are what we are looking at. The IOMMU is what lets bus I/O be mapped directly into guest VMs for device pass-through. Here is some good reference material >>>

http://developer.amd.com/wordpress/media/2012/10/48882.pdf

https://www.microway.com/download/whitepaper/AMD_Opteron_Istanbul_Architecture_Whitepaper.pdf

The second one specifically mentions the issue directly: the IOMMU for interrupts and memory is implemented in the AMD SR5690/SP5100 chipset. That does offer some options, since a few Socket F (1207) boards for Opteron 2000-series processors do have that chipset plus PCI-X interfaces, but it's not cost effective. You may as well upgrade to a more recent platform and everything else at that point.
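If you want to verify this on your own box, here is a small hedged sketch: on a kernel where the IOMMU actually came up, /sys/class/iommu has entries, and on this board it stays empty no matter what you do in the BIOS.

```python
#!/usr/bin/env python3
# Sketch: check whether Linux registered an IOMMU at all. If /sys/class/iommu
# is empty, device pass-through is off the table regardless of BIOS settings,
# which is the case with this MCP55 / nForce Pro 3600 board.
import os

def iommu_present() -> bool:
    path = "/sys/class/iommu"
    return os.path.isdir(path) and len(os.listdir(path)) > 0

def kernel_cmdline() -> str:
    with open("/proc/cmdline") as f:
        return f.read().strip()

if __name__ == "__main__":
    print("Kernel cmdline:", kernel_cmdline())
    if iommu_present():
        print("IOMMU is active; pass-through should be possible.")
    else:
        print("No IOMMU registered; pass-through will not work on this chipset.")
```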

Given this info, I am now happy to report I am running UnRaid 6 and it is doing great. A KVM guest and some Docker containers have filled the needs gap in a most effective package. While some hardware specs limit a few functions, I will cover those more in the mods section.

V. Additional Mods

So I wanted to share a few items that I did to get a littttle bit more out of my box. I love high utilization of a machine's available resources, and with the setup configured the way it currently comes from the folks @ TAMS and SM, we have a few limits.

Access to the 4th PCI-X slot.

  • Reason: This is helpful especially in, say, UnRaid 6, as it allows you to add a cheap extra NIC such as the awesome Intel quad-port GT for about 35 bucks, or 15 bucks for an Intel dual-port GT version. I had a Dell J1679 hanging around and am using that. UnRaid has a cool bonding mode for networking called balance-rr that, even on my cheapo switch, ups my bandwidth substantially (see the bonding check sketch after the photo below).
  • Solution: We need to relocate and extend our IPMI USB connection to a new slot. This is possible. Let's do it!
    • We need to remove the card from the powered-down system, then unplug the cables on the card. This is a proprietary USB cable made for Supermicro, so I didn't find any others with that particular end style anyplace else. That is okay, however; we don't need a new cable. We just need to modify our cable. I did this by putting my new NIC in slot 4 and the IPMI card in its slot, then maneuvering the cable around to see its friction points. I needed to shave down the bottom half of the cable's bulge.
      • Dremel cutting wheel > shave it off very slowly. There is a wire inside the bulge, probably a termination point, that you need to avoid. Cut slowly and check often and you can mostly avoid it. If you do nick it, just put electrical tape over it so it doesn't short out on anything.

IMG_0209
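Since I mentioned balance-rr above, here is a quick sketch for confirming the bond really is running round-robin once the extra NIC is in. It just reads the status the Linux bonding driver exposes under /proc/net/bonding; the "bond0" name is an assumption, so adjust it for your setup.

```python
#!/usr/bin/env python3
# Sketch: read the Linux bonding driver's status file and confirm the bond is
# in balance-rr (round-robin) mode. "bond0" is an assumption; adjust as needed.
BOND = "/proc/net/bonding/bond0"

if __name__ == "__main__":
    try:
        with open(BOND) as f:
            status = f.read()
    except FileNotFoundError:
        raise SystemExit("No bond0 found; is the bonding driver configured?")
    print(status)
    if "load balancing (round-robin)" in status:
        print("bond0 is running in balance-rr mode.")
    else:
        print("bond0 is up but not in balance-rr mode.")
```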

Modify your PCIe x8 slot to fit an x16 card.

  • Reason: We can use a regular x16 graphics card. If you use a 5450 you can do so for about 10 extra watts, and it is a known-good card for IOMMU and virt on the low-cost end. Plus I had one laying around. (This was done prior to my learning why the MCP55 and IOMMU would not play nice.) You can still use this to get a card in for direct access in, say, a Windows 7 situation. That would also allow you to use the WMC features, since the onboard ATI ES1000 will give you the vague "Video Error" message in WMC when you hit a TV tuner. I verified that the WMC TV tuner works in Windows 7 with the mod you are about to see.
  • Solution: There are a few ways to get a full-sized card in; however, I had the Dremel out already and love cutting in dangerous places. This led me to do a port cut, especially since I am good with the Dremel and don't 100% trust PCIe riser cards from back when I was Litecoin mining.
    • First create a “dam” around the x8 port. I use a flat piece of plastic cut with an X-Acto to fit around the port. I do the second port also, since we have an odd additional riser port behind PCIe slot 1.
      • Remove all cards, leaving the CPUs and coolers in place. I left the RAM in place too. Move all the SATA cables out of the way, of course.
        • I use painter's tape over the PCI-X risers to prevent gunk from getting in.
      • With the dam in place, you will cut at a 90-degree angle to the back of the PCIe riser.
        • The back of the Dremel will be toward the front of the case, right near the southbridge heatsink.
        • Do NOT cut vertically into the slot. That can lead to pin damage.
    • Go slow (on the Dremel speed) and go slow on the motions. Pivot the front end down to do the cuts. Make them slow.
      • Watch the spray of plastic from the cuts to ensure it's not going past the plastic (or paper, etc.) dam you created.
      • Be extra cautious about hitting the mobo or anything else. This is ALL ON YOU! I am not responsible for any damages.
    • Now that you have the cut, before you remove the dam, grab a shop vac if you have one and vacuum out the cut plastic pieces. DO NOT SPRAY THEM WITH AIR.

IMG_0219

  • Now we will make a jumper to tell the card that presence is detected. If you don't do this, the card won't come up at boot.
    • Here is another picture of the placement. I use a stripped single strand from a Cat5 cable and it works great.
    • IMG_0217
    • Number 1 on the top side and number 17 on the bottom. Just push them into the holes behind each pin. Triple check you have the right spots!
    • Insert your cards and you are ready to go! I am only using 5 drives right now, so I took out 2 of the SAT2-MV8 cards, but you will have clearance for them all to fit. I will add a pair of 40mm fans above the slots when I get to that point so I have a direct draw of heat out of that area. This would also be advisable with a “hot” SAS/SATA card. The SAT2-MV8 runs rather cool.

IMG_0216

VI. Power Breakdown

Soon!
VII. Conclusions

Soon!