So it is more than two years since I posted my first blog about the E300-8D and why I bought SuperMicro over an Intel NUC. It’s about time for an update and an upgrade of the home lab.
How has it been?
Before I jump into all the new stuff, let me look back at the last two years with the SuperMicro servers in the lab. They have been excellent: no issues at all, and I have flashed the BIOS without a single hiccup. IPMI is a lifesaver, not because I have had problems, but because it is just so nice to be able to do something stupid network-wise and still log in to the hosts and troubleshoot. My only complaint about the SuperMicro is still that I need to buy a license to use all the IPMI features, such as flashing the BIOS the easy way instead of having to create a USB stick and plug it in. It is the same for all server vendors, like HPE and Dell. Nevertheless, it is just another cash cow, one you always want to have.
There are a few things that I have changed. Let's start with a list of items that I have bought.
- 6 x 32 GB Samsung DDR4 PC2666 REG (M393A4K40BB2-CTD)
- 2 x WD Blue 3D NAND SATA SSD 2TB 6 Gb/s M.2 2280 (WDS200T2B0B)
- Intel NUC NUC8i3BEK2 – WHAT!?!
- Intel SSD Solid-State Drive 660p Series 512GB M.2 PCI Express 3.0 x4 (NVMe)
- 2 x 3PORT M.2 NGFF SSD CARD ADAPTER PCI EXPRESS 3.0 M.2 NGFF CARD
- 2 x RSC-RR1U-E8 RISERCARD 1HE | PASSIVE PCI-E X8 (RSC-RR1U-E8)
I also bought Noctua fans and a micro SD card, both of which turned out to be mistakes.
Memory prices have been high ever since I bought the 32 GB of memory in each of my E300-8Ds. As a result, I never found the cash to buy the 128 GB each E300-8D can handle, until now! Since I already had two sticks of memory from another vendor, I grouped those together with the new Samsung memory, and there have been no issues. Mixing vendors was one of my biggest concerns, but I took a leap of faith and it worked with no issues. So the lab now has 256 GB of memory, and it is still not enough. Someday, just someday, I hope that will be the case; until then this will have to do 🙂
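Just as a sanity check on the numbers above (a quick sketch, assuming the six new Samsung sticks plus the two existing sticks are split evenly across the two E300-8Ds):

```python
# Memory totals for the lab -- stick counts taken from the shopping list above.
STICK_GB = 32
new_sticks = 6       # Samsung M393A4K40BB2-CTD
existing_sticks = 2  # older sticks from another vendor
hosts = 2            # two E300-8Ds

total_gb = (new_sticks + existing_sticks) * STICK_GB
per_host_gb = total_gb // hosts

print(total_gb)     # 256
print(per_host_gb)  # 128
```

That is the full 128 GB each E300-8D supports, and 256 GB for the lab as a whole.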
I needed to be able to run a vSAN cluster without too much trouble when upgrade time comes around, and without having to rebuild the home lab. That made me do two things.
1. Buy an Intel NUC, as the SuperMicro server is just too expensive. This will be my single-node “management cluster”, with an Intel 660p M.2 and 32 GB of memory. All it does is host the vSAN witness node and vCenter, giving me 256 GB for all the fun stuff. An NSX Manager will most likely also find its way over there, but for now it is just two VMs.
2. Storage upgrade! I was running hybrid mode, with a SATA cable to an external HDD. Not optimal, and slow. When I first bought the E300-8Ds I also bought some ASUS PCIe M.2 adapters, but they are just one or two millimeters too tall to fit inside the case, so I dropped them. I had a chat with a Canadian company, but it was WAY too much trouble buying from them; they asked me to fax stuff to them!!!
Finally I found the StarTech PEXM2SAT32N1, a PCIe card that holds one PCIe M.2 NVMe drive and two SATA M.2 drives. Just perfect for my needs, with only one problem: it was also too tall. Luckily, SuperMicro has a riser for the PCIe slot, making this the perfect solution. I could get more fast storage into the E300-8D, and I can also expand the storage should I need to at a later time.
I already had a Samsung 960 Pro as the cache tier, so I looked at something similar, but found the options too expensive. So I settled for the Western Digital 2 TB Blue M.2 drives. This should give me a decent amount of storage when combined with vSAN.
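Roughly how much of that ends up usable? A back-of-the-envelope sketch, assuming one 2 TB capacity drive per host in a two-node cluster with vSAN's default RAID-1 mirroring at FTT=1, and ignoring vSAN's own metadata overhead and slack-space recommendations:

```python
# Rough vSAN usable-capacity estimate -- assumptions, not vendor-published numbers.
hosts = 2
capacity_drives_per_host = 1
drive_tb = 2.0           # WD Blue 2 TB capacity drive

raw_tb = hosts * capacity_drives_per_host * drive_tb  # raw capacity-tier space
ftt = 1                  # failures to tolerate
copies = ftt + 1         # RAID-1 mirroring keeps two copies of each object

usable_tb = raw_tb / copies

print(raw_tb)     # 4.0
print(usable_tb)  # 2.0
```

So about 2 TB of effective space in this setup, before overhead; still a big step up from a single external HDD over SATA.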
Now for some pictures
Not much to say here. I had bought a micro SD card that I thought I could install ESXi on, but the installer would not detect it, so I used a USB stick instead. I did not install onto the Intel 660p, as I would like as much space as possible available for VMs; after all, it is only 512 GB of storage, so it will be gone fast.
This is a very cheap card; I paid less than 50 USD per card. Very cheap if you ask me. The riser, which I will come to in a moment, was less than 30 USD.
One NVMe M.2 drive, which uses the PCIe port on the motherboard, and two SATA M.2 drives, which have to be connected to the motherboard via one SATA cable each. Power is served from the PCIe port.
Not much else to say; all the bits are there, including a small screwdriver used to change the backplate of the card. Oddly, the screwdriver does not fit the screws that hold the M.2 drives in place. Why ship a screwdriver with the package if it only solves one problem?
RSC-RR1U-E8 RISERCARD 1HE | PASSIVE PCI-E X8 (RSC-RR1U-E8)
I started out lazy. I did not want to put it all together just to test whether the StarTech card worked. So here you can see how the riser looks and fits when only half assembled.
Note that you need to break parts off the riser to make it fit. I had to break even more off to make it fit inside the riser card bracket, but more on that in a moment.
Riser card bracket
This comes with the E300-8D by default. First I mounted the riser card in the bracket, and then I mounted the StarTech card with the SATA M.2 drives. I have tested it with two SATA drives and one NVMe drive on the card, plus one on the motherboard, and ESXi detected them all 🙂
The SATA cabling is a bit of a hassle to fit. A SATA cable with both ends L-shaped would, I think, be better. In any case, this works as well.
The only issue
The SATADOM, which I use for ESXi, touches the riser card bracket. Is this an issue? I do not know, but just to be on the safe side I have moved the SATADOM to the other dedicated slot right next to it. So be warned: if you want to run multiple SATADOMs, you might need a SATA cable, or should first verify whether this is actually a problem.
Lastly, this is what 128 GB of memory looks like in a small form factor!
That is all for this time. Thanks for reading this far!
PS: the Noctua fans did not work out, for multiple reasons.
- I bought the wrong ones; this is my mistake.
- They are too small compared to the ones already in the chassis.
- I’m not a fan of the rubber fittings that came with the fans. They broke too easily, so I used zip ties instead.
- Their RPMs are way too low; the E300-8D gets hot with them in, around 80–90 degrees Celsius under “normal” load. I have not tested at 100% load.
The first picture shows the new fan, the second shows it mounted, the third the old ones, and the fourth old vs. new.