Part 2: SuperMicro SuperServer E300-8D deep dive



Part 1: The NUC killer: SuperMicro SuperServer E300-8D
Part 2: SuperMicro SuperServer E300-8D deep dive


In the last post, I wrote about the SuperMicro SuperServer E300-8D in comparison to Intel's NUC series, but also covered networking and the reason for me buying into SuperMicro over NUC. In this next part of the series, I am going to dig into the E300-8D's technical specifications, showcase its good and bad sides and talk about noise and power consumption.

A closer look at E300-8D from the outside

Before I dig in deep, let me just sum up the specifications of the E300-8D.

SuperServer E300-8D
CPU: Intel Xeon D-1518 2,2GHz
Cores/Threads: 4 cores / 8 threads
Memory: 128GB (4x 32GB) 2133MHz DDR4 ECC RDIMM
Network: 6x 1GbE (RJ45) + 2x 10GbE (SFP+)
Disk: 4x SATA3
  1x M.2 – M key 2242/2280/22110 – PCI-E 3.0 x4
  1x mSATA (Mini PCI-E) – PCI-E 2.0 x1
Peripheral: 2x USB 3.0
Expansion: 2x PCI-E 3.0 x8
SATADOM: Support for 2x SuperDOM
IPMI: Yes, via dedicated Ethernet port
PSU: 12V 7A DC, brick style (84W in EU – US might only be 60W as per documentation)
Form factor: Mini-1U
Dimensions: 25,4cm (W) x 4,3cm (H) x 22,6cm (D)


Those are the overall hardware specifications; now let us take a look at the hardware itself. First off is the rear of the E300-8D, via a drawing that I have borrowed from the SuperMicro documentation.


This diagram clearly shows how the ports are laid out at the back of the E300-8D. What it does not show is that next to the VGA port is a slot for a horizontal PCI-E card. In order to use the slot, a riser card is needed.


This is not part of the default package but has to be purchased separately. The bracket for holding the PCI-E card in place in the horizontal slot does, however, come as part of the system.



The front of the E300-8D is quite simple: five LEDs, a reset button and a power button. The black grille hides two fans and the option to install a third one. I will talk about the fans later on, but for now that is all you need to know about the front of the E300-8D. Now let me show you how the E300-8D looks inside.



Power supply

The power supply, or PSU, is not internal, but comes with the E300-8D as a brick style PSU. It is quite large; think of the huge ones some people carry around with their laptops. The size of the PSU brick is 16cm (l) x 5,5cm (w) x 3cm (h), so not small. The PSU can be mounted on the side of the E300-8D, so it fits nicely into a rack with the supplied rack mounting brackets. The specification seems to suggest there are two PSU options or regions: a 60W and an 84W. The one I got is the 84W option. Whether that means this is the EU option (240-volt) and the 60W option is the US option (120-volt), I cannot say. All I know is that it should be plenty of power for what I need the E300-8D to do.

One thing is what it can draw of power; another thing is what it does draw. To measure the power draw of the E300-8D I used my Eaton ePDU G3 (EMIB10), which only does input metering, so I cannot see if there is a big difference between the two nodes.

Single node E300-8D power usage:
Idle – not powered on: 8W
During POST/boot: 45-49W
Idle – ESXi booted (ACPI C and P states enabled): 40W
Peak usage: 59,5W


Two node vSAN E300-8D power usage:
2 VMs running, measured after boot: 90W
After 4 hours, 3 VMs – steady state: 80-85W
After 4 hours, peak usage: 100W

There are a few things that can make the power usage go up or down. I only have one RAM module in my E300-8D, and with an expected power usage of 5 watts per module, the draw would increase as more RAM is added. The two other variables would be the number of storage devices and the usage of the PCI-E slots.

A closer look at E300-8D from the inside

As they say, a picture is worth a thousand words. Therefore, I will start again by showing you a block diagram of the system board and how everything is tied together.


Note that there seems to be an error on the system block diagram. In the documentation the M.2 slot specification is PCI-E 3.0 x4, but in the block diagram it only says x3 lanes. One PCI-E 3.0 lane can transfer up to 1GB/s (the real number is up to 985MB/s, but I would like to keep the math simple). Therefore, the difference is whether the M.2 slot can transfer 3GB/s or 4GB/s. Some might think that this is a small difference, given that most home labs do not need that kind of performance, but in an enterprise this can be the difference between paying a million dollar fine or not, or even just beating the competitors in a first to market race.
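To put numbers on the lane math, here is a quick sketch (the 985MB/s figure is the approximate usable bandwidth of one PCI-E 3.0 lane after encoding overhead):

```python
# Approximate usable bandwidth of one PCI-E 3.0 lane, after 128b/130b encoding
GB_PER_PCIE3_LANE = 0.985  # GB/s

def pcie3_bandwidth(lanes):
    """Theoretical throughput of a PCI-E 3.0 link with the given lane count."""
    return lanes * GB_PER_PCIE3_LANE

print(f"x3 link: {pcie3_bandwidth(3):.2f} GB/s")  # roughly 3 GB/s
print(f"x4 link: {pcie3_bandwidth(4):.2f} GB/s")  # roughly 3.9 GB/s
```

A read result of around 3,5GB/s in ATTO is only possible on the x4 link, so the documentation, not the block diagram, appears to be right.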

I am not going to pretend that I have done many storage benchmarks, but then again this is not a benchmark as much as it is a question of whether the M.2 slot has three or four lanes at its disposal. Look at the dead simple (read: I cannot screw this up) benchmark that is ATTO. It clearly shows that read transfers come very close to 3,5GB/s, which is close enough to the theoretical transfer speed of up to 4GB/s and way beyond the 3GB/s mark that an x3 lane configuration would have given the M.2 slot. By the way, the tests here were done with a Samsung 960 Pro 1TB M.2 drive, but more on that in a later blog post.

The numbers seem to suggest that the SuperMicro E300-8D and the Samsung 960 Pro deliver what is to be expected. A quick round of Google-fu verifies that the Samsung 960 Pro maxes out around 3,5GB/s in ATTO read tests from independent review sites.

There can never be too many USB ports

That is M.2 sorted; now let us move on to USB, which is one of the areas where I had complaints in the first blog post. The “problem” here is that there are only two USB ports on the back of the E300-8D. This can be a problem during installation of a Windows OS, where keyboard navigation can be somewhat cumbersome.

There is no CD-ROM drive (nor should there be one) or any other way of installing an OS than by the use of one of the USB ports. I do the OS installation via a USB thumb drive, which means that I only have one port left. With a wireless mouse/keyboard combo that shares a single dongle, you could have both mouse and keyboard working through that one remaining USB port. I do like to use a wireless mouse and keyboard when working with the lab, but mine need a USB dongle each, which means that when doing an OS installation, I am stuck with only the keyboard.

It would have been nice to have more USB options on the back, but not all hope is lost: if you really want more USB ports, there are two USB headers on the motherboard. These you can use to expand the number of available USB ports internally. You might also be able to get them externally, by either fitting two USB ports in the spare RJ45 expansion opening on the back of the E300-8D or by using a PCI-E slot bracket, which means giving up the option of using the PCI-E slots for other expansions.

Lastly, you could consider all this needless, as the E300-8D has an IPMI interface, making the need for USB ports altogether redundant. I do, however, like the fact that it has the option to use USB, but I must also say I am using IPMI more and more every day. When I get the time, I would also like to play with the IPMI's API interface and hopefully make the need for interactive use of IPMI altogether a thing of the past. That is yet another thing I will have to reserve a future blog post for.

Enjoy the silence

One of the areas where I have seen the most resistance towards SuperMicro's SuperServers, be it the E200-8D, E300-8D or any other for that matter, has been the use of fans and, more notably, the noise they produce. I can only agree that this is the most profound weak point of SuperMicro's SuperServer line seen from a home lab perspective. They are not silent! To try to give you an idea of the level of noise the two fans in an E300-8D produce, I measured it with an app on my phone, one meter away from the source of the noise. Note that the test was not done with professional equipment, but I still believe it will give you a reasonable idea of the expected level of noise.

The tests were done in my home office, where at the time of testing the ambient noise level was between 15-33dB, with the needle being in the area of 20-25dB most of the time. I do live in an urban area and, as such, noise from people in transit is expected to be heard from time to time.

So how do these numbers compare to the noise of an E300-8D being powered up? Well, there are two stages of noise: the noise produced while the server is booting up, and then the stage when OS drivers are being loaded and the OS starts to control the speed of the fans and, therefore, also the noise they make. During boot, the noise levels were between 42 and 43 decibels, and as soon as ESXi had loaded, they dropped to 29-30dB. Still a lot louder than ambient room noise, but much less than a lot of other IT equipment.

I would have loved to be able to provide a comparison to a Cisco Catalyst switch, as it is one of the noisiest devices I can think of. Seeing as I do not have a Catalyst switch at hand, I went with what I have: the Ubiquiti ES-16-XG. Much like the Cisco Catalyst switches, it only seems to have one noise level and hardly ever changes fan speed. The ES-16-XG produces a noise level of 33-37dB. Clearly noisier than the E300-8D.

Even though some of the numbers here might seem very close to each other (25, 30, 37dB is hardly a wide spread when you just look at the numbers), there is a very good reason for that: the decibel scale is not linear. Every three decibels added doubles the sound power, and a perceived doubling of loudness takes roughly ten decibels. Just to give you an idea of what normal noise levels are, a conversation is estimated to be at around 65dB, and me sitting in a quiet room typing this blog post produces between 28-40 decibels of sound, just from typing on the keyboard.
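The doubling rule falls straight out of the definition of the decibel; a small sketch:

```python
import math

def db_change(power_ratio):
    """Decibel change corresponding to a given ratio of sound power."""
    return 10 * math.log10(power_ratio)

def power_ratio(db_delta):
    """Sound power ratio corresponding to a decibel difference."""
    return 10 ** (db_delta / 10)

print(f"doubling the power adds {db_change(2):.2f} dB")  # about 3 dB
print(f"a 37dB switch vs a 25dB room: {power_ratio(37 - 25):.1f}x the power")  # about 16x
```

So although 25 versus 37 looks like a small gap on paper, the switch is putting out roughly sixteen times the sound power of the quiet room.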

There are two 40mm fans in the E300-8D from the factory, but there is room for another one – if you need to cool a PCI-E card, I presume, or if you opt to change the fans to a low noise option, this might also come in handy. The motherboard itself has headers for six fans, so there might be other ways to cool the system down that I have not thought of; at least you will not be held back by the system if you want to explore other options.

My E300-8D is not going to be in a room where the noise is going to be a problem. Therefore, I do not plan to change the fans, but if needed, low noise fans will not cost you more than $5-10 depending on maker, airflow and decibels. After all, it is a small price to pay for a happy wife.

Ethernet is Ethernet

On the other hand, there might be more to it… The E300-8D comes packed with an array of Ethernet ports, in multiple speeds and connectivity options. I will quickly walk you through the goodies we are getting with the E300-8D.

To begin with, you get a total of eight Ethernet ports, driven by three different Ethernet controllers. Two RJ45 ports are provided by the Intel I210 controller; these ports are capable of up to 1Gb/s of transfer speed. The I210 does not support SR-IOV, but can be used for passthrough if needed. The next controller is the Intel I350, yet another 1Gb/s network controller, but whereas the I210 is a dual port controller, the I350 is a quad port one. The I350 supports SR-IOV as well as passthrough. Last up is the most interesting network option, at least for me: it gets recognized as an Intel X552 dual port network adapter.

Whereas the other two controllers were 1Gb/s and RJ45 based, the X552 is a dual 10Gb/s network card and does not use the RJ45 connection type, but rather the more enterprise friendly option of SFP+. SFP+ allows us to choose which kind of network cable technology we are going to use. It makes things a little more flexible and, in my case, also a lot cheaper.

There is only one problem with the Intel X552 network adapter: the network driver that ESXi 6.5 ships with by default is of an older date and does not support the newer Intel X552. There are two ways to fix the problem: install the newer driver after the ESXi installation has finished, or create a custom image in which you add the newer driver. I have opted to create my own custom image, as it is very easy to do and I can then have an automated and standardized install procedure for ESXi installations. I expect that I will have to reinstall my environment from time to time, and this way it is very easy to do.

The driver you need can be downloaded from – the version I use is available here:

Inside the zip file you downloaded is the offline bundle, which you are going to need to create your own custom image. This is the PowerCLI script I use to create custom images:
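A sketch of the script, using the standard Image Builder cmdlets – treat the depot URL, the bundle filename and the driver package name (here ixgben) as placeholders you will need to adjust to match your download:

```powershell
# Add the VMware online depot and the downloaded driver offline bundle
Add-EsxSoftwareDepot https://hostupdate.vmware.com/software/VUM/PRODUCTION/main/vmw-depot-index.xml
Add-EsxSoftwareDepot .\ixgbe-offline-bundle.zip

# Pick the latest ESXi 6.5 standard profile - change the name filter to build another version
$base = Get-EsxImageProfile -Name "ESXi-6.5*-standard" |
    Sort-Object CreationTime -Descending | Select-Object -First 1

# Clone it and add the newer driver package from the offline bundle
$custom = New-EsxImageProfile -CloneProfile $base -Name "$($base.Name)-custom" -Vendor "Custom"
Add-EsxSoftwarePackage -ImageProfile $custom -SoftwarePackage ixgben

# Export a bootable ISO, and as the last step an offline bundle for reuse
Export-EsxImageProfile -ImageProfile $custom -ExportToIso -FilePath .\ESXi-custom.iso
Export-EsxImageProfile -ImageProfile $custom -ExportToBundle -FilePath .\ESXi-custom.zip
```

In this sketch it is the Get-EsxImageProfile name filter that decides which ESXi version gets cloned, and the last line is the one that exports the reusable offline bundle.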

Note that this script will always create an image of the latest version of ESXi 6.5; if you need to build another version of ESXi, you need to change the name in line three. In addition, the last line in the script creates an offline bundle, which can be reused and modified if needed. If you see no use for that, just remove it and be happy with the ISO it provides you.


Faster, Bigger, Storage

Storage is always a pain point: either it is too little or it is too late to the party. In a home lab this is no different. I bought into the E300-8D in the hope of being able to use some of the expansion options that could help me get a decent home lab up and running. It turns out I was not all wrong, but there are still some bumps in the road. With that, let me dig into the storage options of the E300-8D.


There are in total four physical SATA ports on the motherboard. On the block diagram from earlier, you can see the M.2 and mini PCI-E slots mentioned as SATA5 and SATA6 – my guess is for backwards compatibility reasons. What is very nice to know as well is that SATA0 and SATA1 support SATADOM (SATA Disk On Module), which means there is a small power connector conveniently located close to those SATA ports. As I stated just before, there are also an M.2 and a mini PCI-E/mSATA interface and two PCI-E 3.0 x8 slots, but more on those in a moment.

That was the quick overview; now let me talk about the different options, starting with SATA. Of the four SATA ports, two are located near the edge of the motherboard and two nearer the center. The edge SATA ports are regular SATA ports, whereas the centered ones support SATADOM. As I use a SATADOM in my setup, I opted to use one of the edge SATA ports for the 2TB hard disk I had in the lab.

I had a slight issue with the supplied SATA cable, which is L shaped. In order to fit it into the SATA port, I had to plug it in so it turns away from the motherboard. With a tight build like this, it means I had to force the cable down into the SATA socket and bend it, touching the edge of the case, back in towards the motherboard. Not ideal. I guess I should just use one of the ports in the middle of the motherboard.

A quick note: if you need more than one SATADOM in the E300-8D, a different cable might be better. Also note that there is only one Molex power plug on the motherboard, making it harder to have multiple drives in the E300-8D. Not to repeat myself, but it is a tight build, so I am not sure you would be able to find space for any more drives in there either.


The one sure option you have is to use a SATADOM. A SATADOM is a Disk On Module with a SATA interface. Think of it as a small USB SSD thumb drive, but instead of a USB interface it has a SATA interface. Simply put, it is a damn small SSD, which plugs right into the motherboard. One of the downsides to SATADOM is that it needs a special power plug, as you cannot draw power from a SATA interface. SATA ports SATA0 and SATA1 have such a power plug within reach of each SATA port, making it possible to have two SATADOMs installed in a system and thereby making a RAID 1 configuration a possibility.

SuperMicro's SATADOM is called SuperDOM and comes in the following capacities: 16GB, 32GB, 64GB and 128GB. There are multiple reasons why I chose to use a SuperDOM in my build. First off, I am using vSAN in my lab. vSAN does a lot of writing to the boot device (think: vSAN traces, syslog and core dump) and therefore it is not recommended, if you want to use vSAN, to use a USB drive as the boot device, which is what I normally would have done.

This all has to do with the write endurance of the boot device. The 64GB SuperDOM I bought has a write endurance of 68 TBW (TeraBytes Written), or about 1 DWPD (Drive Write Per Day). VMware recommends for vSAN 6.2 or newer to have a boot device with an endurance of 384 TBW. My choice is clearly undersized, but then I do not expect to be hammering my lab as hard as a production environment. I am hoping I am right on this. If you want higher endurance, a bigger device is needed. SuperMicro's 128GB SuperDOM has an endurance of 158 TBW – still less than half of what is recommended. I have not shopped around here, so I do not know if a SATADOM that meets the recommendation exists.
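The relationship between TBW and DWPD is simple arithmetic; a quick sketch, assuming the usual three-year warranty period for this class of device (the warranty length is my assumption, not from a datasheet):

```python
def dwpd(tbw, capacity_gb, warranty_years=3):
    """Drive Writes Per Day implied by a TBW rating over the warranty period."""
    days = warranty_years * 365
    return tbw * 1000 / (capacity_gb * days)

print(f"64GB @ 68 TBW:   {dwpd(68, 64):.2f} DWPD")    # close to the rated 1 DWPD
print(f"128GB @ 158 TBW: {dwpd(158, 128):.2f} DWPD")
print(f"128GB model covers {158 / 384:.0%} of the 384 TBW recommendation")
```

The numbers line up with the ratings, and they also show that even the biggest SuperDOM only covers about 41% of VMware's recommendation.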


The other reasons, besides endurance, that I chose a SuperDOM are size and capacity. Size, meaning it fits right in the E300-8D, only takes up a SATA port and really no space at all. The last one, capacity, has to do with another VMware recommendation and with looking at use cases. Running vSAN, there are requirements to how big the core dump partition should be, dependent on how much memory is in the system and how big the cache tier is. I could have gotten away with only a 16GB SuperDOM in terms of vSAN, but I also test other OSes, hypervisors and storage solutions, and as such, I think 16GB might be a bit too small for some of the use cases.

The SuperDOM is not as cheap as a USB drive, but then it is much faster. I managed to install ESXi in 7 minutes including all the manual inputs. I have not timed it when I do it via kickstart, but it will be slower, as there are multiple reboots involved.

If you want to read more about what to consider when choosing the boot device for vSAN and ESXi, have a look at these VMware articles:

  • ESXi coredump device usage model
  • SSD and flash device use cases
  • SSD endurance criteria
  • SSD selection requirements
  • VMware Virtual SAN Design and Sizing Guide – boot device considerations


Mini PCI-E

Mini PCI-E, or mSATA as it is also known, is a somewhat dying standard. It never really got off the ground before it was killed by the M.2 standard, which has more use cases when talking storage options. I believe you can still use the Mini PCI-E slot for a WiFi adapter if needed, but as a storage option there are not a lot of solutions out there. One of the few is the Samsung 850 EVO 1TB, which comes as an mSATA option. I have not tested this option yet. I think the 850 EVO would be a good fit for an all-flash vSAN option.

One thing you have to keep in mind, though, is that the interface is not the fastest. It uses the PCI-E 2.0 standard instead of 3.0, which is almost half the speed of PCI-E 3.0 (500MB/s vs 985MB/s per lane). That would not be a problem if it had enough lanes to compensate for the lower speed per lane, but it does not. The Mini PCI-E interface uses only one lane (x1) and is therefore limited to 500MB/s. Do not get me wrong, this is still fast for a home lab, but not as fast as an M.2 interface, which usually is PCI-E 3.0 x4, meaning the theoretical throughput is close to 4GB/s, or eight times faster than mini PCI-E. As a capacity tier, I still think it is a valid option. Time will tell if I get to test this out in real life.
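Per-lane arithmetic makes the gap between the two interfaces concrete:

```python
# Approximate usable bandwidth per lane, in MB/s
PCIE2_LANE_MB = 500   # PCI-E 2.0
PCIE3_LANE_MB = 985   # PCI-E 3.0

msata = 1 * PCIE2_LANE_MB   # Mini PCI-E / mSATA: PCI-E 2.0 x1
m2 = 4 * PCIE3_LANE_MB      # M.2: PCI-E 3.0 x4

print(f"mSATA: {msata} MB/s, M.2: {m2} MB/s, {m2 / msata:.1f}x faster")  # roughly 8x
```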



M.2

M.2 is the new kid on the block. It is a well defined standard: only the length of the M.2 modules differs, the rest is more or less like a memory stick, making it a very versatile option for servers, PCs and laptops alike. It is simple to install and fast if it is built upon the PCI-E 3.0 standard, but it is not all good! There are different interface types (B key or M key), which are not compatible. An M.2 module can have NAND modules on one side or both, which, depending on clearance, can be a problem. Then there is legacy (SATA) support vs NVMe support, and lastly the length of the M.2 module versus how much available space there is on the motherboard.

The E300-8D can use full-length M.2 modules of up to 110mm, or as it is called in M.2 terminology, 22110. It supports these three lengths: 2242/2280/22110. The M.2 slot uses the M key type interface. It is stated that the M.2 interface has SATA support for legacy devices; I have not tested this. I have only tested with the Samsung 960 Pro M.2 NVMe module, and it works and is recognized as an NVMe device. Given the fact that SuperMicro states it supports SATA, it might mean that it is using either the AHCI stack or just the AHCI driver. That I do not know, but the documentation seems to suggest it is using the SATA stack, as it is muxed with SATA4.


PCI-E options

Given the fact that there is basically no room in the E300-8D that could fit a lot of storage devices, I bought two “Asus Hyper M.2 X4 Mini” cards. The Asus Hyper card is a PCI-E to M.2 adapter. As the E300-8D is a tight fit for any SSD or HDD, I hoped the Asus Hyper M.2 X4 Mini card would provide me with some extra storage options. It is not a fit. Lesson learned.

I posted the above image on Twitter, and Paul Braren reached out with a link to Amfeltec. I had a look at them. They are a Canadian based company, which makes many different M.2 adapters. I tried to find a local supplier but came up empty handed. I wanted to know if they could be a viable option for my home lab, so I reached out to them, and to my big surprise, they were not expensive and did even ship to me across the pond. The price I was quoted was 62 USD, which is a little bit more than what I paid for the Asus Hyper, and on top of that comes shipping costs. Therefore, I would have to buy a few in order to keep the price down.

You can take a look at the Amfeltec M.2 adapter for PCI-E, here.

I have not bought any of these; therefore, I cannot speak to the use case or whether there is a performance penalty. I expect there is not, and that the adapter is transparent to any OS.

What I can do is touch upon why I think this is interesting. First off, there are two PCI-E slots, which means two Amfeltec adapters per E300-8D would be possible. If I add up all the onboard options with the PCI-E options, it now becomes possible to have three M.2 storage devices and one Mini PCI-E/mSATA device. That means I could have one as a vSAN cache device and three as storage capacity. Albeit at a small premium, but compared to the NUC, this is still way ahead of anything it has to offer.

Of course, the PCI-E slots could also be used for other options. There are two things you have to think about: if you do not use a riser card, height is a big limitation, and even with a riser card you still will not be able to use just any card – only half height, and only if it does not interfere with the placement of an SSD/HDD in the case. Using M.2 will fix that problem, if needed.


With that, I leave you for now to make your own decisions on how to put together your home lab. I know this has become quite lengthy. Remember, if you found it to be of good use, others might as well, so please feel free to share it with peers. Until next time, take care and thanks for reading all 4500 words.




  • eWhizz

    Use an Apple keyboard. They sensibly have a USB hub built in, for a mouse or other device to daisy chain. They also comfortably fit on a shelf in a rack.

    • Michael Ryom

      Cool, thanks for sharing. I have a crush on my 10 year old Logitech DiNovo keyboard, which I just can't seem to let go of. I have also contemplated buying a ThinkPad keyboard, as it could also fix the problem while being a small and versatile solution with a built-in mouse 🙂