The Portable SDDC – Part 3 – The BOM
Previously, I reviewed my overall idea for the Portable SDDC in part 1 and the software I planned to use in part 2. The goal is to have something tangible that demonstrates the power of the SDDC and its ability to eliminate the need for massive amounts of data center space. Now to continue where I left off…
- Part 1 – The Overall Idea
- Part 2 – The Software
- Part 3 – The B.O.M.
- Part 4 – The Build (coming soon)
- Part 5 – The Configuration (coming soon)
- Part 6 – The Scripts (coming soon)
- Part 7 – Demo Time (coming soon)
Part 3 – The B.O.M.
Now that I have built my conceptual design and logical design, it's time to focus on the physical design. One option I considered was two clusters running 10GbE direct-connect, with no 10GbE switch. That would have required my OOB server to actually run ESXi for the vSAN witnesses, which in turn would have meant a second OOB server to handle shutdown/startup, and I also wouldn't have been able to leverage vSAN erasure coding. Because of that I decided to stick with a single cluster. This provides greater overall capacity, performance, and flexibility, but it requires a 10GbE switch.
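To make that capacity trade-off concrete, here is a rough back-of-the-napkin sketch in Python. The numbers are illustrative only (I'm using the standard vSAN overheads: ~2x for FTT=1 mirroring, ~1.33x for FTT=1 RAID-5 erasure coding, which needs at least four fault domains):

```python
# Rough vSAN usable-capacity sketch: why one 4-node cluster beats
# two 2-node clusters for this build. Numbers are illustrative.

def usable_tb(hosts, capacity_per_host_tb, policy):
    raw = hosts * capacity_per_host_tb
    if policy == "raid1":   # FTT=1 mirroring: 2x overhead (2-node needs a witness elsewhere)
        return raw / 2
    if policy == "raid5":   # FTT=1 erasure coding: 1.33x overhead, needs >= 4 hosts
        if hosts < 4:
            raise ValueError("RAID-5 erasure coding requires at least 4 hosts")
        return raw * 3 / 4
    raise ValueError(policy)

# Two 2-node clusters: each is limited to RAID-1 mirroring
two_clusters = 2 * usable_tb(2, 1.2, "raid1")   # ~2.4 TB usable

# One 4-node cluster: RAID-5 erasure coding becomes available
one_cluster = usable_tb(4, 1.2, "raid5")        # ~3.6 TB usable

print(two_clusters, one_cluster)
```

Same drives either way, but the single cluster gets roughly 50% more usable space on top of the operational simplicity.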
For the hardware B.O.M. my decisions weren't as straightforward; they could make or break this solution. Again, prior to choosing any hardware I had to make a decision:
(A) Choose a case first and then try to find hardware that fits
(B) Select the compute & networking hardware first and then find a case that can fit it all
I went with Option B, as it was more important to me to meet all of my requirements and constraints than to compromise them based on a certain style of case. Looking back, I'm very glad I did this instead of Option A, as I don't think the project would have been successful otherwise.
For the OOB server I needed the ability to connect an external monitor, add a USB keyboard & mouse, and connect USB storage if necessary; it also only needs a single 1GbE NIC. I thought about getting a small Raspberry Pi, but then I remembered my old NUC lab was sitting unused. So I settled on one of my 5th-gen Intel NUCs (NUC5i5MYHE) and installed Microsoft Windows Server 2012 R2 as its OS. This will be my NTP source, PowerShell host, and even a secondary DNS server.
So with that settled I looked at my available options for ESXi hosts:
(A) Intel NUC
(B) Mini PC
(C) Shuttle PC
(D) SuperMicro E200-8d
(E) SuperMicro E300-8d
(F) Build my own with mini ATX motherboard
Because I wanted 10GbE networking, that ruled out the Intel NUC, Mini PC, and Shuttle PC, leaving options D-F. Since I wanted SSDs with Power Loss Protection (PLP), most M.2 SSDs wouldn't fit the bill, which eliminated the SuperMicro E200-8d. And since I really didn't feel like sourcing my own case, fans, etc., I decided on Option E, the SuperMicro E300-8d. There is a nice write-up about the E300-8d here: https://tinkertry.com/supermicro-superserver-sys-e200-8d-and-sys-e300-are-here
The next decision was what resources to put in each of the four E300-8d hosts.
- RAM: This is usually the most constrained resource. I went with 64GB of RAM, selecting 2x 32GB DIMM modules, which leaves room for growth if necessary since the E300-8d has 4x DIMM slots. (Crucial 32GB DDR4 PC4-19200 ECC DIMM)
- CPU: The E300-8d has a built-in SoC processor (Intel® Xeon® processor D-1518, single-socket FCBGA 1667; 4 cores, 8 threads, 35W), so there is no modifying this.
- ESXi Install Disk: After using SanDisk Cruzer Fit drives on my Intel NUCs previously, I knew they get really hot and then become more prone to errors. This time I wanted something that stuck out farther for better cooling, plus I needed clearance at the back of the server anyway because of the SFP modules and cables. So this time I went with the SanDisk Ultra Flair USB 3.0 64GB Flash Drive, High Performance up to 150MB/s (SDCZ73-064G-G46).
- vSAN Cache Disk: Not only do I need power loss protection (PLP), I also need performance to run all of these apps. The onboard AHCI controller doesn't perform as well as I would like for this, so I wanted an NVMe drive that was on the vSAN HCL. Looking through the list I finally came across the 400GB Intel P3600 series PCIe SSDs. The E300-8d can take PCI-Express, but I will need to get creative (I'll show how in Part 4) in how I mount the SSDs in the server.
- vSAN Capacity Disk: For this I decided a SATA SSD would do; I just needed capacity and PLP. Looking through the list I settled on the 1.2TB Intel S3520 SATA SSD.
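Putting the per-host picks together, here is a quick sketch of what the four-node cluster totals out to (illustrative arithmetic only; the ~3/4 usable-capacity factor assumes FTT=1 RAID-5 erasure coding):

```python
# Cluster-wide resource tally for the four E300-8d hosts,
# using the per-host selections above.

HOSTS = 4
ram_gb_per_host   = 64      # 2x 32GB DDR4 ECC, 2 of 4 DIMM slots used
cores_per_host    = 4       # Xeon D-1518 SoC (8 threads with HT)
cache_gb_per_host = 400     # Intel P3600 NVMe (vSAN cache tier)
cap_tb_per_host   = 1.2     # Intel S3520 SATA (vSAN capacity tier)

totals = {
    "ram_gb":     HOSTS * ram_gb_per_host,     # 256 GB of RAM
    "cores":      HOSTS * cores_per_host,      # 16 cores / 32 threads
    "cache_gb":   HOSTS * cache_gb_per_host,   # 1600 GB of cache tier
    "raw_cap_tb": HOSTS * cap_tb_per_host,     # ~4.8 TB raw capacity
}
# With FTT=1 RAID-5 erasure coding, usable capacity is roughly 3/4 of raw
totals["usable_cap_tb"] = totals["raw_cap_tb"] * 3 / 4   # ~3.6 TB

print(totals)
```

Not bad for something that fits in a carry-on-sized case.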
So, with my compute nodes selected and populated, I had to decide how to network them together. My choices for 10GbE networking were constrained by the following: the E300-8d uses SFP+ for 10GbE; I wanted redundant cables for availability and for multi-NIC vMotion; I need 1GbE ports for iDRAC-style access; it needed to be smaller than most datacenter switches; and ultimately I need to keep costs down. I decided it would be acceptable to have a smaller 1GbE PoE switch that connects to my 10GbE switch, instead of trying to find one 10GbE switch that did everything and wasn't $5k+. This narrowed my switch selection down to the Ubiquiti EdgeSwitch ES-16-XG; there is a write-up HERE about them (I didn't have the quality issues that STH did). Next I had to do some digging about supported SFPs, which I located in their forums, and I selected the Cisco SFP-H10GB-CU1M 1-meter Twinax. I actually ended up buying the 10-pack from Amazon so I could have 2 spares in case I needed them later.
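The cable count works out like this (a trivial sanity check, but it's why the 10-pack made sense):

```python
# Twinax DAC count for the 10GbE fabric: two runs per host
# (redundancy plus multi-NIC vMotion), so a 10-pack leaves two spares.
hosts = 4
dac_per_host = 2

needed = hosts * dac_per_host   # 8 cables in use
pack_size = 10
spares = pack_size - needed     # 2 spares for later

print(needed, spares)
```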
For the 1GbE switch I first used an 8-port switch I had lying around. This is the one originally pictured in the bottom right corner. The problem was that it was an unmanaged switch and had no PoE capabilities. I have since planned to swap it out for the NETGEAR ProSAFE GS110TP, which should be arriving tomorrow; thanks, Amazon Prime.
Storage Selection
vSAN. (That was simple.)
Misc Item Selection
The remaining items (not counting the case) deal with power, cables, Wi-Fi AP, keyboard & mouse, and mounting items.
Power Strip: I started with a 6-outlet power strip but swapped it out for an 8-outlet Tripp Lite power strip. While this is more outlets than I need, not all power adapters fit side by side, and it gives me space to plug in my cellphone/tablet if needed.
Cables:
—* 1 meter Cat6 STP (blue for internal access ports, green for laptop and/or NUC, and black for trunk)
—* 1 meter USB extenders (for hosts 2-4 that aren’t as accessible)
—* USB Console Cable
—* 3 meter HDMI cable
—* Mini DisplayPort to HDMI adapter
WiFi AP: Ubiquiti UAP AC Pro
Keyboard & Mouse: Logitech Wireless keyboard & mouse combo
Mounting Items:
—* Velcro (good quality)
—* Rack Ears (the ones that came with the 10GB switch)
—* 90 Degree Angle Steel Brace
—* Command Strips – Hooks
Finally, with all of the other hardware selected, I needed to decide on the last piece of equipment: the case. The first thing I did was play Tetris: how could I fit everything together in the most optimal way to allow for airflow, access, mounting, and the ability to close everything up without having to unplug everything? I actually tried doing this on paper with schematics first, but there's nothing like actually having your hands on the equipment. Luckily, I had a leftover Amazon box that was very similar in size to what I was picturing in my head, and I used that to plan everything out.
Once I had my measurements I was sure I had found the perfect case… It wasn't. I had forgotten about the wheel and handle indentations that take up space inside the box, and everything just wouldn't fit. Next I thought about making my own box, but again I really needed something already built and sturdy. That's when I remeasured, modified how I was going to lay out the equipment, and found the case I have now: the Pelican iM2620. I ordered it with foam so that I could cut pieces to use for transport protection when necessary, but not to stay in 24/7.
With everything ordered, it was finally time to start assembly of the pLab.
Previous – Next (coming soon)