I’ve been very lucky to be part of a small group of beta testers for Nutanix Community Edition (NCE). My original plan was to buy a Synology DS415+ with four 500GB Samsung 850 EVO drives. After testing NCE I’ve managed to save $1,420. Learn all about it.
First, download Nutanix Community Edition and create a bootable USB stick from the IMG file using Rufus. Make sure to select DD Image mode as shown below. The process is rather slow, so 30 minutes is normal.
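If you prefer to script this step instead of using Rufus, here is a minimal Python sketch that does the same raw byte-for-byte copy as DD Image mode. The image file name and device path are placeholders I made up, so double-check the device with Disk Management or lsblk first and run it with administrator/root rights. Note that Rufus also locks and dismounts the target volumes for you, which this sketch does not.

```python
# Raw "DD Image" style write of the CE IMG to a USB stick (placeholder paths, run as admin/root).
IMG = "ce-2015.xx.img"         # hypothetical file name of the downloaded CE image
DEV = r"\\.\PhysicalDrive2"    # Windows raw device; on Linux use e.g. /dev/sdb

def write_image(img_path: str, device_path: str, chunk_mb: int = 4) -> None:
    """Copy the IMG byte-for-byte onto the USB device, like `dd if=img of=dev`."""
    chunk = chunk_mb * 1024 * 1024
    written = 0
    with open(img_path, "rb") as src, open(device_path, "wb") as dst:
        while True:
            buf = src.read(chunk)
            if not buf:
                break
            dst.write(buf)
            written += len(buf)
            print(f"\r{written // (1024 * 1024)} MiB written", end="", flush=True)
    print("\nDone - safely eject the stick before booting from it.")

if __name__ == "__main__":
    write_image(IMG, DEV)
```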
My Mac Mini vSphere cluster consists of three 2012 Mac Minis, each with a quad-core i7 and 16GB of memory. You can learn more about the setup in the podcast episodes AE002: How does Eric’s Home Lab look like? and AE007: The Perfect Portable Home & User Group Lab Rack.
I have installed Windows Server 2012 R2 on the local SATA storage and VMware ESXi 5.5 U2 on a SanDisk 32GB USB 3.0 Low-Profile Flash Drive. When you want to boot from the USB stick, connect a wired keyboard and hold the ALT key (on a Windows keyboard) right after the startup chime. To make ESXi the default boot option, select it and then press CTRL. Do not try to get this working with a wireless keyboard, you’ve been warned!
NCE is also installed on a SanDisk 32GB flash drive. Hardware-wise you need an Intel network adapter (it might work with others) and a minimum of 16GB of memory. You’ll also need at least a 200GB SSD plus any SATA hard drive.
I’m using one of my old whiteboxes with a quad-core CPU, 32GB of memory and some spare disks. The setup is straightforward: just follow the documentation to configure and create your storage pool and container.
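If you would rather script the container creation than click through Prism, something along these lines should work against the Prism REST API. This is only a rough sketch: the IP, credentials, and especially the endpoint paths and field names are assumptions on my side, so verify them in the REST API Explorer built into Prism before relying on it.

```python
# Hedged sketch: create a container on the existing storage pool via the Prism REST API.
# Endpoint paths and JSON field names are assumptions - confirm them in Prism's REST API Explorer.
import requests

requests.packages.urllib3.disable_warnings()         # lab only: self-signed certificate

PRISM = "https://192.168.1.50:9440"                   # Nutanix CE IP (placeholder)
AUTH = ("admin", "password")                          # Prism credentials (placeholder)
BASE = f"{PRISM}/PrismGateway/services/rest/v2.0"     # assumed REST base path

# Look up the storage pool that the installer created
pools = requests.get(f"{BASE}/storage_pools", auth=AUTH, verify=False).json()
pool_id = pools["entities"][0]["id"]

# Create the container that will later be mounted over NFS as /Container-01
body = {"name": "Container-01", "storage_pool_id": pool_id}
resp = requests.post(f"{BASE}/storage_containers", json=body, auth=AUTH, verify=False)
print(resp.status_code, resp.text)
```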
When presenting the NFS storage to vCenter I first whitelisted my subnet in Prism and then added the datastore in vCenter. The folder path is your container name, e.g. /Container-01.
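For reference, once the subnet is whitelisted in Prism, mounting the container as an NFS datastore can also be scripted with pyVmomi. Below is a minimal sketch; the vCenter and ESXi host names, credentials, and the Nutanix IP are placeholders I made up for illustration, not values from my lab.

```python
# Minimal pyVmomi sketch: mount the Nutanix CE container as an NFS datastore on one ESXi host.
# Host names, credentials and IPs are placeholders - adjust for your own lab.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()                # lab only: skip certificate validation
si = SmartConnect(host="vcenter.lab.local",
                  user="administrator@vsphere.local",
                  pwd="password",
                  sslContext=ctx)

# Find the ESXi host that should see the Nutanix storage
content = si.RetrieveContent()
host = content.searchIndex.FindByDnsName(dnsName="esxi01.lab.local", vmSearch=False)

# remotePath is the container name exported by Nutanix CE, e.g. /Container-01
spec = vim.host.NasVolume.Specification(
    remoteHost="192.168.1.50",        # Nutanix CE IP (must be reachable and whitelisted in Prism)
    remotePath="/Container-01",
    localPath="NTNX-Container-01",    # datastore name as it will appear in vCenter
    accessMode="readWrite",
    type="NFS")
host.configManager.datastoreSystem.CreateNasDatastore(spec)

Disconnect(si)
```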
To get real-world test results I decided to use my Automation Framework.
In the test I deployed a Windows Server 2012 R2 image using Microsoft Deployment Toolkit 2013. The reference image is identical in both cases and no Windows Updates are deployed. The only small difference is VMware Tools vs. XenServer Tools.
XenServer VM with 1 vCPU and 2GB of memory on Local SSD Storage (Samsung EVO 840 500GB). Deployment time 14 minutes and 13 seconds.
VMware VM with 1 vCPU and 2GB of memory on shared NFS Storage hosted by Nutanix Community Edition. Deployment time 11 minutes and 30 seconds.
Say what! NFS over a 1Gbit connection is faster than local SSD storage? Very strange, since the XenServer VM uses local storage traffic with no 1Gbit network limit. Let’s test some more by dropping the VM tools to remove any doubt about that part.
XenServer VM with 1 vCPU and 2GB of memory on Local SSD Storage (Samsung EVO 840 500GB). Deployment time 10 minutes and 23 seconds.
VMware VM with 1 vCPU and 2GB of memory on shared NFS Storage hosted by Nutanix Community Edition. Deployment time 9 minutes and 03 seconds.
Okay, so the XenServer Tools take more time than VMware Tools, which makes sense: more reboots. Hmmm, let’s test NFS on XenServer then. Deployment time 13 minutes and 45 seconds.
Well, maybe VMware is faster than XenServer, the Mac Mini faster than my whitebox, or a combination. There’s no way to find out since I don’t have an SSD in the Mac Mini. The test results are amazing anyway, so if you have some spare parts lying around you should get Nutanix Community Edition running today.
I’m still under NDA so stay tuned for more blog posts on Nutanix Community Edition when I’m back from Citrix Synergy.
Sneak Peek:
My lab is using an unmanaged D-Link DGS-1008D 8-port Gigabit switch. After my trip to Citrix Synergy I bought an HP ProCurve 1810G-24 switch (24 Gigabit ports, J9450A) for $135 on eBay. Let’s see how much the switch matters for NFS traffic going over the wire.
I also got a tip from @Easi123 about the Intel SSD 750 Series (SSDPEDMW400G4R5, 2.5-inch, 400GB, PCI Express 3.0, MLC). This bypasses the SATA 6Gbit/s limit (roughly 600 MB/s) by combining four lanes of PCIe 3.0 (roughly 3.9 GB/s) with the state-of-the-art NVM Express (NVMe) interface for truly amazing performance. I got myself one of those as well. You don’t even need to ask if the SSD 750 is fast. Do you need to ask a Lamborghini owner if his car is quick?
During Citrix Synergy 2015, Service Pack 1 for Citrix XenServer 6.5 was released. I’m looking forward to testing the newly announced improvements.
Hey Trond, what was the reason you settled on HP ProCurve Switch 1810G-24?
Nothing special; Claudio was using the 8-port model of the same series for his portable rack, so I just went with the 24-port version because of the nice price.
Hi Eric,
Does the Mac Mini have an Intel Ethernet chipset? Is it compatible with Nutanix CE?
Thanks
Great question, have not thought about that scenario. Will install my OWC Data Doubler in one of them and give it a try. If that works, that will be awesome 🙂
Hey Trond, sorry I was reading your post again today and I wanted to understand your setup a little better.
Did you set up Nutanix CE on its own server (i.e. your whitebox) and present the storage from it to the ESXi environment running on your Mac Minis, or did you just use your whitebox with two USB drives, i.e. one USB drive for Nutanix CE and the other for ESXi?
I was trying to make sense of how you had it set up.
Hi Gareth, no worries. Nutanix CE is running on a separate whitebox server with one SATA SSD and one SATA hard drive. The Nutanix CE storage is presented through NFS to my Mac Mini ESXi cluster, or whatever hypervisor I want to test with. The Intel 750 SSD is not supported in the current version of Nutanix CE, so I’m running it in another whitebox server with ESXi installed on USB, using the Intel 750 as local storage.
Thanks for the clarification, Trond. If you present the Nutanix storage as NFS to vSphere, do you still get all the benefits advertised by Nutanix, i.e. dedupe, NDFS etc.? Also, will one be able to test Hyper-V or vSphere with Nutanix CE in the future? My understanding is that it’s KVM-only at the moment.
No, you get very limited functionality with the NFS VAAI plugin. At the moment you can only present the NFS storage to other hypervisors. To get the full-blown experience you need to go KVM, and that’s a bit Greek to me. Also, with only one node in the cluster you’re very limited; three is optimal.
Hey Trond
I had another question regarding your Nutanix CE install. My understanding is that you installed this on one of your whitebox servers. Could you share what NIC you had in this whitebox? Was it an Intel desktop or server NIC, or something different? I’m getting a network error during installation of NCE and wonder if it’s because of my NIC. I have an Intel desktop NIC, and I have also disabled the onboard NIC on the motherboard.
Let me know when you can, no rush.
Regards
Gareth
Hi Gareth, the strange thing is that I had both an Intel desktop NIC (the prerequisite) and an onboard NIC on my ASUS board. The Intel NIC did not work, but the onboard one did! I did not disable anything, so give it a try.
So it works without Intel, because my Mac Mini, which to my surprise also worked out of the box, uses a Broadcom NIC.
Why didn’t you run the VMs directly on the Nutanix node? That’s what it is designed for. It has built-in Prism KVM management for creating and managing VMs etc. You don’t need vSphere.
Simply because KVM is something new to most people, so there’s a learning curve here. I will check it out in the future and support it in my Automation Framework. Any good resources you can share for people new to KVM on Nutanix?
Awesome write-up! I am just about to install this on a 3-node cluster I have here at home. Dell C6100 🙂
Wish me luck!
So I started the install tonight and ran into an error.
ERROR: ‘Node Serial’ cannot contain non-alphanumeric characters except ‘-‘ and ‘_’
Press ‘enter’ to continue
I posted to the forum and am waiting for a response now… shucks.
Do you think it would be possible to install the CE OS on a Micro-ITX board using an M.2 SSD?
No, at the moment there’s no support for PCIe. I tried with my Intel 750 and it doesn’t work.
Oh well, USB 3.0 it is then.
Thanks for the heads up. Much appreciated, that would have been a costly lesson to learn 😀
Yes, it was for me! Now I have a $400 Intel 750 that performs at half speed because of old hardware, and at the moment it’s not worth $1,500 in all-new hardware to support it!
If you use a whitebox server for Nutanix CE and you only have, say, two SSDs and two HDDs, do you have any kind of protection from disk failure in a single-node “cluster”?
No, to get that protection you need to run a 3-node cluster.