Homelab – Enabling vSAN

I’ve had the lab online in its minimum viable configuration for about a week. I’ve ordered more and more pieces of the puzzle and have patiently awaited their arrival. Maybe the Holiday Season wasn’t the best time to start playing with a #homelab…

In my initial post, I said that I wanted to begin playing with a 2-node vSAN cluster. I’ve got some new SSDs, slotted them in, and am ready to start playing around. In this post, I’ll walk through the setup of the 2-node vSAN cluster like many others have done before me.

The Gear

At the end of the day, I am not rich. I can’t afford to purchase the best of the best. I can’t buy “Enterprise-grade SSDs” because they’re significantly out of my price range. A drive I recently replaced in a storage array (on Christmas, no less) was $1,300 for a prosumer model – I just don’t have that kind of scratch for the lab… yet.

I’ve played around with a vSAN Hybrid configuration before. In those days, my understanding was that the vSAN Cache Tier should be about 10% of your Capacity Tier. At the time, I was using a no-name, throw-away 120GB SSD for my Cache Tier and a 7200 RPM 1TB HDD for my Capacity Tier. I was not impressed with the performance I got out of it.

Having a few more dollars at my disposal, I decided to go all-SSD for this attempt. I only have two hosts that will be contributing to vSAN, and I’m only just getting started, so I picked up one Cache and one Capacity disk for now and will expand as I need to.

In all honesty, I thought I had purchased a 500GB cache disk to get to a 50% Cache Tier, but a 25% Cache Tier writing to an SSD Capacity Tier might be enough of a performance boost for me. I may purchase more SSDs to rid myself of local VMFS storage and just use vSAN for everything – we’ll see how the lab turns out after playing with this for a bit!
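For reference, the back-of-the-napkin math looks like this – the disk sizes below are illustrative placeholders, not my exact purchases:

```python
# Cache-to-capacity ratio math for a single vSAN disk group.
# These sizes are placeholders for illustration, not my actual disks.
capacity_gb = 1000   # assumed Capacity Tier disk
cache_gb = 250       # assumed Cache Tier disk

print(f"Cache is {cache_gb / capacity_gb:.0%} of raw capacity")   # 25%

# The old rule of thumb: cache should be roughly 10% of *consumed* capacity.
consumed_gb = 600    # hypothetical consumed capacity
print(f"The 10% rule would call for ~{consumed_gb * 0.10:.0f} GB of cache")
```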

Pre-Work

Before I could enable the 2-node vSAN cluster, I had to do a little work up front: create new VMkernel ports for vSAN Traffic and deploy the vSAN Witness Appliance. One of the things I’m finding I’m running out of (read: am already out of) is Ethernet cables. I underestimated the number I would need and have made some early sacrifices until I can get more. For this reason, I’ve got my vMotion and vSAN Traffic on the same vSwitch – and that vSwitch is connected to the same physical switch as my Management Traffic.

00_Virtual_Switch_Setup

I’ve created a new vmkernel port and added it to vSwitch1 for all of my hosts – this includes the host in the Management Cluster. The vSAN Witness Appliance will need to be able to communicate with the other vSAN Nodes on the same network. I’ll circle back to this when I have started properly VLANing and physically segmenting storage-related traffic to a dedicated physical switch. The upgrade potential for a #homelab is endless, I’m learning…
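If you’d rather script the VMkernel work than click through the UI, a rough pyVmomi sketch looks something like the following. The vCenter and host names, portgroup name, and IP addressing are assumptions for my lab, not anything vSAN dictates:

```python
# Minimal pyVmomi sketch: add a VMkernel port and tag it for vSAN traffic.
# Hostnames, portgroup name, and IPs below are assumptions for my lab.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.mueller-tech.lab",
                  user="administrator@vsphere.local",
                  pwd="********",
                  sslContext=ssl._create_unverified_context())

# Quick-and-dirty host lookup for a lab.
host = si.content.searchIndex.FindByDnsName(None, "esxi-01.mueller-tech.lab", False)

# Create the vmk on an existing portgroup on vSwitch1.
ip_spec = vim.host.IpConfig(dhcp=False,
                            ipAddress="192.168.50.11",
                            subnetMask="255.255.255.0")
nic_spec = vim.host.VirtualNic.Specification(ip=ip_spec, mtu=1500)
vmk = host.configManager.networkSystem.AddVirtualNic("vSAN-Traffic", nic_spec)

# Tag the new VMkernel port for vSAN traffic.
host.configManager.virtualNicManager.SelectVnicForNicType("vsan", vmk)
```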

Now I need to make sure that my disks are seen in ESXi. Just last night, I put the 3.5″ to 2.5″ adapters in the sleds and slotted the new SSDs into position. These showed up without even needing to rescan the HBAs.

03_Storage_Devices.png
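For the curious, the same check can be done from pyVmomi – reusing the connection and host object from the sketch above – roughly like this:

```python
# List the SCSI disks ESXi can see (and kick off an HBA rescan for good
# measure, even though these drives showed up on their own).
storage = host.configManager.storageSystem
storage.RescanAllHba()

for lun in storage.storageDeviceInfo.scsiLun:
    if isinstance(lun, vim.host.ScsiDisk):
        size_gb = lun.capacity.block * lun.capacity.blockSize / (1024 ** 3)
        print(f"{lun.canonicalName}  {lun.model}  {size_gb:.0f} GB  ssd={lun.ssd}")
```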

Finally, I need to deploy my vSAN Witness Appliance, which is provided as an OVF. I’ve deployed it to my Management Cluster as “vSAN Witness Appliance” and added the nested ESXi host to my vDC as the standalone host “vsan-witness.mueller-tech.lab.” I haven’t gone over the deployment in detail because it’s self-explanatory. (The previous statement was a lie – I neglected to take screenshots while I was deploying it and don’t want to do it a second time.)
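Adding the witness to the inventory can also be scripted. The sketch below is a rough pyVmomi equivalent of the “add standalone host” step – the datacenter name and credentials are placeholders, and I’m glossing over the SSL thumbprint handling you’d want outside of a lab:

```python
# Add the nested witness host to the datacenter as a standalone host.
# The datacenter name and credentials are placeholders for my lab.
dc = si.content.searchIndex.FindByInventoryPath("Homelab-DC")

connect_spec = vim.host.ConnectSpec(hostName="vsan-witness.mueller-tech.lab",
                                    userName="root",
                                    password="********",
                                    force=True)

# In a lab I just retry with the thumbprint vCenter complains about;
# production automation should verify the certificate properly.
task = dc.hostFolder.AddStandaloneHost_Task(spec=connect_spec,
                                            compResSpec=None,
                                            addConnected=True)
```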

Enabling 2-Node vSAN

Now that the pre-work is configured, it’s time to enable vSAN in the cluster. This is simple enough to do. Select the Cluster that you want to enable vSAN on, navigate to the Configure tab, select Services under vSAN, and click on Configure to the far right of the notice “vSAN is Turned OFF.” This will bring you to the Configure vSAN Wizard.

04_Start_VSAN_Wizard.png
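As an aside, the bare act of turning vSAN on for a cluster is a single reconfigure call if you’d rather script it – though the 2-node specifics (witness pairing and fault domains) are exactly what the wizard handles for you, so this only shows the basic enablement. The inventory path is an assumption for my lab, and the connection comes from the earlier sketch:

```python
# Minimal "turn vSAN on" for a cluster via pyVmomi. The 2-node witness
# pairing and fault domains are left to the wizard (or the vSAN Management
# SDK); this only flips the service on.
cluster = si.content.searchIndex.FindByInventoryPath(
    "Homelab-DC/host/Compute-Cluster")   # assumed inventory path

vsan_cfg = vim.vsan.cluster.ConfigInfo(
    enabled=True,
    defaultConfig=vim.vsan.cluster.ConfigInfo.HostDefaultInfo(
        autoClaimStorage=False))         # I want to claim disks myself

spec = vim.cluster.ConfigSpecEx(vsanConfig=vsan_cfg)
task = cluster.ReconfigureComputeResource_Task(spec, modify=True)
```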

Next, I’ll select the entire reason I’m writing this post (hint: Two host vSAN cluster) and click Next.

05_VSAN_Wizard_1.png

For Services, I’m going to enable Deduplication and Compression. I’m doing this in a lab environment and have a finite set of resources. This option is only available when using all Flash disks, so it’s good that I opted to try this with all SSDs. I worry that using consumer-grade SSDs and enabling this feature could decrease the overall performance of my 2-node cluster. I’ll experiment with this if necessary.
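For what it’s worth, Deduplication and Compression live in the vSAN Management API rather than the core vSphere API, so scripting this step needs the separate vSAN Management SDK for Python (the vsanapiutils helper below ships with that SDK, not with pyVmomi). A rough sketch, reusing the connection and cluster object from above:

```python
import ssl
import vsanapiutils   # from VMware's vSAN Management SDK for Python

context = ssl._create_unverified_context()
vsan_mos = vsanapiutils.GetVsanVcMos(si._stub, context=context)
vccs = vsan_mos['vsan-cluster-config-system']

# Enable deduplication & compression (all-flash only) on the cluster.
spec = vim.vsan.ReconfigSpec(
    dataEfficiencyConfig=vim.vsan.DataEfficiencyConfig(
        dedupEnabled=True,
        compressionEnabled=True))
task = vccs.VsanClusterReconfig(cluster, spec)
```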

05_VSAN_Wizard_2.png

Claiming disks is simple – the Wizard has already selected the largest of the Flash disks as Capacity, with the remaining disk as Cache. Super simple stuff!

05_VSAN_Wizard_3.png

I’ve already deployed the Witness Appliance and added the Witness Host as a standalone host in my vDC. As far as vCenter is concerned, this makes up “another site.” I’ll select this instead of selecting one of my Management Cluster hosts.

Important note: This is a lab. In a production deployment, the Witness Appliance would likely reside in the primary data center with the 2-node vSAN configuration being deployed at a remote/branch office.

05_VSAN_Wizard_4.png

Just like the vSAN nodes, I now need to claim disks for the Witness Host. These disks are created automatically as part of the OVF deployment and are sized based on the number of vSAN objects you expect in the environment. I’ve deployed my Witness Host as a “Tiny” deployment for the lab and have selected the smaller of the two disks as Cache and the larger of the two as Capacity.

05_VSAN_Wizard_5.png

One final confirmation page stands between me, a bunch of configuration done on my behalf in vCenter, and the presentation of my vSAN Datastore.

05_VSAN_Wizard_6.png

In less than five minutes, vCenter has configured vSAN on my Compute Cluster and I have a usable vSAN Datastore.
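A quick sanity check from pyVmomi (reusing the cluster object from the earlier sketches) confirms the new datastore is there – roughly:

```python
# Confirm the cluster now presents a vSAN datastore and report its size.
for ds in cluster.datastore:
    if ds.summary.type == 'vsan':
        free_gb = ds.summary.freeSpace / (1024 ** 3)
        cap_gb = ds.summary.capacity / (1024 ** 3)
        print(f"{ds.name}: {free_gb:.0f} GB free of {cap_gb:.0f} GB")
```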

In future posts, I plan to evaluate the performance of the storage I have at my disposal in the lab – local HDD, local (no-name, throw-away) SSD, and my new vSAN datastore. The outcome of that post will help me decide whether I rip out local VMFS storage in favor of more disks for vSAN. I should do this anyway – vSAN and non-vSAN disks are only supported on the same storage controller if the non-vSAN disks are not being used for virtual machine I/O. See this KB Article for details.

After this experience, I have my eyes on some additional hardware that I want to procure and test with. My bank account will be thankful if I slow down a bit on new lab stuff – this doesn’t need to be built in a day. We’ll see how long I can resist.
