In this post we will go over the requirements for deploying Tanzu Kubernetes Grid using a vSphere Distributed Switch, and then deploy the HAProxy appliance for load balancing.
I deployed my TKG using the frontend option, which requires 3 networks.
So what I needed:
- vCenter 7.0U1
- ESXi 7.0U1
- Distributed vSwitch
- 3 networks and associated portgroups
We will also need to create a new content library to pull down the VMware templates for deploying the supervisor cluster and workload clusters.
I created an additional library for the HAProxy OVAs, but of course they can be added to an existing content library.
The VLANs, port groups, and networks are as follows:
The HAProxy appliance will get a network card for each portgroup and an associated IP. To keep it simple, I assigned the following IPs:
Part of the HAProxy deployment asks for an IP range on the frontend network that it can allocate for load balancing. The Workload Management deployment will later ask for an IP range on the workload network and a starting IP on the management network.
HAProxy customization wants the range in CIDR notation, while the Workload Management deployment wants an IP range or a starting IP, so it is worth noting each range down in both formats.
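Since the wizards ask for the same ranges in different formats, the conversion is just subnet math. Here is a minimal sketch using Python's standard `ipaddress` module; the `10.0.12.64/26` block is a placeholder, so substitute your own ranges:

```python
import ipaddress

# Placeholder CIDR block; substitute your own workload/frontend ranges.
block = ipaddress.ip_network("10.0.12.64/26")

# hosts() excludes the network and broadcast addresses.
hosts = list(block.hosts())
print(f"CIDR:        {block}")
print(f"IP range:    {hosts[0]} - {hosts[-1]}")
print(f"Starting IP: {hosts[0]}")
```

For `10.0.12.64/26` this prints the range `10.0.12.65 - 10.0.12.126`, which is the form the Workload Management wizard expects.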
Now that we have all that noted down, we can begin with creating the new content libraries.
In vCenter, go to Menu then Content Libraries.
Then select Create.
Give it a name.
Now we need to select Subscribed content library then enter the subscription URL and select the option to download immediately.
We will get the “Unable to verify the identity” warning; since the source is VMware, we can safely continue anyway. Click Yes.
Select storage to use for the content library. I store it in the same NFS volume as my other VMware appliances.
Review settings and select Finish.
We should now have the subscribed content library available.
If we select the new content library, we can see it consists of 5 OVA and OVF templates. It’s currently 0 B in size, which means it didn’t automatically synchronize or download the template files.
Go back to Content Libraries, then right-click on the Tanzu content library and select Synchronize. This will download the templates.
After synchronization, 20.4 GB of storage is used, so most likely all the templates downloaded.
Now we will go ahead and create a content library for the HAProxy templates.
New content library and provide a name.
This will be a local library, so keep that selected and then Next.
Select storage for content library. Again using the VMware NFS datastore.
Review settings then Finish.
We should now have that content library. Select it.
Then go to the Templates tab, select OVF & OVA, then go up to Actions drop-down and select Import Item.
The VMware documentation links to HAProxy v0.1.7, but v0.1.8 is out and is usable as well.
Use these URLs for the source file, or just import them both:
The “SSL certificate cannot be trusted” warning seems to be a regular occurrence; it appears here as well. Click the Actions drop-down and select Continue.
We should now have both templates available in the template library.
Back in VMs and Templates, I had already created a folder called Tanzu to organize this. Right-click a container and select New Virtual Machine.
Select Deploy from template.
Find the HAProxy template, in my case I used the v0.1.8, then Next.
Provide a name and make sure it’s in the folder you want, then Next.
Select the compute resource, then Next.
Review the details, then Next. The disk defaults to thick provisioning, which can be changed later in the deployment.
Standard legalese, Next.
Now for the HAProxy configuration and customization. I’m using the Frontend Network option, which requires a third NIC and the third port group. If you use the Default deployment, the frontend and workload will share a subnet.
Then select the datastore, and Next.
Now select the appropriate portgroups for the networks. In the Default deployment, disregard Frontend.
Now to configure the appliance. Provide a password for the root account and choose whether to permit root login via SSH. Leave the TLS Certificate Authority fields empty; the appliance will generate a self-signed CA on its own.
Provide the FQDN for the appliance, the DNS server, the management IP in CIDR notation with its gateway, and the workload IP in CIDR notation with its gateway.
Provide the frontend IP in CIDR notation and its gateway. The Load Balancer IP range is a block, given in CIDR notation, of addresses that HAProxy will allocate to load balancer frontends. If using the Default deployment, this range needs to be outside the range provided for Workload. In my case, I could have used 10.0.11.248/29, which provides the IP range 10.0.11.249 – 10.0.11.254.
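To sanity-check the subnet arithmetic on that /29, Python's `ipaddress` module can enumerate the usable addresses:

```python
import ipaddress

# The /29 block from above: network .248, broadcast .255, six usable hosts.
lb_range = ipaddress.ip_network("10.0.11.248/29")
hosts = list(lb_range.hosts())
print(f"{hosts[0]} - {hosts[-1]}  ({len(hosts)} usable IPs)")
# -> 10.0.11.249 - 10.0.11.254  (6 usable IPs)
```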
Keep Dataplane API Management Port at the default 5556.
Provide a local admin account and password, then Next as this is the last of the customization.
Review the settings; not sure why, but my screen didn’t show any of the actual settings or customizations. Click Finish to deploy.
Once it’s deployed, we need to power it on. Find it in VMs and Templates, and Power On.
Give it some time to apply settings and boot up. IP Addresses should display the IPs provided during customization.
Next, we need to SSH in to the appliance and grab the contents of ca.crt (on the appliance, the file is typically located at /etc/haproxy/ca.crt). So we SSH in, log in with root or admin, then cat the file:
We will need this for the TKG deployment, so either keep this in mind or copy it in to notes.
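Before pasting the certificate into the Workload Management wizard later, it can be worth sanity-checking that what you copied is a complete PEM block. A minimal sketch using only Python's standard library; the certificate strings below are made-up stand-ins, not real certs:

```python
import ssl

def looks_like_pem(cert_text: str) -> bool:
    """Return True if the text parses as a complete PEM certificate block."""
    try:
        # Validates the BEGIN/END markers and base64-decodes the body.
        ssl.PEM_cert_to_DER_cert(cert_text)
        return True
    except ValueError:
        return False

# A complete block passes; a truncated paste (missing the END line) fails.
print(looks_like_pem("-----BEGIN CERTIFICATE-----\nTUlJQw==\n-----END CERTIFICATE-----"))  # True
print(looks_like_pem("-----BEGIN CERTIFICATE-----\nTUlJQw=="))                             # False
```

Note this only checks the PEM framing, not that the contents are a valid X.509 certificate; it is just enough to catch an incomplete copy-and-paste.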
To validate the deployment and configuration was successful, we can ping some of the IPs in the provided frontend range. All IPs in that range should get a response from HAProxy.
Keep in mind that all IP ranges provided in CIDR notation must be a proper subnet, that is, the address must be the actual network address for the prefix length.
For example, 10.0.12.129/29 will not work. The customization accepts it, but the appliance will not respond on those frontend IPs, because 10.0.12.129 has host bits set and is not a valid /29 network (10.0.12.128/29 is).
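This can be caught before deploying: Python's `ipaddress` module in strict mode rejects any CIDR string whose address has host bits set:

```python
import ipaddress

def valid_cidr_block(cidr: str) -> bool:
    """True only if the address is the actual network address of the block."""
    try:
        ipaddress.ip_network(cidr, strict=True)
        return True
    except ValueError:
        return False

print(valid_cidr_block("10.0.11.248/29"))  # True  - a proper /29 network
print(valid_cidr_block("10.0.12.129/29"))  # False - host bits set (.128/29 would be valid)
```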
Now that HAProxy is fully deployed and we have the appropriate content libraries, the next step is to enable Workload Management. We will go through that in part 2.