In this section, we will configure NSX-T: setting up the Transport Nodes and Edge VMs, creating Layer 2 Segments and Tier-0 uplinks, and configuring the routing required in preparation for vSphere with Kubernetes.
Step 0 – Prerequisites. As this guide is broken into multiple sections and this section focuses mostly on the NSX-T Manager, it is good to ensure the following are configured up front. This prevents switching back and forth between vCenter and NSX-T Manager. In a customer or production environment, it is also likely that different teams manage different things, such as Systems / VI admins managing vCenter and the Network team managing NSX-T. It is sometimes hard to get both teams online at the same time, so it helps to be clear on who needs to configure what.
— Systems / VI Admin Team —
1) The VDS is created and its MTU has been increased to at least 1600 (MTU 9000 is recommended). This MTU size has to match the switch port configuration.
2) The portgroups that will be used for the Edge VMs are created. This is where it gets tricky: depending on the switch port configuration, you either create a portgroup tagged with a VLAN ID or a trunk portgroup. Whether the portgroup is tagged, untagged or a trunk has to match the switch port configuration. A trunk configuration is recommended. In my installation, since I am validating a single physical NIC setup, the switch port has to be configured as a trunk.
— Network Team —
1) Ensure the switch is configured with the right MTU and that the routing is in place. As you can see below, VLAN116 and VLAN120 are configured; these two VLANs are used for the Geneve overlay TEPs.
Step 1 – Add License to NSX-T Manager.
Step 2 – Add vCenter as Compute Manager.
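(If you prefer to script this step instead of using the UI, the sketch below shows roughly what registering vCenter as a Compute Manager looks like through the NSX-T Manager REST API. I did this through the UI, so treat it as an illustrative sketch; the manager FQDN, credentials and thumbprint are placeholders, not my actual lab values.)

```python
import requests

NSX = "https://nsxt-manager.lab.local"   # placeholder NSX-T Manager FQDN
AUTH = ("admin", "VMware1!VMware1!")     # placeholder credentials

# Register vCenter as a Compute Manager (Manager API).
payload = {
    "server": "vcenter.lab.local",       # placeholder vCenter FQDN
    "origin_type": "vCenter",
    "credential": {
        "credential_type": "UsernamePasswordLoginCredential",
        "username": "administrator@vsphere.local",
        "password": "VMware1!",
        "thumbprint": "<vcenter-sha256-thumbprint>",   # placeholder
    },
}
r = requests.post(f"{NSX}/api/v1/fabric/compute-managers",
                  auth=AUTH, json=payload, verify=False)
r.raise_for_status()
print(r.json()["id"])
```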
Step 3 – Create Uplink Profiles for ESXi Transport Nodes.
Step 4 – Create Uplink Profiles for Edge VMs Transport Nodes.
System -> Fabric -> Profiles -> Add
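For reference, here is a rough sketch of creating the same uplink profiles (Steps 3 and 4) through the Manager REST API instead of the UI. The profile names, the single-uplink failover-order teaming, and the mapping of VLAN116 to the host TEPs and VLAN120 to the edge TEPs are assumptions for illustration; adjust them to your own design.

```python
import requests

NSX = "https://nsxt-manager.lab.local"   # placeholder NSX-T Manager FQDN
AUTH = ("admin", "VMware1!VMware1!")     # placeholder credentials

def create_uplink_profile(name, transport_vlan):
    """Create an uplink profile with a single active uplink (Manager API)."""
    payload = {
        "resource_type": "UplinkHostSwitchProfile",
        "display_name": name,
        "mtu": 9000,                     # match the VDS / switch port MTU
        "transport_vlan": transport_vlan,
        "teaming": {
            "policy": "FAILOVER_ORDER",
            "active_list": [{"uplink_name": "uplink-1", "uplink_type": "PNIC"}],
        },
    }
    r = requests.post(f"{NSX}/api/v1/host-switch-profiles",
                      auth=AUTH, json=payload, verify=False)
    r.raise_for_status()
    return r.json()["id"]

# One profile per TEP VLAN; which VLAN backs the host TEPs and which backs
# the edge TEPs depends on your design, so these assignments are illustrative.
print(create_uplink_profile("uplink-profile-esxi", 116))
print(create_uplink_profile("uplink-profile-edge", 120))
```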
Step 5 – Add IP Pools. As my Edge VMs are running in the same cluster as the Compute Cluster, I use a two-VLAN approach as a workaround. **You can read more on this topic in the first part of this blog series. Therefore, instead of one IP pool for the TEPs, there is a need for two IP Pools.
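Again, I created the pools in the UI, but a minimal Policy API sketch of the two TEP pools would look something like this. The pool names, subnets and allocation ranges below are placeholders, not the actual VLAN116/VLAN120 addressing in my lab.

```python
import requests

NSX = "https://nsxt-manager.lab.local"   # placeholder NSX-T Manager FQDN
AUTH = ("admin", "VMware1!VMware1!")     # placeholder credentials

def create_tep_pool(pool_id, cidr, start, end, gateway):
    """Create an IP pool plus one static subnet via the Policy API."""
    base = f"{NSX}/policy/api/v1/infra/ip-pools/{pool_id}"
    requests.patch(base, auth=AUTH, verify=False,
                   json={"display_name": pool_id}).raise_for_status()
    subnet = {
        "resource_type": "IpAddressPoolStaticSubnet",
        "cidr": cidr,
        "gateway_ip": gateway,
        "allocation_ranges": [{"start": start, "end": end}],
    }
    requests.patch(f"{base}/ip-subnets/{pool_id}-subnet",
                   auth=AUTH, json=subnet, verify=False).raise_for_status()

# Two pools because the host TEPs and the edge TEPs sit on different VLANs.
# All addresses below are placeholders for the VLAN116 / VLAN120 subnets.
create_tep_pool("tep-pool-esxi", "172.16.116.0/24",
                "172.16.116.11", "172.16.116.50", "172.16.116.1")
create_tep_pool("tep-pool-edge", "172.16.120.0/24",
                "172.16.120.11", "172.16.120.50", "172.16.120.1")
```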
Step 6 – Create Transport Nodes Profiles for ESXi Hosts.
System -> Fabric -> Transport Node Profiles -> Add
Step 7 – Enable the ESXi Hosts as NSX Transport Nodes
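In the UI this step is just selecting the cluster and attaching the Transport Node Profile created in Step 6. If you want to script it, my understanding is that the equivalent is creating a Transport Node Collection that binds the cluster's compute collection to the profile; the cluster name and profile id below are placeholders.

```python
import requests

NSX = "https://nsxt-manager.lab.local"   # placeholder NSX-T Manager FQDN
AUTH = ("admin", "VMware1!VMware1!")     # placeholder credentials

# Look up the compute collection that represents the vSphere cluster
# (assumes the cluster is named "Compute-Cluster" in vCenter).
cc = requests.get(f"{NSX}/api/v1/fabric/compute-collections",
                  auth=AUTH, verify=False).json()["results"]
cluster = next(c for c in cc if c["display_name"] == "Compute-Cluster")

# Bind the cluster to the Transport Node Profile from Step 6; NSX-T then
# prepares every host in the cluster and creates its TEP interfaces.
payload = {
    "resource_type": "TransportNodeCollection",
    "display_name": "tnc-compute-cluster",
    "compute_collection_id": cluster["external_id"],
    "transport_node_profile_id": "<transport-node-profile-id>",  # placeholder
}
r = requests.post(f"{NSX}/api/v1/transport-node-collections",
                  auth=AUTH, json=payload, verify=False)
r.raise_for_status()
```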
Step 8 – Deploying Edges
Step 9 – Configure the Edge Cluster
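A minimal sketch of creating the edge cluster via the Manager API is shown below, assuming the two edge transport nodes from Step 8 are already deployed; the edge node ids are placeholders.

```python
import requests

NSX = "https://nsxt-manager.lab.local"   # placeholder NSX-T Manager FQDN
AUTH = ("admin", "VMware1!VMware1!")     # placeholder credentials

# Group the edge transport nodes deployed in Step 8 into an edge cluster.
# The member ids are the edge transport node ids returned by
# GET /api/v1/transport-nodes (placeholders here).
payload = {
    "display_name": "edge-cluster-01",
    "members": [
        {"transport_node_id": "<edge-node-01-id>"},
        {"transport_node_id": "<edge-node-02-id>"},
    ],
}
r = requests.post(f"{NSX}/api/v1/edge-clusters",
                  auth=AUTH, json=payload, verify=False)
r.raise_for_status()
```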
Step 10 – Create the Segment required for Tier-0 Uplinks.
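Since this is just a VLAN-backed segment in the edge's VLAN transport zone, the Policy API equivalent is short. The transport zone id and the uplink VLAN id below are placeholders for my lab values.

```python
import requests

NSX = "https://nsxt-manager.lab.local"   # placeholder NSX-T Manager FQDN
AUTH = ("admin", "VMware1!VMware1!")     # placeholder credentials

# VLAN-backed segment used only for the Tier-0 uplink (Policy API).
segment = {
    "display_name": "Seg-T0-Uplink1",
    "vlan_ids": ["1149"],                # placeholder uplink VLAN for 10.149.1.0/24
    "transport_zone_path": "/infra/sites/default/enforcement-points/default/"
                           "transport-zones/<vlan-tz-id>",   # placeholder TZ id
}
r = requests.patch(f"{NSX}/policy/api/v1/infra/segments/Seg-T0-Uplink1",
                   auth=AUTH, json=segment, verify=False)
r.raise_for_status()
```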
Step 11 – Configure the Tier-0 Gateway.
Click on Add Interface.
IP Address/Mask: 10.149.1.5/24
Connect To(Segment): Seg-T0-Uplink1
Edge Node: sun05-nsxtedgevm01
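For completeness, this is roughly what the same external interface looks like as a Policy API call. The Tier-0 name, locale-services id, interface id and edge node path are placeholders; the segment and IP address match the values above.

```python
import requests

NSX = "https://nsxt-manager.lab.local"   # placeholder NSX-T Manager FQDN
AUTH = ("admin", "VMware1!VMware1!")     # placeholder credentials

T0 = "T0-Gateway"                        # placeholder Tier-0 gateway name

# External interface on the uplink segment, pinned to the first edge node.
# The locale-services id and the edge-node path are placeholders; copy the
# real paths from GET /policy/api/v1/infra/tier-0s/<T0>/locale-services.
interface = {
    "type": "EXTERNAL",
    "segment_path": "/infra/segments/Seg-T0-Uplink1",
    "subnets": [{"ip_addresses": ["10.149.1.5"], "prefix_len": 24}],
    "edge_path": "/infra/sites/default/enforcement-points/default/"
                 "edge-clusters/<edge-cluster-id>/edge-nodes/<edge-node-id>",
}
url = (f"{NSX}/policy/api/v1/infra/tier-0s/{T0}"
       "/locale-services/default/interfaces/uplink1-intf")
requests.patch(url, auth=AUTH, json=interface, verify=False).raise_for_status()
```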
To ensure that the Tier-0 Gateway uplink is configured correctly, we log in to the next-hop device, in my case the Nexus 3K, and do a ping test.
I first ping the switch itself, i.e. 10.149.1.1, which is configured on the switch, followed by the HA VIP configured on the Tier-0 Gateway.
Lastly, we need to configure a default route out so that the containers can communicate with IP addresses outside the NSX-T domain.
Under Routing, click Set under Static Routes. **BTW, if you are using BGP, this step will probably differ.
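The equivalent Policy API call for the default route is a short PATCH; the Tier-0 gateway name below is a placeholder, while 10.149.1.1 is the N3K next hop mentioned earlier.

```python
import requests

NSX = "https://nsxt-manager.lab.local"   # placeholder NSX-T Manager FQDN
AUTH = ("admin", "VMware1!VMware1!")     # placeholder credentials

T0 = "T0-Gateway"                        # placeholder Tier-0 gateway name

# Default route pointing at the N3K SVI (10.149.1.1) as the next hop.
route = {
    "network": "0.0.0.0/0",
    "next_hops": [{"ip_address": "10.149.1.1", "admin_distance": 1}],
}
url = f"{NSX}/policy/api/v1/infra/tier-0s/{T0}/static-routes/default-route"
requests.patch(url, auth=AUTH, json=route, verify=False).raise_for_status()
```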
Once the static route has been added, one way to test it is from outside the NSX-T domain. In my case, I have a jumphost outside the NSX-T domain whose gateway also points to the N3K. I did a ping test from the jumphost to the Tier-0 Gateway VIP. If the ping test is successful, it means the static route we added to the Tier-0 Gateway is configured correctly.
Step 12 – Validate whether NSX-T has been successfully set up for vSphere with Kubernetes.
With all the configuration on NSX-T, the vSphere VDS and the physical network in place, it is now time to go back to Workload Management to see whether we are ready to deploy Workload Management clusters. And YES we can! NSX-T is now detected by Workload Management!
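Besides the Workload Management screen, you can also sanity-check from the NSX-T side that every host and edge transport node has been realized successfully. A small sketch using the Manager API (manager FQDN and credentials are placeholders):

```python
import requests

NSX = "https://nsxt-manager.lab.local"   # placeholder NSX-T Manager FQDN
AUTH = ("admin", "VMware1!VMware1!")     # placeholder credentials

# List all transport nodes (hosts and edges) and print their realized state.
nodes = requests.get(f"{NSX}/api/v1/transport-nodes",
                     auth=AUTH, verify=False).json()["results"]
for node in nodes:
    state = requests.get(f"{NSX}/api/v1/transport-nodes/{node['id']}/state",
                         auth=AUTH, verify=False).json()
    print(f"{node['display_name']}: {state.get('state')}")
```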
Now we are done with NSX-T and the related networking configuration. Give yourself a pat on the back here! A lot of the questions related to vSphere with Kubernetes during the Beta, whether from customers or internally, were due to networking and NSX-related issues. Next, we will start configuring the things required for Workload Management, such as storage policies, the Content Library, etc., mostly on the vCenter side.
Tanzu vSphere 7 with Kubernetes on NSX-T 3.0 VDS Install
Part 1: Overview, Design, Network Topology, Hardware Used
Part 2: ESXi, vCenter, VDS Config and NSX-T Manager
Part 3: NSX-T Edges, Segments, Tier-0 Routing
Part 4: Supervisor Cluster, Content Library, TKG Clusters
Part 5: Testing, Demo Apps