Category Archives: NSX-T

Screen Shot 2018-05-15 at 7.19.06 AM

Openshift with NSX-T Installation Part 5: NCP and CNI Integration

In this article, we will integrate the NSX-T NCP and CNI plugins with Openshift.

Step 01: Tag the logical switch ports connected to the OCP-Master, OCP-Node01 and OCP-Node02 VMs.

NSX-T Manager -> Switching -> LS-VIFs -> Related -> Ports -> Click on the respective logical ports -> Actions -> Manage Tags

 

Screen Shot 2018-05-18 at 1.51.40 PM

* You can highlight the port to see which VM is connected to it.

Screen Shot 2018-05-18 at 2.04.01 PM

 

  • Scope: ncp/node_name
  • Tag: ocp-master
  • Scope: ncp/cluster
  • Tag: ocp-cl1

Screen Shot 2018-05-18 at 2.09.53 PM

  • Scope: ncp/node_name
  • Tag: ocp-node01
  • Scope: ncp/cluster
  • Tag: ocp-cl1

Screen Shot 2018-05-18 at 2.05.24 PM

  • Scope: ncp/node_name
  • Tag: ocp-node02
  • Scope: ncp/cluster
  • Tag: ocp-cl1

 

 

Screen Shot 2018-05-18 at 2.08.44 PM
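If you would rather script the tagging than click through the UI for every port, the NSX-T Manager API can do the same thing. This is only a sketch: it assumes jq is installed, uses the manager IP and admin credentials that appear later in this series, and <logical-port-uuid> is a placeholder you look up from the ports view above.
NSX_MGR=10.136.1.102
PORT_ID=<logical-port-uuid>
# read the current port definition (keeps the _revision field the API requires on update)
curl -sk -u admin:'VMware1!' https://$NSX_MGR/api/v1/logical-ports/$PORT_ID > port.json
# add the two tags NCP expects and push the update back
jq '.tags = [{"scope":"ncp/node_name","tag":"ocp-master"},{"scope":"ncp/cluster","tag":"ocp-cl1"}]' port.json > port-tagged.json
curl -sk -u admin:'VMware1!' -X PUT -H 'Content-Type: application/json' -d @port-tagged.json https://$NSX_MGR/api/v1/logical-ports/$PORT_ID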

Step 02: On the master node, let's clone the NSX-T integration for Openshift repository. The copy I am using below is from Yasen's GitHub, which is a fork of the official NSX-T integration for Openshift found here -> https://github.com/vmware/nsx-integration-for-openshift.

On every node, run this
cd /root/nsx-container-2.1.3.8356796/Kubernetes
docker load -i nsx-ncp-rhel-2.1.3.8356796.tar

Screen Shot 2018-05-16 at 5.57.20 PM

On all three nodes, you have to do the following:

 docker images
 docker tag registry.local/2.1.3.8356796/nsx-ncp-rhel nsx-ncp
 docker images
Screen Shot 2018-05-16 at 5.59.17 PM
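If you have passwordless SSH from the master to the nodes (set up in Part 3), a small loop runs the load and re-tag on all three nodes at once; a sketch using the paths above:
for host in ocp-master ocp-node01 ocp-node02; do
  ssh root@$host "docker load -i /root/nsx-container-2.1.3.8356796/Kubernetes/nsx-ncp-rhel-2.1.3.8356796.tar && \
    docker tag registry.local/2.1.3.8356796/nsx-ncp-rhel nsx-ncp"
done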
Then on the master,
cd /root
cd /root/nsx-integration-for-openshift/openshift-ansible-nsx/roles/ncp_prep/defaults/
nano main.yml
Change uplink_port to ens224:
# update the variable values below before running the ncp_prep role
cni_url: /root/nsx-container-2.1.3.8356796/Kubernetes/rhel_x86_64/nsx-cni-2.1.3.8356796-1.x86_64.rpm
ovs_url: /root/nsx-container-2.1.3.8356796/OpenvSwitch/rhel74_x86_64/openvswitch-2.8.1.7345072-1.x86_64.rpm
ovs_kmod1_url: /root/nsx-container-2.1.3.8356796/OpenvSwitch/rhel74_x86_64/openvswitch-kmod-2.8.1.7345072-1.el7.x86_64.rpm
ovs_kmod2_url: /root/nsx-container-2.1.3.8356796/OpenvSwitch/rhel74_x86_64/kmod-openvswitch-2.8.1.7345072-1.el7.x86_64.rpm
uplink_port: ens224
ncp_image_url: /root/nsx-container-2.1.3.8356796/Kubernetes/nsx-ncp-rhel-2.1.3.8356796.tar
Screen Shot 2018-05-18 at 2.57.10 PM
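If you prefer not to edit the file by hand, the same change can be made with sed and verified with grep (a sketch; adjust ens224 to whatever your uplink interface is called):
sed -i 's/^uplink_port:.*/uplink_port: ens224/' main.yml
grep uplink_port main.yml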
cd /root
wget http://52.59.159.238/ncp-rc.yml
nano /root/ncp-rc.yml
subnet_prefix = 24
tier0_router = T0
overlay_tz = TZ-Overlay
container_ip_blocks = IPBlock-PodNetworking
no_snat_ip_blocks = IPBlock-NONAT
external_ip_pools = Pool-NAT
top_firewall_section_marker = top_firewall_section
bottom_firewall_section_marker = bottom_firewall_section
Screen Shot 2018-05-19 at 10.04.12 PM
oc apply -f ncp-rc.yml
Screen Shot 2018-05-19 at 10.24.03 PM
cd /root/nsx-integration-for-openshift/openshift-ansible-nsx/roles/ncp/defaults
Edit main.yml and change apiserver_host_ip, nsx_manager_ip, nsx_api_user and nsx_api_password to match your configuration:
ncp_yaml_url: /root/ncp-rc.yml
agent_yaml_url: http://52.59.159.238/nsx-node-agent-ds.yml
cluster_name: ocp-cl1
apiserver_host_ip: 10.11.1.10
nsx_manager_ip: 10.136.1.102
nsx_api_user: admin
nsx_api_password: VMware1!
Screen Shot 2018-05-19 at 10.12.38 PM
ansible-playbook /root/nsx-integration-for-openshift/openshift-ansible-nsx/ncp.yaml
Screen Shot 2018-05-18 at 2.19.33 PM
oc get pod
oc delete pod nsx-ncp-r46z2
Screen Shot 2018-05-18 at 2.32.27 PM
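To confirm the integration is healthy after the playbook run and the pod restart, a couple of standard oc checks (pod names will differ in your environment):
oc get pods --all-namespaces -o wide | grep -E 'nsx-ncp|nsx-node-agent'   # both should be Running
oc get rc nsx-ncp   # assuming the replication controller in ncp-rc.yml is named nsx-ncp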
Screen Shot 2018-05-15 at 7.19.06 AM

Openshift with NSX-T Installation Part 4: Openshift Installation

In this article, we will use Ansible to install Openshift 3.9.

On the Master node:
nano /usr/share/ansible/openshift-ansible/roles/lib_utils/action_plugins/sanity_checks.py

Add the entry ('openshift_use_nsx', False) to the tuple list of network plugin variables in that file.

Screen Shot 2018-05-15 at 4.36.44 PM
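To find the right place for the entry, a quick grep helps (a sketch assuming the openshift-ansible 3.9 package layout used above; if the string is not found, search the file for the tuple list of network plugin variables):
grep -n openshift_use_openshift_sdn /usr/share/ansible/openshift-ansible/roles/lib_utils/action_plugins/sanity_checks.py
# add ('openshift_use_nsx', False) alongside the entries this prints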
cd /root
wget https://raw.githubusercontent.com/vincenthanjs/openshift-nsxt/master/hosts
cp hosts /etc/ansible/ -f

cd /etc/ansible/
htpasswd -c /root/htpasswd admin

If the above URL does not work, you can copy and paste the following into /etc/ansible/hosts:

#This is the file you replace on /etc/ansible/hosts

# Create an OSEv3 group that contains the masters and nodes groups
[OSEv3:children]
masters
nodes

# Set variables common for all OSEv3 hosts
[OSEv3:vars]
# SSH user, this user should allow ssh based auth without requiring a password
ansible_ssh_user=root
ansible_ssh_pass=VMware1!
openshift_master_default_subdomain=ocpapps.acepod.com

os_sdn_network_plugin_name=cni
openshift_use_openshift_sdn=false
openshift_use_nsx=true
openshift_node_sdn_mtu=1500
openshift_enable_service_catalog=true

# If ansible_ssh_user is not root, ansible_become must be set to true
#ansible_become=true

openshift_deployment_type=openshift-enterprise
#openshift_deployment_type=origin
openshift_disable_check=docker_storage,docker_image_availability
openshift_disable_check=memory_availability,disk_availability,docker_storage,docker_image_availability,package_version

# uncomment the following to enable htpasswd authentication; defaults to DenyAllPasswordIdentityProvider
openshift_master_identity_providers=[{'name': 'htpasswd_auth', 'login': 'true', 'challenge': 'true', 'kind': 'HTPasswdPasswordIdentityProvider', 'filename': '/etc/origin/master/htpasswd'}]
openshift_master_htpasswd_file=/root/htpasswd

# host group for masters
[masters]
ocp-master	ansible_ssh_host=10.11.1.10

# host group for etcd
[etcd]
ocp-master	ansible_ssh_host=10.11.1.10

# host group for nodes, includes region info
[nodes]
ocp-master	ansible_ssh_host=10.11.1.10
ocp-node01	ansible_ssh_host=10.11.1.11
ocp-node02	ansible_ssh_host=10.11.1.12

[nsxtransportnodes:children]
masters
nodes
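Before kicking off the playbooks, it is worth confirming that Ansible can reach every host in the inventory (a quick check using a standard Ansible module):
ansible -i /etc/ansible/hosts nodes -m ping   # ocp-master, ocp-node01 and ocp-node02 should all return "pong"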

ansible-playbook /usr/share/ansible/openshift-ansible/playbooks/prerequisites.yml

Screen Shot 2018-05-15 at 3.50.28 PM

 

You will get warning messages but that should be alright.

Screen Shot 2018-05-16 at 10.41.03 AM

Before we continue with the next playbook, we need to install the NSX container plugin packages. We already have the NSX container download in /root from Part 3.

You have to do the below on every node.
cd /root/nsx-container-2.1.3.8356796/OpenvSwitch/rhel74_x86_64
yum install *.rpm -y

Screen Shot 2018-05-16 at 4.00.17 PM

You have to do the below on every node.
cd /root/nsx-container-2.1.3.8356796/Kubernetes/rhel_x86_64
yum install *.rpm -y

Screen Shot 2018-05-16 at 4.24.57 PM
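Again, if passwordless SSH is in place, both RPM sets can be installed on all three nodes from the master in one loop (a sketch based on the paths above):
for host in ocp-master ocp-node01 ocp-node02; do
  ssh root@$host "yum install -y /root/nsx-container-2.1.3.8356796/OpenvSwitch/rhel74_x86_64/*.rpm && \
    yum install -y /root/nsx-container-2.1.3.8356796/Kubernetes/rhel_x86_64/*.rpm"
done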

Now we can run the deploy script. On the Master Node, run the following

ansible-playbook  /usr/share/ansible/openshift-ansible/playbooks/deploy_cluster.yml

The Ansible playbook takes about 30 minutes to complete the install. Your mileage will vary.

You will see some error messages on the Web Console. This is because the NSX-T integration has not been set up yet.

Screen Shot 2018-05-16 at 5.17.57 PM

The next check that the Openshift installation is successful is to run the following commands.

As you can see, you can now use the Openshift commands, such as:
oc get pods --all-namespaces

Screen Shot 2018-05-16 at 5.21.24 PM

systemctl status openvswitch

Screen Shot 2018-05-16 at 5.37.33 PM
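A few more quick checks I like to run at this point (standard OpenShift and systemd commands; the node service name depends on your deployment type):
oc get nodes                         # all three nodes should report Ready
oc get pods --all-namespaces -o wide
systemctl status atomic-openshift-node   # origin-node if you deployed Origin instead of openshift-enterprise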

Screen Shot 2018-05-15 at 7.19.06 AM

Openshift with NSX-T Installation Part 3: RHEL

In this article, we install RHEL and prepare the hosts for the Openshift installation.

The main guide I reference for the installation is at https://access.redhat.com/documentation/en-us/openshift_container_platform/3.9/html-single/installation_and_configuration/, Chapter 2.6 Advanced Installation.

Screen Shot 2018-05-15 at 10.37.53 AM

Step 1: Create a VM and install RHEL from the ISO. I place this VM on the LS-MGMT01 logical switch, which has Internet access. Therefore, when you clone the VM, the clones will be on the same network.
Screen Shot 2018-05-15 at 10.45.35 AM

Screen Shot 2018-05-15 at 10.56.58 AM

As the installation completes, set the root password.

Screen Shot 2018-05-15 at 10.57.58 AM

Step 2: Start the VM, so we can install all the dependencies.
Screen Shot 2018-05-15 at 11.24.39 AM
Configure the IP. I use nmtui.
Screen Shot 2018-05-15 at 12.02.12 PM

Ping the gateway and ping 8.8.8.8 to test whether you have access to the Internet.

Screen Shot 2018-05-15 at 12.04.47 PM

 

Step 3: Follow the installation guide to install all the prerequisites. Run the following commands.
Screen Shot 2018-05-15 at 12.19.08 PM

subscription-manager register --username=<Redhat Openshift Username>

Screen Shot 2018-05-15 at 12.35.04 PM
subscription-manager refresh
subscription-manager list --available --matches '*OpenShift*'

Screen Shot 2018-05-15 at 12.42.28 PM

subscription-manager attach --pool=<pool ID highlighted in the screenshot above>

Screen Shot 2018-05-15 at 12.44.32 PM

subscription-manager repos --disable="*"
yum repolist

Screen Shot 2018-05-15 at 12.50.24 PM
subscription-manager repos --enable="rhel-7-server-rpms" --enable="rhel-7-server-extras-rpms" --enable="rhel-7-server-ose-3.9-rpms" --enable="rhel-7-fast-datapath-rpms" --enable="rhel-7-server-ansible-2.4-rpms"

Screen Shot 2018-05-15 at 1.02.02 PM
yum install wget git net-tools bind-utils iptables-services bridge-utils bash-completion kexec-tools sos psacct nano httpd-tools  atomic-openshift-utils unzip

Screen Shot 2018-05-15 at 1.04.07 PM
yum update

Screen Shot 2018-05-15 at 1.05.51 PM
systemctl reboot

Step 4: Now we are going to upload the NSX Container zip file to this master copy of the VM.

The version I used is 2.1.3. Filename: nsx-container-2.1.3.8356796.zip.
Place this file in /root.

Screen Shot 2018-05-16 at 3.56.55 PM

 

[root@ocp-master ~]# unzip nsx-container-2.1.3.8356796.zip

Screen Shot 2018-05-16 at 4.00.17 PM

Step 5: Now we are ready to clone this VM into 3 copies.
I name my VMs as follows:

  • OCP-Master (IP: 10.11.1.10/24)
  • OCP-Node01 (IP: 10.11.1.11/24)
  • OCP-Node02  (IP: 10.11.1.12/24)

Screen Shot 2018-05-15 at 1.12.49 PM

 

Console to each VM and change the IP using the nmtui command. You can either use nmtui to deactivate/activate the connection, or use ifdown ens192 and ifup ens192.

 

 

Screen Shot 2018-05-15 at 1.31.51 PM

Screen Shot 2018-05-15 at 1.30.19 PM

Once you configure ocp-node02, you can try to ping the rest of the nodes.
Screen Shot 2018-05-15 at 1.32.39 PM

Next, we are going to put host entries on each of the nodes so that we can reach each node using hostnames.
Run nano /etc/hosts on each node.

On OCP-Master

Screen Shot 2018-05-15 at 4.17.50 PM

On OCP-Node01
Screen Shot 2018-05-15 at 4.18.43 PM

On OCP-Node02
Screen Shot 2018-05-15 at 4.19.15 PM
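For reference, the entries I add on every node look like this (matching the IPs used when cloning the VMs):
10.11.1.10   ocp-master
10.11.1.11   ocp-node01
10.11.1.12   ocp-node02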

Once done, you can test connectivity from the master to the rest of the nodes. Repeat the process from Node01 and Node02.
Screen Shot 2018-05-15 at 2.22.08 PM

Step 6: Resume the preparation of the hosts.

ssh-keygen

Screen Shot 2018-05-15 at 2.31.29 PM

for host in ocp-master \
    ocp-node01 \
    ocp-node02; \
    do ssh-copy-id -i ~/.ssh/id_rsa.pub $host; \
    done

Screen Shot 2018-05-15 at 2.34.21 PM

Once this is done, you can SSH from the master to node01 and node02 without being prompted for a password.
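A quick loop from the master confirms passwordless SSH is working to all the nodes:
for host in ocp-master ocp-node01 ocp-node02; do
  ssh root@$host hostname
done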

 

Screen Shot 2018-05-15 at 7.19.06 AM

Openshift with NSX-T Installation Part 2: NSX-T

 

 

In this article, we are now going to start configuring NSX-T so that it will be ready for us to install Openshift and consume the networking and security services provided by NSX-T. The result is that Openshift can deliver on-demand provisioning of all NSX-T components (Container Network Interface (CNI), NSX-T Container Plugin (NCP) POD, NSX Node Agent POD, etc.) automatically when a new Kubernetes (K8S) cluster is requested, all done with a single CLI or API call. In addition, Openshift also provides a unique capability through its integration with NSX-T to enable network micro-segmentation at the K8S namespace level, which allows Cloud/Platform Operators to manage access between applications and/or tenant users at a much finer-grained level than was possible before, which is really powerful!

I will not be walking through a step-by-step NSX-T installation; I will assume that you already have a basic NSX-T environment deployed, which includes a few ESXi hosts prepped as Transport Nodes and at least one NSX-T Controller and one NSX-T Edge. If you would like a detailed step-by-step walkthrough, you can refer to the NSX-T documentation here, or you can even leverage William Lam's Automated NSX-T Lab Deployment script to set up the base environment and modify it based on the steps in this article.

For my lab, I actually set up NSX-T manually using my previous NSX-T 2.1 install documentation. You can follow that guide up until the NSX Edge deployment, because in NSX-T 2.1 you can use the NSX Manager to deploy the Edges.

Step 1 – Verify that you have prepped the ESXi hosts which will be used to deploy the Openshift workload VMs. In my example below, I only have one host that is prepared as an NSX-T Transport Node. This also means the Openshift VMs need to be pinned to this host; you can use Affinity Rules to do that.

 

Screen Shot 2018-05-15 at 6.38.22 AM

Step 2 – Create a new IP Pool which will be used to allocate Virtual IPs for the exposed Openshift services (e.g. load balancers for application deployments). To do so, navigate to Inventory->Groups->IP Pool and provide the following:

  • Name: Pool-NAT
  • IP Range: 10.11.2.1 – 10.11.2.254
  • CIDR: 10.11.2.0/24

Screen Shot 2018-05-15 at 7.05.34 AM
Step 3 – Create a new IP Block which will be used by Openshift on demand, carved up into smaller /24 networks that are assigned to each K8S namespace. This IP block should be sized sufficiently to ensure you do not run out of addresses; currently it is recommended to use a /16 network (non-routable). I’m using a /21 network. To do so, navigate to DDI->IPAM and provide the following:

  • Name: IPBlock-NONAT
  • CIDR: 10.11.8.0/21

Screen Shot 2018-05-15 at 7.04.47 AM
Step 4 – Create a new T0 Router which will be used to communicate with your external physical network. Make sure you have created an Edge Cluster (it can contain a single Edge) if you have not already. The HA mode must be Active/Standby, as NAT is used by the NCP service within the K8S Management POD. To do so, navigate to Routing->Routers and provide the following:

  • Name: T0
  • Edge Cluster: EdgeCluster01
  • High Availability Mode: Active-Standby
  • Preferred Member: TN-SUN03-EDGEVM01

Screen Shot 2018-05-17 at 10.34.48 AM
Step 5 – Create a static route on the T0 which will enable all traffic from the Openshift Management PODs to communicate outbound to our Management components. In my example, 10.197.1.0/24 is the intermediate network, and 10.197.1.1 is the gateway (my virtual router, pfSense) used to route traffic out of the T0. To do so, click on the T0 Router you just created and navigate to Routing->Static Routes and provide the following:

  • Network: 0.0.0.0/0
  • Next Hop: 10.197.1.1

Screen Shot 2018-05-17 at 10.37.37 AM

 

For my lab, I have a few other internal networks that I would like to route into the NSX-T networks. Therefore I added another static route.

  • Network: 192.168.0.0/16
  • Next Hop: 10.197.1.2

Screen Shot 2018-05-17 at 10.39.36 AM
Step 6 – Next, we need to create three Logical Switches: one that will be used for the T0 uplink, one for the Openshift Management Cluster which is used to run the Openshift Management PODs, and the last one for the internal network used for NCP communication. To do so, navigate to Switching->Switches and add the following:

  • Name: LS-EdgeUplink01
  • Transport Zone: TZ-VLAN
  • VLAN: 0

Screen Shot 2018-05-17 at 10.45.55 AM

  • Name: LS-MGMT01
  • Transport Zone: TZ-Overlay
  • VLAN: NIL
  • Name: LS-VIFs
  • Transport Zone: TZ-Overlay
  • VLAN: NIL

Screen Shot 2018-05-17 at 10.42.38 AM
After this step, you should have three Logical Switches as shown in the screenshot below. The LS-EdgeUplink01 should be on TZ-VLAN, and the LS-MGMT01 and LS-VIFs should be on TZ-Overlay.

Screen Shot 2018-05-17 at 10.49.27 AM
Step 7 – Now we need to configure the Uplink Router Port and assign it an address from the intermediate network so that we can route from the T0 to our physical or virtual router. To do so, navigate to Routing and then click on the T0 Router we had created earlier and select Configuration->Router Ports and provide the following:

  • Name: RP-Uplink01
  • Type: Uplink
  • Transport Node: TN-SUN03-EDGEVM01
  • Logical Switch: LS-EdgeUplink01
  • Logical Switch Port: Uplink-1-Port
  • IP Address/mask: 10.197.1.5/24

Screen Shot 2018-05-17 at 10.51.59 AM

  • Name: RP-Uplink02
  • Type: Uplink
  • Transport Node: TN-SUN03-EDGEVM02
  • Logical Switch: LS-EdgeUplink02
  • Logical Switch Port: Uplink-2-Port
  • IP Address/mask: 10.197.1.6/24

Screen Shot 2018-05-17 at 10.55.05 AM

Step 8 – Because I have two Edges, I need to create an HA VIP.

  • VIP Address: 10.197.1.4/24
  • Status: Enable
  • Uplink Ports: RP-Uplink01, RP-Uplink02

Screen Shot 2018-05-17 at 10.57.12 AM
Step 9 – Create a new T1 Router which will be used for the K8S Management Cluster POD. To do so, navigate to Routing->Routers and provide the following:

  • Name: T1-Mgmt
  • Tier-0 Router: T0
  • Failover Mode: Preemptive
  • Edge Cluster: <No need to select>

Screen Shot 2018-05-17 at 10.59.15 AM
Step 10 – Configure the Downlink Router Port for the Openshift Management Cluster, which is where you will define the network that NSX-T will use for these VMs. In my example, I use 10.11.1.0/24. To do so, click on the T1 Router that you had just created and navigate to Configuration->Router Ports and provide the following:

  • Name: RP-MGMT
  • Logical Switch: LS-MGMT01
  • Logical Switch Port: Attach New Switch Port
  • IP Address/mask: 10.11.1.1/24

Screen Shot 2018-05-17 at 11.03.15 AM
Step 11 – To ensure the Openshift network will be accessible from the outside, we need to advertise these routes. To do so, click on the T1 Router you had created earlier and navigate to Routing->Route Advertisement and enable the following:

  • Status: Enabled
  • Advertise All NSX Connected Routes: Yes

Screen Shot 2018-05-17 at 11.05.09 AM
Step 12 – This step may be optional depending on how you have configured your physical networking; you may instead need to use BGP rather than static routes to connect your physical/virtual network to NSX-T’s T0 Router. In my environment, I am using a virtual router (pfSense), and the easiest way to enable connectivity to Openshift from my management network, as well as from the networks that my vCenter Server, ESXi hosts, NSX-T VMs and Management VMs are hosted on, is to set up a few static routes. We need routes to reach both our Openshift Management Cluster network (10.11.1.0/24) and the Openshift Projects network (10.11.8.0/21); I can summarise these into one 10.11.0.0/16 route. For all traffic destined to either of these networks, we want it forwarded to our T0’s HA VIP uplink address, which if you recall from Step 8 is 10.197.1.4. Depending on your physical or virtual router solution, you will need to follow your product documentation to set up either BGP or static routes.

Screen Shot 2018-05-17 at 11.09.05 AM
Step 13 – At this point, we have completed all the NSX-T configuration and we can run through a few validation checks to ensure that when we go and deploy the Openshift Management VMs (one Master and two Nodes), we will not run into networking issues. This is a very critical step, and if you are not successful here, you should go back and troubleshoot prior to moving on.

To verify overlay network connectivity between the ESXi hosts and the Edge VMs, you should be able to ping using the VXLAN netstack between all ESXi hosts as well as to the Edge VMs’ overlay interfaces. Below is a table of the IPs that were automatically allocated from the VTEP IP Pool. You can discover these by logging onto the ESXi host, but they should be sequential from the starting range of your defined IP Pool. Also make sure you have your physical and virtual switches configured to use MTU 1600 for overlay traffic.

Host     IP Address
esxi     10.140.1.11
edge01   10.140.1.12
edge02   10.140.1.13

Screen Shot 2018-05-17 at 11.27.54 AM

You can SSH to each ESXi host and run the following:

vmkping ++netstack=vxlan [IP]

or you can also do this remotely via ESXCLI by running the following:

esxcli network diag ping --netstack=vxlan --host=[IP]

Or you can also SSH to the Edge, run vrf 0, and ping the overlay networks.

Screen Shot 2018-05-17 at 11.32.00 AM
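It is also worth validating the MTU 1600 requirement end to end with a don't-fragment ping and a large payload (a sketch; 1572 bytes leaves room for the IP and ICMP headers within a 1600-byte MTU):
for tep in 10.140.1.11 10.140.1.12 10.140.1.13; do
  vmkping ++netstack=vxlan -d -s 1572 $tep
done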

To verify connectivity to NSX-T Networks as well as routing between physical/virtual networking to NSX-T, use your Jumphost and you should be able to ping the following addresses from that system:

  • 10.11.1.1 (Downlink Router Port to our Openshift Management Cluster network)
  • 10.197.1.4 (Uplink Router Port)

In the next blog post, we will start our Openshift deployment starting with RHEL preparation.

 

 

 

Screen Shot 2018-05-15 at 7.19.06 AM

Openshift with NSX-T Installation Part 1: Overview

The past few days I have been involved in an Openshift + NSX-T POC. In fact, two POCs, because we set it up once at the customer lab and at the same time I also repeated the same steps in my own lab so that I could learn as well. Before I proceed further, I would like to give a shout out to Yasen, the NSBU TPM who guided me on the installation of Openshift and the integration with NSX-T. Without his guidance and his help with the POC, this article would not have been possible. As promised, I am writing a blog series on the installation because I believe many in the community and my peers would need help setting this up.

I’m going to break up the articles into various parts. *Inspiration from William Lam, who did the PKS with NSX-T series.

Openshift with NSX-T Installation Part 1: Overview
Openshift with NSX-T Installation Part 2: NSX-T
Openshift with NSX-T Installation Part 3: RHEL Preparation
Openshift with NSX-T Installation Part 4: Openshift Installation
Openshift with NSX-T Installation Part 5: NCP and CNI Integration
Openshift with NSX-T Installation Part 6: Demo App

Components:

  • Compute – vSphere 6.5+ (vCenter Server + ESXi) and Enterprise Plus license
  • Storage – VSAN or other vSphere Datastores
  • Networking & Security – NSX-T 2.1
  • Openshift 3.9 Enterprise
  • RHEL 7.5

Software Download:

Here is the complete list of software that needs to be downloaded to deploy Openshift and NSX-T.

Software Download URL
NSX-T  https://my.vmware.com/web/vmware/details?productId=673&downloadGroup=NSX-T-210
nsx-unified-appliance-2.1.0.0.0.7395503.ova
nsx-edge-2.1.0.0.0.7395502.ova
nsx-controller-2.1.0.0.0.7395493.ova
nsx-container-2.1.3.8356796.zip
RHEL https://access.redhat.com/downloads/
The version I used: rhel-server-7.5-x86_64-dvd.iso

Screen Shot 2018-05-16 at 3.01.02 PM

Lab Environment:

For my lab, I will be using two physical hosts: one host for the NSX-T management components and the other host to be the NSX-T Transport Node, which will house the Openshift VMs (one Master and two Node VMs). You can see there are 3 physical nodes in my cluster because I am using vSAN as my shared storage. My lab is a non-nested setup, but at the customer POC we set up a nested environment using William Lam’s NSX-T Auto-Deploy Script and that works as well.

Screen Shot 2018-05-15 at 6.34.20 AM

The good part of NSX-T is that you can select which hosts you would like to prepare as transport nodes instead of preparing the whole cluster. You could also add all the hosts in the whole cluster to be prepared as transport nodes with the use of the Compute Manager. In this case, I only selected the 192.168.1.202 host as the transport node. The other two nodes are not prepared as transport nodes.

Screen Shot 2018-05-15 at 6.37.41 AM

Compute/Storage

VM                           CPU  MEM   DISK
NSX-T Manager                4    16GB  140GB
NSX-T Controller x 3         4    16GB  120GB
NSX-Edge x 2 (Medium Size)   4    8GB   120GB
Openshift Master             4    16GB  40GB
Openshift Node01             4    16GB  40GB
Openshift Node02             4    16GB  40GB

Networking

Defined within your physical or virtual network infrastructure

  • Management Network (10.136.1.0/24) – This is where the management VMs will reside such as the NSX-T Manager and Controllers.
  • Transit Network (10.197.1.0/24) – This network is required to route between our management network (via the NSX-T T0 Router) and our Openshift Cluster Management network as well as the Openshift Workload networks. You only need two IPs: one for the gateway, which should exist by default, and one for the uplink, which will reside on the T0. Static routes will be used to reach the NSX-T networks. In a production or non-lab environment, BGP would be used to peer the T0 to your physical network.
  • Internet Network – an IP address that is able to access the Internet. For my lab, I set up a pfSense router as the next hop for the T0 router, and the pfSense router will basically NAT the networks in NSX-T, such as the Openshift Cluster Management network. I basically route the whole 10.11.0.0/16 subnet to the NSX-T T0, which also means 10.11.*.* will have access to the Internet. This allows Internet access for the Openshift VMs to “yum install” and download the relevant packages. ***Internet access is very important, especially for the Openshift VMs; without it you will need to spend a lot of time on troubleshooting.

Defined within NSX-T

  • Openshift Cluster Management Network (10.11.1.0/24) – This network is used for the Openshift management POD which includes services like the Node Agent for monitoring liveness of the cluster and NSX-T Container Plugin (NCP) to name a few
  • NAT IP Pool (10.11.2.0/24) – This network pool will provide addresses for when load-balancing services are required as part of an application deployment within Openshift. One IP address per project.
  • NONAT IP Block (10.11.8.0/21) – This network is used when an application is requested to deploy onto a new Openshift namespace. A /24 network is taken from this IP Block and allocated to a specific Openshift project. This network will allow for 8 projects.
  • Pod Networking IP Block (10.12.0.0/16) – This network pool will not be required to be routable in the external network.

Here is a logical diagram of my planned Openshift + NSX-T deployment:

Screen Shot 2018-05-17 at 10.01.19 AM

Screen Shot 2018-02-12 at 9.25.37 AM

PKS Pivotal Container Service 1.0 GA

PKS 1.0 went GA on 8 Feb 2018! It’s kind of weird that there is not much announcement on the web.

Screen Shot 2018-02-12 at 9.16.40 AM

 

I am super excited about PKS as it has native integration with NSX-T!

Features

  • Create, resize, delete, list, and show clusters through the PKS CLI
  • Native support for NSX-T and Flannel
  • Easily obtain kubeconfigs to use each cluster
  • Use kubectl to view the Kubernetes dashboard
  • Define plans that pre-configure VM size, authentication, default number of workers, and addons when creating Kubernetes clusters
  • User/Admin configurations for access to PKS API
  • Centralized logging through syslog

In the following blog posts, I’m going to start blogging about my experience installing PKS 1.0 on my existing NSX-T 2.1 setup. I think it’s going to be fun.

Screen Shot 2017-12-24 at 3.47.57 PM

NSX-T 2.1 GA Installation using ovftool

Update on 23 Dec 2017
NSX-T 2.1 went GA. I thought since I’m going to re-do the whole process, I might as well take new screenshots as well.

You might be wondering why I would want to use ovftool to install the NSX-T appliances. This is because my management host is not managed by a vCenter, and the deployment failed when using the vSphere Client.

Screen Shot 2017-12-17 at 4.42.29 PM

You can see from the screenshot above, I only have hosts for the EdgeComp Cluster. As I do not have additional hosts for the management cluster, I will be using an existing management host that is standalone.

While reading the NSX-T Installation Guide, I realised they mention an alternative method, i.e. using the OVF Tool to install the NSX Manager. I reckon this would be useful for automated installs; the other reason is that the NSX-T architecture is moving away from the dependency on vCenter. NSX-T could be deployed in a 100% non-vSphere environment, for example KVM.

Preparing for Installation

These are the files I will be using for the NSX-T Installation.
1) NSX Manager – nsx-unified-appliance-2.1.0.0.0.7395503.ova
2) NSX Controllers – nsx-controller-2.1.0.0.0.7395493.ova
3) NSX Edges – nsx-edge-2.1.0.0.0.7395503.ova

Installing NSX-T Manager using ovftool

Following the guide, I had to modify the ovftool command. This is the command I used, and I put it into a batch file. Maybe later I will incorporate it into the PowerShell script I used to deploy the vSphere part.

Screen Shot 2017-12-24 at 2.21.54 PM

You can find the script here.
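For reference, the command in the batch file is roughly along these lines. Treat it as a sketch: the --prop names are taken from the NSX-T install guide, so double-check them against your OVA, and substitute your own datastore, portgroup, passwords and vCenter inventory path:
ovftool --name=nsx-manager --X:injectOvfEnv --allowExtraConfig --acceptAllEulas ^
  --datastore=<datastore> --network=<management portgroup> --diskMode=thin --powerOn --noSSLVerify ^
  --prop:nsx_role=nsx-manager --prop:nsx_hostname=nsx-manager ^
  --prop:nsx_ip_0=10.136.1.102 --prop:nsx_netmask_0=255.255.255.0 --prop:nsx_gateway_0=<gateway> ^
  --prop:nsx_dns1_0=<dns server> --prop:nsx_ntp_0=<ntp server> ^
  --prop:nsx_isSSHEnabled=True --prop:nsx_allowSSHRootLogin=True ^
  --prop:nsx_passwd_0=<admin password> --prop:nsx_cli_passwd_0=<cli password> ^
  nsx-unified-appliance-2.1.0.0.0.7395503.ova ^
  vi://administrator@vsphere.local:<password>@<vcenter>/<datacenter>/host/<EdgeComp cluster>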

The ESXi host I’m using is 6.0U2 and it does not take in the OVF properties. So I had no choice but to deploy via the vCenter instead, onto the EdgeComp hosts.

Screen Shot 2017-12-17 at 9.05.44 PM

Finally able to login to the NSX Manager console.

Screen Shot 2017-12-17 at 9.06.50 PM

Trying to login to the web console of the NSX Manager.

Screen Shot 2017-12-17 at 9.10.36 PM

Awesome! Able to login and dashboard is up!

Screen Shot 2017-12-17 at 9.11.33 PM

Alright, so next up are the NSX-T Controllers.

Screen Shot 2017-12-17 at 9.23.16 PM

Screen Shot 2017-12-17 at 9.29.42 PM

Configuring the Controller Cluster

Retrieve the NSX Manager API thumbprint

  1. Log onto the NSX Manager via SSH using the admin credentials.
  2. Use “get certificate api thumbprint” to retrieve the SSL certificate thumbprint. Copy the output to use in commands later
    Screen Shot 2017-12-17 at 9.31.42 PM

Join the NSX Controllers to the NSX Manager

  1. Log onto each of the NSX Controllers via SSH using the admin credentials.
  2. Use “join management-plane <NSX Manager> username admin thumbprint <API Thumbprint>”
    Screen Shot 2017-12-17 at 9.31.42 PM
  3. Enter the admin password when prompted
  4. Validate the controller has joined the Manager with “get managers” – you should see a status of “Connected”

    join management-plane 10.136.1.102 username admin thumbprint f24e53ef5c440d40354c2e722ed456def0d0ceed2459fad85803ad732ab8e82b

    Screen Shot 2017-12-17 at 9.51.04 PM

  5. Repeat this procedure for all three controllers

Screen Shot 2017-12-17 at 10.21.13 PM

Screen Shot 2017-12-17 at 10.22.13 PM

Initialise the Controller Cluster

To configure the Controller cluster we need to log onto any of the Controllers and initialise the cluster. This can be any one of the Controllers, but it will make the controller the master node in the cluster. Initialising the cluster requires a shared secret to be used on each node.

  1. Log onto the Controller node via SSH using the admin credentials.
  2. Use “set control-cluster security-model shared-secret” to configure the shared secret
  3. When the secret is configured, use “initialize control-cluster” to promote this node:

Screen Shot 2017-12-17 at 10.25.18 PM

Validate the status of the node using the “get control-cluster status verbose” command. You can also check the status in the NSX Manager web interface. The command shows that the Controller is the master, in majority and can connect to the Zookeeper Server (a distributed configuration service)

Screen Shot 2017-12-17 at 10.27.10 PM

Notice in the web interface that the node has a Cluster Status of “Up”

Screen Shot 2017-12-17 at 10.28.39 PM

Preparing ESXi Hosts

With ESXi hosts, you can prepare them for NSX by using the “Compute Manager” construct to add a vCenter Server and then prepare the hosts automatically, or you can manually add the hosts. You can refer to Sam’s blog posts, as he prepares the hosts manually for his learning exercise. Since my purpose is to quickly get the deployment up for PKS/PCF, I’m going to use the automatic method using the “Compute Manager”.

1. Login to NSX-T Manager.
2. Select Compute Managers.
3. Click on Add.

Screen Shot 2017-12-18 at 2.03.50 AM

4. Put in the details for the vCenter.

Screen Shot 2017-12-18 at 2.05.55 AM

Success!
Screen Shot 2017-12-18 at 2.07.11 AM

5. Go into Nodes under Fabric.

6. Change the Managed by from Standalone to the name of the compute manager you just specified.
Screen Shot 2017-12-18 at 2.09.44 AM

7. Select the Cluster and click on Configure Cluster. Enable “Automatically Install NSX” and leave “Automatically Create Transport Node” as Disabled, as I have not created the Transport Zones yet.
Screen Shot 2017-12-18 at 2.12.07 AM

You will see NSX Install In Progress
Screen Shot 2017-12-18 at 2.13.43 AM

Error! Host certificate not updated.
Screen Shot 2017-12-18 at 2.16.34 AM

After some troubleshooting, I realised the host has multiple IP addresses. So what I did was to remove all of them except for the management IP address, and the host preparation went on smoothly.

Screen Shot 2017-12-23 at 3.40.23 PMScreen Shot 2017-12-23 at 3.40.11 PM

Screen Shot 2017-12-23 at 3.39.39 PM

Host preparation was successful. As I was in the middle of writing this blog post, NSX-T 2.1 went GA. Although the build numbers are pretty similar, I decided I would reinstall with the GA version. So much for the host preparation; I will uninstall and re-do everything again.

Screen Shot 2017-12-23 at 10.16.29 PM

 

References
1. Sam NSX-T Installation Blog Posts
2. VMware NSX-T Installation Docs

Screen Shot 2017-12-24 at 3.47.57 PM

NSX-T 2.1 Installation using ovftool (GA ver)

Update on 23 Dec 2017
NSX-T 2.1 went GA. I was using a pre-GA version before. Since I’m going to reinstall using the GA version, I thought I might as well take the screenshots again.

You might be wondering why I would want to use ovftool to install the NSX-T appliances. This is because my management host is not managed by a vCenter, and the deployment failed when using the vSphere Client.

Screen Shot 2017-12-17 at 4.42.29 PM

You can see from the screenshot above, I only have hosts for the EdgeComp Cluster. As I do not have additional hosts for the management cluster, I will be using an existing management host that is standalone.

While reading the NSX-T Installation Guide, I realised they mention an alternative method, i.e. using the OVF Tool to install the NSX Manager. I reckon this would be useful for automated installs; the other reason is that the NSX-T architecture is moving away from the dependency on vCenter. NSX-T could be deployed in a 100% non-vSphere environment, for example KVM.

Preparing for Installation

These are the files I will be using for the NSX-T Installation.
1) NSX Manager – nsx-unified-appliance-2.1.0.0.0.7395503.ova
2) NSX Controllers – nsx-controller-2.1.0.0.0.7395493.ova
3) NSX Edges – nsx-edge-2.1.0.0.0.7395503.ova

Installing NSX-T Manager using ovftool

Following the guide, I had to modify the ovftool command. This is the command I used, and I put it into a batch file. Maybe later I will incorporate it into the PowerShell script I used to deploy the vSphere part.

Screen Shot 2017-12-17 at 7.53.54 PM

You can find the batch script here.

The ESXi host I’m using is 6.0U2 and it does not take in the OVF properties. So I had no choice but to deploy via the vCenter instead, onto the EdgeComp hosts.

Screen Shot 2017-12-24 at 2.21.54 PM

Finally able to login to the NSX Manager console.

Screen Shot 2017-12-24 at 2.28.19 PM

Trying to login to the web console of the NSX Manager

Screen Shot 2017-12-24 at 3.47.57 PM

Awesome! Able to login and dashboard is up!

Screen Shot 2017-12-24 at 3.49.29 PM

The dashboard. Nothing to report at the moment.

Screen Shot 2017-12-24 at 3.49.29 PM

Alright, so next up are the NSX-T Controllers.

Screen Shot 2017-12-24 at 3.54.16 PM

NSX-T Controllers booted up.
Screen Shot 2017-12-24 at 3.55.44 PM

 

Configuring the Controller Cluster

Retrieve the NSX Manager API thumbprint

  1. Log onto the NSX Manager via SSH using the admin credentials.
  2. Use “get certificate api thumbprint” to retrieve the SSL certificate thumbprint. Copy the output to use in commands later
    Screen Shot 2017-12-24 at 11.39.44 PM

Join the NSX Controllers to the NSX Manager

  1. Log onto each of the NSX Controllers via SSH using the admin credentials.
  2. Use “join management-plane <NSX Manager> username admin thumbprint <API Thumbprint>”
    Screen Shot 2017-12-24 at 11.41.10 PM
  3. Enter the admin password when prompted
  4. Validate the controller has joined the Manager with “get managers” – you should see a status of “Connected”

    join management-plane 10.136.1.102 username admin thumbprint 77d62c521b6c1477f709b67425f5e6e84bf6f1117bdca0439233db7921b67a28

    Screen Shot 2017-12-24 at 11.45.57 PM

  5. Repeat this procedure for all three controllers. *For my lab, I will deploy only one controller.
    Screen Shot 2017-12-24 at 11.49.03 PM
    Screen Shot 2017-12-24 at 11.49.19 PM
    Screen Shot 2017-12-24 at 11.50.10 PM

Initialise the Controller Cluster

To configure the Controller cluster we need to log onto any of the Controllers and initialise the cluster. This can be any one of the Controllers, but it will make the controller the master node in the cluster. Initialising the cluster requires a shared secret to be used on each node.

  1. Log onto the Controller node via SSH using the admin credentials.
  2. Use “set control-cluster security-model shared-secret” to configure the shared secret
  3. When the secret is configured, use “initialize control-cluster” to promote this node:

Screen Shot 2017-12-24 at 11.52.30 PM

Validate the status of the node using the “get control-cluster status verbose” command. You can also check the status in the NSX Manager web interface. The command shows that the Controller is the master, in majority and can connect to the Zookeeper Server (a distributed configuration service)
Screen Shot 2017-12-24 at 11.53.16 PM

Notice in the web interface that the node has a Cluster Status of “Up”
Screen Shot 2017-12-24 at 11.53.58 PM

Preparing ESXi Hosts

With ESXi hosts, you can prepare them for NSX by using the “Compute Manager” construct to add a vCenter Server and then prepare the hosts automatically, or you can manually add the hosts. You can refer to Sam’s blog posts, as he prepares the hosts manually for his learning exercise. Since my purpose is to quickly get the deployment up for PKS/PCF, I’m going to use the automatic method using the “Compute Manager”.

1. Login to NSX-T Manager.
2. Select Compute Managers.
3. Click on Add.
Screen Shot 2017-12-25 at 12.08.39 AM

4. Put in the details for the vCenter.
Screen Shot 2017-12-25 at 12.09.56 AM

Success!
Screen Shot 2017-12-25 at 12.12.11 AM

5. Go into Nodes under Fabric.

6. Change the Managed by from Standalone to the name of the compute manager you just specified.
Screen Shot 2017-12-25 at 12.17.13 AM

7. If you notice above, there are multiple IP addresses listed, and this will pose problems for the installation. Click on each host and remove all the IP addresses except the management IP address of the host.

8. Select the hosts on which you would like to install NSX.
Screen Shot 2017-12-25 at 12.15.48 AM

9. Select the Cluster and click on Configure Cluster. Enable “Automatically Install NSX” and leave “Automatically Create Transport Node” as Disabled, as I have not created the Transport Zones yet.

 

You will see NSX Install In Progress
Screen Shot 2017-12-18 at 2.13.43 AM

Error! Host certificate not updated.
Screen Shot 2017-12-18 at 2.16.34 AM

After some troubleshooting, I realised the host has multiple IP addresses. So what I did was to remove all of them except for the management IP address, and the host preparation went on smoothly.

Screen Shot 2017-12-23 at 3.40.23 PM

 

Screen Shot 2017-12-28 at 4.26.22 PM

 

Yeah! Host preparation is successful!
Screen Shot 2017-12-28 at 3.48.24 PM

Deploying a VM Edge Node

Following the instructions from Install NSX Edge on ESXi Using the Command-Line OVF Tool, we can deploy NSX Edges using ovftool.

Screen Shot 2017-12-28 at 5.11.56 PM

Once the OVF deployment has completed, power on the VM Edge Node.

Join NSX Edges with the management plane

If you enabled SSH (as I did), you can connect to the newly deployed Edge on its management IP address. If not, you should be able to use the console to configure it. Once on the console/SSH, authenticate as the admin user with the password you specified at deploy time.

Screen Shot 2017-12-28 at 5.25.44 PM

Validate the management IP address using “get interface eth0”
Screen Shot 2017-12-28 at 5.15.26 PM

Retrieve the Manager API thumbprint using “get certificate api thumbprint” from the NSX Manager console/SSH, or using the web interface
Screen Shot 2017-12-28 at 5.28.58 PM

Join the VM Edge Node to the management plane using the following command:

join management-plane <NSX Manager> username <NSX Manager admin> thumbprint <NSX-Manager’s-thumbprint>

join management-plane 10.136.1.102 username admin thumbprint 77d62c521b6c1477f709b67425f5e6e84bf6f1117bdca0439233db7921b67a28

 

You will be prompted for the password of the NSX admin user and the node will be registered
Screen Shot 2017-12-28 at 5.19.03 PM

You can validate the Edge has joined the Management plane using the command “get managers”.
Screen Shot 2017-12-28 at 5.19.30 PM

Below you can see that in the NSX Manager console under Fabric > Nodes > Edges I have added two Edge VMs, the deployment is up and connected to the manager, but the Transport Node is not configured yet – that will be the next post!

Screen Shot 2018-02-13 at 10.21.17 PM

Create Transport Zones & Transport Nodes

Create an IP Pool for the Tunnel Endpoints (TEPs)
Both the hosts and edges will require an IP address for the GENEVE tunnel endpoints, so in order to address these I will create an IP Pool.

Click on Groups, IP Pools, Add New IP Pool.
Name: Pool-TEP
IP Ranges: 10.140.1.11 – 10.140.1.250
Gateway: 10.140.1.1
CIDR: 10.140.1.0/24
Screen Shot 2018-02-14 at 9.05.47 AM

 

This shows that the IP Pool is added successfully.
Screen Shot 2018-02-14 at 9.06.37 AM

Create an Uplink Profile for VM Edge Nodes

Click on Fabric > Profiles > Uplink Profiles and ADD.
Name: uplink-profile-nsx-edge-vm
Teaming Policy: Failover Order
Active Uplinks: uplink-1
Transport VLAN: 0 (My VDS portgroup is already tagged with a VLAN, therefore there is no need to tag here. If you are using a trunk portgroup, then you have to specify the VLAN ID here.)
MTU: 1600 (Default)
Screen Shot 2018-02-14 at 9.15.17 AM

Creating the Transport Zones

In my setup, I will be creating two transport zones: one for VLAN and the second for Overlay.
Click on Transport Zones and ADD.
Name: TZ-VLAN
N-VDS Name: N-VDS-STD-VLAN
N-VDS Mode: Standard
Traffic Type: VLAN
Screen Shot 2018-02-14 at 9.20.51 AM

Click on Transport Zones and ADD.
Name: TZ-OVERLAY
N-VDS Name: N-VDS-STD-OVERLAY
N-VDS Mode: Standard
Traffic Type: Overlay

Screen Shot 2018-02-14 at 9.22.58 AM

 

Once done, you should be able to see similar results as per this screenshot.
Screen Shot 2018-02-14 at 9.23.26 AM


Creating Host Transport Nodes

A Transport Node participates in the GENEVE overlay network as well as VLAN networking; however, for my configuration the Host Transport Nodes will only participate in the overlay.

Click on Fabric, Nodes, Transport Nodes and ADD.
Name: TN-SUN03-ESX153
Node: sun03-esxi153.acepod.com (10.136.1.153) 
Transport Zones: TZ-OVERLAY

Screen Shot 2018-02-14 at 9.36.07 AM

N-VDS Configuration.

Screen Shot 2018-02-14 at 9.42.54 AM

You have to log in to vCenter to check which vmnic is available and not connected, as that will be used for the host switch for overlay networking. For my setup, vmnic3 is not used by any vSwitch and therefore I will be using that for my transport node uplink.

Screen Shot 2018-02-14 at 9.40.47 AM
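Alternatively, you can check straight from the host over SSH with standard esxcli commands:
esxcli network nic list                                  # all physical NICs on the host
esxcli network vswitch standard list | grep -i uplink    # vmnics already claimed by standard vSwitches
esxcli network vswitch dvs vmware list | grep -i uplink  # vmnics claimed by a distributed switch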

Adding the 2nd host, or Nth host (depending on how many hosts you want to add as Transport Nodes):
Name: TN-SUN03-ESX154
Node: sun03-esxi154.acepod.com (10.136.1.154) 
Transport Zones: TZ-OVERLAY
Screen Shot 2018-02-14 at 10.04.52 AM

Screen Shot 2018-02-14 at 10.05.31 AM


Screen Shot 2018-02-14 at 10.10.04 AM

Creating Edge Transport Nodes

A Transport Node participates in the GENEVE overlay network as well as the VLAN uplinks and provides transport between the two. The previously configured VM Edge Nodes will be configured as Edge Transport Nodes, using the Uplink Profile and Transport Zones configured above.

Adding the NSX Edge VM as a Transport Node. The Edge Node will participate in both the VLAN and Overlay Transport Zones.
Name: TN-SUN03-EDGEVM01
Node: sun03-edgevm01 (10.136.1.111)
Transport Zones: TZ-OVERLAY, TZ-VLAN

Screen Shot 2018-02-14 at 10.12.42 AM

 

Screen Shot 2018-02-14 at 10.14.37 AM

Screen Shot 2018-02-14 at 10.14.46 AM

 

 

Screen Shot 2018-02-14 at 10.27.54 AM

Click on ADD N-VDS

Screen Shot 2018-02-14 at 10.29.51 AM

 

Note: I have fp-eth2, which is supposed to be another uplink; however, I only added one N-VDS previously. So if you want the 2nd VLAN uplink, you will need to create another N-VDS switch.

 

 

Screen Shot 2018-02-14 at 10.32.26 AM

Do the same for the 2nd Edge-VM Node.

Misc: About AES-NI
Screen Shot 2017-12-28 at 5.25.17 PM

Previously I had some problems with my Edge VMs complaining that the physical host does not have AES-NI support. I checked that my Intel CPU does support AES-NI; however, after checking the BIOS, I found the AES-NI feature was disabled. After enabling it, I did not receive this error anymore.

WhatsApp Image 2018-02-13 at 2.53.50 PM

References
1. Sam NSX-T Installation Blog Posts
2. VMware NSX-T Installation Docs

Screen Shot 2017-12-17 at 2.43.30 PM

NSX-T 2.1 for PKS and PCF 2.0 Lab

Screen Shot 2017-12-17 at 2.43.30 PM
From the VMware PKS architecture slide from VMworld 2017 US, you can see that NSX provides the network & security services for BOSH. To be more precise, this is going to be NSX-T. In the following few posts, I will cover setting up the vSphere lab, preparing the hosts for NSX-T, and getting ready for PKS.

 

Introducing NSX-T 2.1 with Pivotal Integration

Screen Shot 2017-12-17 at 4.00.10 PM

https://blogs.vmware.com/networkvirtualization/2017/12/nsx-t-2-1.html/

As you might have guessed by now, the version of NSX-T I will be using in the lab is 2.1, which supports PCF 2.0 and PKS; specifically, I want to understand the CNI plugin.

Stay tuned. In the next few posts, I will cover the installation of vSphere, NSX-T, PCF and PKS.