Category Archives: Openshift


Openshift with NSX-T Installation Part 5: NCP and CNI Integration

In this article, we will integrate the NSX-T NCP and CNI components with Openshift.

Step 01: Tag the logical switch ports connected to the OCP-Master, OCP-Node01 and OCP-Node02 VMs.

NSX-T Manager -> Switching -> LS-VIFs -> Related -> Ports -> Click on the respective logical ports -> Actions -> Manage Tags

 

Screen Shot 2018-05-18 at 1.51.40 PM

* You can highlight the port to see which VM is connected.

Screen Shot 2018-05-18 at 2.04.01 PM

 

  • Scope: ncp/node_name
  • Tag: ocp-master
  • Scope: ncp/cluster
  • Tag: ocp-cl1

Screen Shot 2018-05-18 at 2.09.53 PM

  • Scope: ncp/node_name
  • Tag: ocp-node01
  • Scope: ncp/cluster
  • Tag: ocp-cl1

Screen Shot 2018-05-18 at 2.05.24 PM

  • Scope: ncp/node_name
  • Tag: ocp-node02
  • Scope: ncp/cluster
  • Tag: ocp-cl1

 

 

Screen Shot 2018-05-18 at 2.08.44 PM
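If you prefer to script the tagging instead of clicking through the UI, the same tags can also be applied through the NSX-T Manager API. This is only a rough sketch, assuming the NSX Manager IP and admin credentials used later in this post; the logical port UUID is a placeholder you would look up first (for example via GET /api/v1/logical-ports).

NSX_MGR=10.136.1.102
PORT_ID=<logical-port-uuid>
# Pull the current port definition (the PUT below needs the full object, including its _revision)
curl -k -u admin:'VMware1!' https://$NSX_MGR/api/v1/logical-ports/$PORT_ID -o port.json
# Edit port.json and add a "tags" list such as:
#   "tags": [ {"scope": "ncp/node_name", "tag": "ocp-master"},
#             {"scope": "ncp/cluster",   "tag": "ocp-cl1"} ]
# then push the updated object back:
curl -k -u admin:'VMware1!' -X PUT -H 'Content-Type: application/json' -d @port.json https://$NSX_MGR/api/v1/logical-ports/$PORT_ID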

Step 02: On the master node, let's clone the NSX-T integration for Openshift repository. The steps below use Yasen's GitHub repository, which is a fork of the official nsx-integration-for-openshift project here -> https://github.com/vmware/nsx-integration-for-openshift.

On every node, run the following:
cd /root/nsx-container-2.1.3.8356796/Kubernetes
docker load -i nsx-ncp-rhel-2.1.3.8356796.tar

Screen Shot 2018-05-16 at 5.57.20 PM

On all three nodes, do the following:

 docker images
 docker tag registry.local/2.1.3.8356796/nsx-ncp-rhel nsx-ncp
 docker images
Screen Shot 2018-05-16 at 5.59.17 PM
Then on the master,
cd /root
cd /root/nsx-integration-for-openshift/openshift-ansible-nsx/roles/ncp_prep/defaults/
nano main.yml
Change the uplink_port to ens224:
# update the variable values below before running the ncp_prep role
cni_url: /root/nsx-container-2.1.3.8356796/Kubernetes/rhel_x86_64/nsx-cni-2.1.3.8356796-1.x86_64.rpm
ovs_url: /root/nsx-container-2.1.3.8356796/OpenvSwitch/rhel74_x86_64/openvswitch-2.8.1.7345072-1.x86_64.rpm
ovs_kmod1_url: /root/nsx-container-2.1.3.8356796/OpenvSwitch/rhel74_x86_64/openvswitch-kmod-2.8.1.7345072-1.el7.x86_64.rpm
ovs_kmod2_url: /root/nsx-container-2.1.3.8356796/OpenvSwitch/rhel74_x86_64/kmod-openvswitch-2.8.1.7345072-1.el7.x86_64.rpm
uplink_port: ens224
ncp_image_url: /root/nsx-container-2.1.3.8356796/Kubernetes/nsx-ncp-rhel-2.1.3.8356796.tar
Screen Shot 2018-05-18 at 2.57.10 PM
cd /root
wget http://52.59.159.238/ncp-rc.yml
nano /root/ncp-rc.yml
Update the following values to match your NSX-T object names:
subnet_prefix = 24
tier0_router = T0
overlay_tz = TZ-Overlay
container_ip_blocks = IPBlock-PodNetworking
no_snat_ip_blocks = IPBlock-NONAT
external_ip_pools = Pool-NAT
top_firewall_section_marker = top_firewall_section
bottom_firewall_section_marker = bottom_firewall_section
Screen Shot 2018-05-19 at 10.04.12 PM
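For context, these values end up in the nsx.ini sections of the ConfigMap embedded in ncp-rc.yml. Below is a trimmed sketch of what the relevant options typically look like, assuming the NSX Manager address and object names used in this lab; the actual file contains many more options.

[coe]
cluster = ocp-cl1

[nsx_v3]
nsx_api_managers = 10.136.1.102
nsx_api_user = admin
nsx_api_password = VMware1!
tier0_router = T0
overlay_tz = TZ-Overlay
container_ip_blocks = IPBlock-PodNetworking
no_snat_ip_blocks = IPBlock-NONAT
external_ip_pools = Pool-NAT
subnet_prefix = 24
top_firewall_section_marker = top_firewall_section
bottom_firewall_section_marker = bottom_firewall_section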
oc apply -f ncp-rc.yml
Screen Shot 2018-05-19 at 10.24.03 PM
cd /root/nsx-integration-for-openshift/openshift-ansible-nsx/roles/ncp/defaults
Edit main.yml and change the apiserver_host_ip, nsx_manager_ip, nsx_api_user and nsx_api_password to match your configuration:
ncp_yaml_url: /root/ncp-rc.yml
agent_yaml_url: http://52.59.159.238/nsx-node-agent-ds.yml
cluster_name: ocp-cl1
apiserver_host_ip: 10.11.1.10
nsx_manager_ip: 10.136.1.102
nsx_api_user: admin
nsx_api_password: VMware1!
Screen Shot 2018-05-19 at 10.12.38 PM
ansible-playbook /root/nsx-integration-for-openshift/openshift-ansible-nsx/ncp.yaml
Screen Shot 2018-05-18 at 2.19.33 PM
oc get pod
oc delete pod nsx-ncp-r46z2
Screen Shot 2018-05-18 at 2.32.27 PM
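The pod name above is specific to this lab run; deleting the NCP pod simply lets its replication controller recreate it with the final configuration. A few optional checks afterwards (the pod name below is a placeholder):

oc get pods -o wide
oc get ds                    # the nsx-node-agent DaemonSet deployed by the playbook should show one pod per node
oc logs <nsx-ncp-pod-name>   # look for a successful connection to the NSX Manager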

Openshift with NSX-T Installation Part 4: Openshift Installation

In this article, we will use Ansible to install Openshift 3.9.

On the Master node:
nano /usr/share/ansible/openshift-ansible/roles/lib_utils/action_plugins/sanity_checks.py

Add the entry ('openshift_use_nsx', False).
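A quick way to confirm the edit landed before running any playbooks:

grep -n openshift_use_nsx /usr/share/ansible/openshift-ansible/roles/lib_utils/action_plugins/sanity_checks.py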

Screen Shot 2018-05-15 at 4.36.44 PM
cd /root
wget https://raw.githubusercontent.com/vincenthanjs/openshift-nsxt/master/hosts
cp hosts /etc/ansible/ -f

cd /etc/ansible/
htpasswd -c /root/htpasswd admin
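The -c flag creates a new htpasswd file containing the admin user. If you later want additional users, omit -c so the existing file is appended to instead of overwritten; the user name below is just an example.

htpasswd /root/htpasswd developer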

If the above URL does not work, you can copy and paste the following into /etc/ansible/hosts.

#This is the file you replace on /etc/ansible/hosts

# Create an OSEv3 group that contains the masters and nodes groups
[OSEv3:children]
masters
nodes

# Set variables common for all OSEv3 hosts
[OSEv3:vars]
# SSH user, this user should allow ssh based auth without requiring a password
ansible_ssh_user=root
ansible_ssh_pass=VMware1!
openshift_master_default_subdomain=ocpapps.acepod.com

os_sdn_network_plugin_name=cni
openshift_use_openshift_sdn=false
openshift_use_nsx=true
openshift_node_sdn_mtu=1500
openshift_enable_service_catalog=true

# If ansible_ssh_user is not root, ansible_become must be set to true
#ansible_become=true

openshift_deployment_type=openshift-enterprise
#openshift_deployment_type=origin
openshift_disable_check=docker_storage,docker_image_availability
openshift_disable_check=memory_availability,disk_availability,docker_storage,docker_image_availability,package_version

# uncomment the following to enable htpasswd authentication; defaults to DenyAllPasswordIdentityProvider
openshift_master_identity_providers=[{'name': 'htpasswd_auth', 'login': 'true', 'challenge': 'true', 'kind': 'HTPasswdPasswordIdentityProvider', 'filename': '/etc/origin/master/htpasswd'}]
openshift_master_htpasswd_file=/root/htpasswd

# host group for masters
[masters]
ocp-master	ansible_ssh_host=10.11.1.10

# host group for etcd
[etcd]
ocp-master	ansible_ssh_host=10.11.1.10

# host group for nodes, includes region info
[nodes]
ocp-master	ansible_ssh_host=10.11.1.10
ocp-node01	ansible_ssh_host=10.11.1.11
ocp-node02	ansible_ssh_host=10.11.1.12

[nsxtransportnodes:children]
masters
nodes
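Before running the prerequisites playbook, an optional sanity check is to confirm Ansible can reach every host in the inventory. This uses the OSEv3 group defined above and the default inventory at /etc/ansible/hosts:

ansible OSEv3 -m ping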

ansible-playbook /usr/share/ansible/openshift-ansible/playbooks/prerequisites.yml

Screen Shot 2018-05-15 at 3.50.28 PM

 

You will get some warning messages, but that should be fine.

Screen Shot 2018-05-16 at 10.41.03 AM

Before we run the next playbook, we need to install the NSX container plugins. The NSX container bundle was already downloaded to /root in Part 3.

You have to do the below on every node.
cd /root/nsx-container-2.1.3.8356796/OpenvSwitch/rhel74_x86_64
yum install *.rpm -y

Screen Shot 2018-05-16 at 4.00.17 PM

You have to do the below on every node.
cd /root/nsx-container-2.1.3.8356796/Kubernetes/rhel_x86_64
yum install *.rpm -y

Screen Shot 2018-05-16 at 4.24.57 PM
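A quick check that the Open vSwitch and CNI packages installed cleanly on each node:

rpm -qa | grep -E 'openvswitch|nsx-cni'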

Now we can run the deployment playbook. On the master node, run the following:

ansible-playbook  /usr/share/ansible/openshift-ansible/playbooks/deploy_cluster.yml

The Ansible playbook takes about 30 minutes to complete the install; your mileage may vary.

You will see some error messages on the Web Console. This is because the NSX-T integration has not been set up yet.

Screen Shot 2018-05-16 at 5.17.57 PM

Next, check that the Openshift installation was successful with the following commands.

As you can see, you can now use Openshift commands such as:
oc get pods --all-namespaces

Screen Shot 2018-05-16 at 5.21.24 PM

systemctl status openvswitch

Screen Shot 2018-05-16 at 5.37.33 PM


Openshift with NSX-T Installation Part 3: RHEL

In this article, we install RHEL and prepare it for the Openshift installation.

The main guide I reference for the installation is at https://access.redhat.com/documentation/en-us/openshift_container_platform/3.9/html-single/installation_and_configuration/, Chapter 2.6, Advanced Installation.

Screen Shot 2018-05-15 at 10.37.53 AM

Step 1: Create a VM and install RHEL from the ISO. I place this VM on the LS-MGMT01 logical switch, which has Internet access. Therefore, when you clone the VM later, the clones will be on the same network.
Screen Shot 2018-05-15 at 10.45.35 AM

Screen Shot 2018-05-15 at 10.56.58 AM

As the installation completes, set the root password.

Screen Shot 2018-05-15 at 10.57.58 AM

Step 2: Start the VM, so we can install all the dependencies.
Screen Shot 2018-05-15 at 11.24.39 AM
Configure the IP. I use nmtui.
Screen Shot 2018-05-15 at 12.02.12 PM

Ping the gateway and ping 8.8.8.8 to test whether you have access to Internet.

Screen Shot 2018-05-15 at 12.04.47 PM

 

Step 3: Follow the installation guide to install all the prerequisites. Run the following commands.
Screen Shot 2018-05-15 at 12.19.08 PM

subscription-manager register --username=<Redhat Openshift Username>

Screen Shot 2018-05-15 at 12.35.04 PM
subscription-manager refresh
subscription-manager list --available --matches '*OpenShift*'

Screen Shot 2018-05-15 at 12.42.28 PM

subscription-manager attach --pool=<highlighted>

Screen Shot 2018-05-15 at 12.44.32 PM

subscription-manager repos --disable="*"
yum repolist

Screen Shot 2018-05-15 at 12.50.24 PM
subscription-manager repos --enable="rhel-7-server-rpms" --enable="rhel-7-server-extras-rpms" --enable="rhel-7-server-ose-3.9-rpms" --enable="rhel-7-fast-datapath-rpms" --enable="rhel-7-server-ansible-2.4-rpms"

Screen Shot 2018-05-15 at 1.02.02 PM
yum install wget git net-tools bind-utils iptables-services bridge-utils bash-completion kexec-tools sos psacct nano httpd-tools  atomic-openshift-utils unzip

Screen Shot 2018-05-15 at 1.04.07 PM
yum update

Screen Shot 2018-05-15 at 1.05.51 PM
systemctl reboot

Step 4: Now we are going to upload the NSX Container zip file to this VM, which will serve as the master copy for cloning.

The version I used is 2.1.3. Filename: nsx-container-2.1.3.8356796.zip.
Place this file in /root.

Screen Shot 2018-05-16 at 3.56.55 PM

 

[root@ocp-master ~]# unzip nsx-container-2.1.3.8356796.zip
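The archive extracts into /root/nsx-container-2.1.3.8356796/ with, among others, the sub-directories we will use in the later parts:

ls /root/nsx-container-2.1.3.8356796/
# Kubernetes/   -> nsx-cni RPM and the nsx-ncp-rhel container image tarball (used in Parts 4 and 5)
# OpenvSwitch/  -> Open vSwitch RPMs for RHEL (used in Part 4)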

Screen Shot 2018-05-16 at 4.00.17 PM

Step 5: Now we are ready to clone this VM into three copies.
I name my VMs as follows:

  • OCP-Master (IP: 10.11.1.10/24)
  • OCP-Node01 (IP: 10.11.1.11/24)
  • OCP-Node02  (IP: 10.11.1.12/24)

Screen Shot 2018-05-15 at 1.12.49 PM

 

Console to each VM and change the IP using the nmtui command. You can either use nmtui to deactivate/activate the connection or use ifdown ens192 and ifup ens192.
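If you prefer a scriptable alternative to nmtui, nmcli can set the address and bounce the connection. This is just a sketch using the ocp-node01 address as an example; the connection name may differ in your template, so check it first with nmcli con show.

nmcli con mod ens192 ipv4.method manual ipv4.addresses 10.11.1.11/24 ipv4.gateway 10.11.1.1 ipv4.dns 8.8.8.8
nmcli con down ens192 && nmcli con up ens192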

 

 

Screen Shot 2018-05-15 at 1.31.51 PM

Screen Shot 2018-05-15 at 1.30.19 PM

Once you configure ocp-node02, you can try to ping the rest of the nodes.
Screen Shot 2018-05-15 at 1.32.39 PM

Next, we are going to add host entries on each of the nodes so that we can reach the nodes by hostname.
nano /etc/hosts on each node.
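Based on the IPs used in this lab, the entries added to each node look something like this (the per-node screenshots follow below):

10.11.1.10   ocp-master
10.11.1.11   ocp-node01
10.11.1.12   ocp-node02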

On OCP-Master

Screen Shot 2018-05-15 at 4.17.50 PM

On OCP-Node01
Screen Shot 2018-05-15 at 4.18.43 PM

On OCP-Node02
Screen Shot 2018-05-15 at 4.19.15 PM

Once done, you can test connectivity from the master to the rest of the nodes. Repeat the process from Node01 and Node02.
Screen Shot 2018-05-15 at 2.22.08 PM

Step 6: Resume the preparation of the hosts.

ssh-keygen

Screen Shot 2018-05-15 at 2.31.29 PM

# for host in ocp-master \
    ocp-node01 \
    ocp-node02; \
    do ssh-copy-id -i ~/.ssh/id_rsa.pub $host; \
    done

Screen Shot 2018-05-15 at 2.34.21 PM

Once this is done, you can SSH from the master to node01 and node02 without being prompted for a password.
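For example, these should return the remote hostnames without any password prompt:

ssh ocp-node01 hostname
ssh ocp-node02 hostname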

 


Openshift with NSX-T Installation Part 2: NSX-T

 

 

In this article, we are now going to start configuring NSX-T so that it will be ready for us to install Openshift and consume the networking and security services provided by NSX-T. The result is that Openshift can deliver on-demand provisioning of all the NSX-T components (Container Network Interface (CNI) plugin, NSX-T Container Plugin (NCP) pod, NSX node agent pod, etc.) automatically when a new Kubernetes (K8S) cluster is requested, all done with a single CLI or API call. In addition, through its integration with NSX-T, Openshift gains the ability to enforce network micro-segmentation at the K8S namespace level, which allows cloud/platform operators to manage access between applications and/or tenant users at a much finer-grained level than was possible before, which is really powerful!

I will not be walking through a step-by-step NSX-T installation; I will assume that you already have a basic NSX-T environment deployed, which includes a few ESXi hosts prepped as transport nodes and at least one NSX-T Controller and one NSX-T Edge. If you would like a detailed step-by-step walkthrough, you can refer to the NSX-T documentation here, or you can leverage William Lam's Automated NSX-T Lab Deployment script to set up the base environment and modify it based on the steps in this article.

For my lab, I actually set up NSX-T manually using my previous NSX-T 2.1 install documentation. You can follow that guide up to the NSX Edge deployment, because in NSX-T 2.1 you can use the NSX Manager to deploy the Edges.

Step 1 – Verify that you have prepped the ESXi hosts which will be used to deploy the Openshift workload VMs. In my example below, I only have one host prepared as an NSX-T transport node. This also means the Openshift VMs need to be pinned to this host; you can use affinity rules to do that.

 

Screen Shot 2018-05-15 at 6.38.22 AM

Step 2 – Create a new IP Pool which will be used to allocate Virtual IPs for the exposed Openshift Services (e.g Load Balancer for application deployments). To do so, navigate to Inventory->Groups->IP Pool and provide the following:

  • Name: Pool-NAT
  • IP Range: 10.11.2.1 – 10.11.2.254
  • CIDR: 10.11.2.0/24

Screen Shot 2018-05-15 at 7.05.34 AM
Step 3 – Create a new IP Block which will be used by Openshift on demand to carve up into smaller /24 networks and assign those to each K8S namespace. This IP block should be sized sufficiently to ensure you do not run out of addresses; currently it is recommended to use a /16 network (non-routable). I’m using a /21 network. To do so, navigate to DDI->IPAM and provide the following:

  • Name: IPBlock-NONAT
  • CIDR: 10.11.8.0/21

Screen Shot 2018-05-15 at 7.04.47 AM
Step 4 – Create a new T0 Router which will be used to communicate with your external physical network. Make sure you have created an Edge Cluster (it can contain a single Edge) if you have not already. The HA mode must be Active/Standby, as NAT is used by the NCP service within the K8S management pods. To do so, navigate to Routing->Routers and provide the following:

  • Name: T0
  • Edge Cluster: EdgeCluster01
  • High Availability Mode: Active-Standby
  • Preferred Member: TN-SUN03-EDGEVM01

Screen Shot 2018-05-17 at 10.34.48 AM
Step 5 – Create a static route on the T0 which will enable all traffic from the Openshift Management PODs to be able to communicate outbound to our Management components.  In my example, 10.197.1.0/24 is the intermediate network’s gateway which will be used to route traffic from within T0 to my virtual router (pfSense). To do so, click on the T0 Router you just created and navigate to Routing->Static Routes and provide the following:

  • Network: 0.0.0.0/0
  • Next Hop: 10.197.1.1

Screen Shot 2018-05-17 at 10.37.37 AM

 

For my lab, I have a few other internal networks that I would like to route into the NSX-T networks, so I added another static route.

  • Network: 192.168.0.0/16
  • Next Hop: 10.197.1.2

Screen Shot 2018-05-17 at 10.39.36 AM
Step 6 – Next, we need to create three Logical Switches: one for the T0 uplink, one for the Openshift Management Cluster (used to run the Openshift management pods), and one for the internal network used for NCP communication. To do so, navigate to Switching->Switches and add the following:

  • Name: LS-EdgeUplink01
  • Transport Zone: TZ-VLAN
  • VLAN: 0

Screen Shot 2018-05-17 at 10.45.55 AM

  • Name: LS-MGMT01
  • Transport Zone: TZ-Overlay
  • VLAN: NIL
  • Name: LS-VIFs
  • Transport Zone: TZ-Overlay
  • VLAN: NIL

Screen Shot 2018-05-17 at 10.42.38 AM
After this step, you should have three Logical Switches as shown in the screenshot below. The LS-EdgeUplink01 should be on TZ-VLAN, and the LS-MGMT01 and LS-VIFs should be on TZ-Overlay.

Screen Shot 2018-05-17 at 10.49.27 AM
Step 7 – Now we need to configure the Uplink Router Port and assign it an address from the intermediate network so that we can route from the T0 to our physical or virtual router. To do so, navigate to Routing and then click on the T0 Router we had created earlier and select Configuration->Router Ports and provide the following:

  • Name: RP-Uplink01
  • Type: Uplink
  • Transport Node: TN-SUN03-EDGEVM01
  • Logical Switch: LS-EdgeUplink01
  • Logical Switch Port: Uplink-1-Port
  • IP Address/mask: 10.197.1.5/24

Screen Shot 2018-05-17 at 10.51.59 AM

  • Name: RP-Uplink02
  • Type: Uplink
  • Transport Node: TN-SUN03-EDGEVM02
  • Logical Switch: LS-EdgeUplink02
  • Logical Switch Port: Uplink-2-Port
  • IP Address/mask: 10.197.1.6/24

Screen Shot 2018-05-17 at 10.55.05 AM

Step 8 – Because I have two Edges, I also need to create an HA VIP.

  • VIP Address: 10.197.1.4/24
  • Status: Enable
  • Uplink Ports: RP-Uplink1, RP-Uplink2

Screen Shot 2018-05-17 at 10.57.12 AM
Step 9 – Create a new T1 Router which will be used for the K8S Management Cluster POD. To do so, navigate to Routing->Routers and provide the following:

  • Name: T1-Mgmt
  • Tier-0 Router: T0
  • Failover Mode: Preemptive
  • Edge Cluster: <No need to select>

Screen Shot 2018-05-17 at 10.59.15 AM
Step 10 – Configure the Downlink Router Port for the Openshift Management Cluster which is where you will define the network that NSX-T will use for these VMs. In my example, I use 10.11.1.0/24. To do so, click on the T1 Router that you had just created and navigate to Configuration->Router Ports and provide the following:

  • Name: RP-MGMT
  • Logical Switch: LS-MGMT01
  • Logical Switch Port: Attach New Switch Port
  • IP Address/mask: 10.11.1.1/24

Screen Shot 2018-05-17 at 11.03.15 AM
Step 11 – To ensure the Openshift network will be accessible from the outside, we need to advertise these routes. To do so, click on the T1 Router you had created earlier and navigate to Routing->Route Advertisement and enable the following:

  • Status: Enabled
  • Advertise All NSX Connected Routes: Yes

Screen Shot 2018-05-17 at 11.05.09 AM
Step 12 – This step may be optional depending on how you have configured your physical networking, in which case you will need to use BGP instead of static routes to connect your physical/virtual network to NSX-T’s T0 Router. In my environment, I am using a virtual router (pfSense), and the easiest way to enable connectivity to Openshift from my management network, as well as the networks that my vCenter Server, ESXi hosts, NSX-T VMs and management VMs are hosted on, is to set up a few static routes. We need static routes to reach both our Openshift Management Cluster network (10.11.1.0/24) and the Openshift Projects network (10.11.8.0/21); I can summarise these into one 10.11.0.0/16 route. For all traffic destined to either of these networks, we want it forwarded to our T0’s HA uplink address, which if you recall from Step 8 is 10.197.1.4. Depending on your physical or virtual router solution, you will need to follow your product documentation to set up either BGP or static routes.

Screen Shot 2018-05-17 at 11.09.05 AM
Step 13 – At this point, we have completed all the NSX-T configurations and we can run through a few validation checks to ensure that when we go and deploy the Openshift management VMs (one Master and two Nodes), we will not run into networking issues. This is a very critical step, and if you are not successful here, you should go back and troubleshoot prior to moving on.

To verify overlay network connectivity between the ESXi hosts and the Edge VMs, you should be able to ping over the VXLAN netstack between all ESXi hosts as well as to the Edge VMs’ overlay interfaces. Below is a table of the IPs that were automatically allocated from the VTEP IP Pool; you can discover these by logging onto the ESXi host, but they should be sequential from the starting range of your defined IP Pool. Also make sure you have your physical and virtual switches configured to use an MTU of 1600 for overlay traffic.

Host IP Address
esxi  10.140.1.11
edge01  10.140.1.12
edge02  10.140.1.13

Screen Shot 2018-05-17 at 11.27.54 AM

You can SSH to each ESXi host and run the following:

vmkping ++netstack=vxlan [IP]

or you can also do this remotely via ESXCLI by running the following:

esxcli network diag ping --netstack=vxlan --host=[IP]

Or you can SSH to the Edge, enter vrf 0, and ping the overlay networks.

Screen Shot 2018-05-17 at 11.32.00 AM

To verify connectivity to NSX-T Networks as well as routing between physical/virtual networking to NSX-T, use your Jumphost and you should be able to ping the following addresses from that system:

  • 10.11.1.1 (Downlink Router Port to our Openshift Management Cluster network)
  • 10.197.1.4 (Uplink Router Port)
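From the jumphost (or any machine that has the static route in place), the quick check looks like this:

ping 10.11.1.1      # T1 downlink router port for the Openshift management network
ping 10.197.1.4     # T0 HA VIP / uplink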

In the next blog post, we will start our Openshift deployment starting with RHEL preparation.

 

 

 


Openshift with NSX-T Installation Part 1: Overview

For the past few days I have been involved in an Openshift + NSX-T POC. In fact, two POCs, because we set one up in the customer’s lab and at the same time I repeated the same steps in my own lab so that I could learn as well. Before I proceed further, I would like to give a shout out to Yasen, the NSBU TPM who guided me through the installation of Openshift and the integration with NSX-T. Without his guidance and his help with the POC, this article would not be possible. As promised, I am writing up the installation because I believe many in the community and my peers will need help setting this up.

I’m going to break up the articles into various parts. *Inspiration from William Lam, who did the PKS with NSX-T series.

Openshift with NSX-T Installation Part 1: Overview
Openshift with NSX-T Installation Part 2: NSX-T
Openshift with NSX-T Installation Part 3: RHEL Preparation
Openshift with NSX-T Installation Part 4: Openshift Installation
Openshift with NSX-T Installation Part 5: NCP and CNI Integration
Openshift with NSX-T Installation Part 6: Demo App

Components:

  • Compute – vSphere 6.5+ (vCenter Server + ESXi) and Enterprise Plus license
  • Storage – VSAN or other vSphere Datastores
  • Networking & Security – NSX-T 2.1
  • Openshift 3.9 Enterprise
  • RHEL 7.5

Software Download:

Here is the complete list of software that needs to be downloaded to deploy Openshift and NSX-T.

Software Download URL
NSX-T  https://my.vmware.com/web/vmware/details?productId=673&downloadGroup=NSX-T-210
nsx-unified-appliance-2.1.0.0.0.7395503.ova
nsx-edge-2.1.0.0.0.7395502.ova
nsx-controller-2.1.0.0.0.7395493.ova
nsx-container-2.1.3.8356796.zip
RHEL https://access.redhat.com/downloads/
The version I used: rhel-server-7.5-x86_64-dvd.iso

Screen Shot 2018-05-16 at 3.01.02 PM

Lab Environment:

For my lab, I will be using two physical hosts: one host for the NSX-T management components and the other host to be the NSX-T transport node, which will house the Openshift VMs (one Master and two Node VMs). You can see there are three physical nodes in my cluster because I am using vSAN here as my shared storage. My lab is a non-nested setup, but at the customer POC we set up a nested environment using William Lam’s NSX-T auto-deploy script, and that works as well.

Screen Shot 2018-05-15 at 6.34.20 AM

The good part of NSX-T is that you can select which hosts you would like to prepare as transport nodes instead of the whole cluster. You could also add all the hosts in the whole cluster to be prepared as transport nodes with the use of the Compute Manager. In this case, I only selected the 192.168.1.202 host as the transport node. The other two nodes are not prepared as transport nodes.

Screen Shot 2018-05-15 at 6.37.41 AM

Compute/Storage

VM CPU MEM DISK
NSX-T Manager 4 16GB 140GB
NSX-T Controller x 3 4 16GB 120GB
NSX-Edge x 2 (Medium Size) 4 8GB 120GB
Openshift Master 4 16GB 40GB
Openshift Node01 4 16GB 40GB
Openshift Node02 4 16GB 40GB

Networking

Defined within your physical or virtual network infrastructure

  • Management Network (10.136.1.0/24) – This is where the management VMs will reside such as the NSX-T Manager and Controllers.
  • Transit Network (10.197.1.0/24) – This network is required to route between our management network, via the NSX-T T0 Router, and our Openshift Cluster Management network as well as the Openshift workload networks. You only need two IPs: one for the gateway, which should exist by default, and one for the uplink, which will reside on the T0. Static routes will be used to reach the NSX-T networks. In a production or non-lab environment, BGP would be used to peer the T0 to your physical network.
  • Internet Network – An IP address that is able to access the Internet. For my lab, I set up a pfSense router as the next hop for the T0 router, and the pfSense router will basically NAT the networks in NSX-T, such as the Openshift Cluster Management network. I basically route the whole 10.11.0.0/16 subnet to the NSX-T T0, which also means 10.11.*.* will have access to the Internet. This allows Internet access for the Openshift VMs to run “yum install” and download relevant packages. ***Internet access is very important, especially for the Openshift VMs; without it you will need to spend a lot of time on troubleshooting.

Defined within NSX-T

  • Openshift Cluster Management Network (10.11.1.0/24) – This network is used for the Openshift management POD which includes services like the Node Agent for monitoring liveness of the cluster and NSX-T Container Plugin (NCP) to name a few
  • NAT IP Pool (10.11.2.0/24) – This network pool will provide addresses for when load-balancing services are required as part of an application deployment within Openshift. One IP address per project.
  • NONAT IP Block (10.11.8.0/21) – This network is used when an application is requested to deploy onto a new Openshift namespace. A /24 network is taken from this IP Block and allocated to a specific Openshift project. This network will allow for 8 projects.
  • Pod Networking IP Block (10.12.0.0/16) – This network pool is not required to be routable on the external network.

Here is a logical diagram of my planned Openshift + NSX-T deployment:

Screen Shot 2018-05-17 at 10.01.19 AM