The last few blog posts I wrote about the installation steps for Openshift Container Platform (OCP) with NSX-T NCP attracted good interest from the community as well as from VMware internal folks. However, those materials were written quite a while back, and some of the software used then is no longer up to date. My customers were also looking at later versions of the software. Lastly, in OCP 3.11 the Ansible playbooks for NSX-T NCP integration come out of the box, which makes the integration much simpler. All of that gave me a good reason to write this blog post.
The high-level steps remain unchanged. However, Part 5 has in this case been streamlined into the Openshift installation.
Openshift with NSX-T Installation Part 1: Overview
Openshift with NSX-T Installation Part 2: NSX-T
Openshift with NSX-T Installation Part 3: RHEL Preparation
Openshift with NSX-T Installation Part 4: Openshift Installation
Openshift with NSX-T Installation Part 5: NCP and CNI Integration (Combined into Part 4)
Openshift with NSX-T Installation Part 6: Demo App
** For fellow VMware colleagues: to save you the time of preparing the RHEL templates and VMs for the OCP install, I have exported the VMs from my lab and uploaded them to OneDrive. Email me and I will happily share the download link. The size is about 7GB.
- Compute – vSphere 6.7+ (vCenter Server + ESXi) with an Enterprise Plus license
- Storage – vSAN or other vSphere datastores
- Networking & Security – NSX-T 2.3
- Openshift Container Platform 3.11 Enterprise
- RHEL 7.6
Here is the complete list of software that needs to be downloaded to deploy Openshift Container Platform and NSX-T.
- NSX-T: nsx-unified-appliance-2.3.0.0.0.10085405.ova (from 2.2 onwards, you can deploy the NSX-T Controllers and Edges from the NSX-T Manager)
- RHEL: rhel-server-7.6-x86_64-dvd.iso (the version I used)
Ansible Hosts File
**Update on 21 June 2019: I notice you will need the hosts file as reference.
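As a reference, a minimal OSEv3 inventory for the NSX-T integration might look like the sketch below. All hostnames, credentials, and NSX-T object names are placeholders, and the nsx_* variable names should be verified against the openshift-ansible documentation for your release:

```ini
[OSEv3:children]
masters
nodes
etcd

[OSEv3:vars]
ansible_ssh_user=root
openshift_deployment_type=openshift-enterprise
openshift_release=v3.11

# NSX-T NCP integration: disable the default SDN and switch the plugin to CNI
openshift_use_nsx=true
openshift_use_openshift_sdn=false
os_sdn_network_plugin_name=cni

# NSX-T objects prepared in Part 2 (names below are examples, not defaults)
nsx_openshift_cluster_name=ocp-cluster1
nsx_api_managers=192.168.110.201
nsx_api_user=admin
nsx_api_password=VMware1!
nsx_tier0_router=t0-ocp
nsx_overlay_transport_zone=tz-overlay
nsx_container_ip_block=ocp-pod-block
nsx_external_ip_pool=ocp-snat-pool

[masters]
ocp-master.corp.local

[etcd]
ocp-master.corp.local

[nodes]
ocp-master.corp.local openshift_node_group_name='node-config-master'
ocp-node1.corp.local openshift_node_group_name='node-config-compute'
ocp-node2.corp.local openshift_node_group_name='node-config-compute'
```

The key lines are openshift_use_nsx=true, disabling the built-in SDN, and switching the network plugin to cni; the nsx_* variables tell NCP which Tier-0 router, overlay transport zone, container IP block, and SNAT pool to use.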
Openshift Installation & NSX-T NCP Integration
- On every node, install docker.
yum install docker-1.13.1
- On the master node or jumphost, run the prerequisites playbook (the path below assumes the openshift-ansible RPM install; point -i at your own inventory file).
ansible-playbook -i /etc/ansible/hosts /usr/share/ansible/openshift-ansible/playbooks/prerequisites.yml
- On every node, load and tag the NCP container image.
docker load -i /root/nsx-container-2.3.2.10695762/Kubernetes/nsx-ncp-rhel-2.3.2.10695762.tar
docker image tag registry.local/2.3.2.10695762/nsx-ncp-rhel nsx-ncp
- On the master node or jumphost, run the deploy-cluster playbook.
ansible-playbook -i /etc/ansible/hosts /usr/share/ansible/openshift-ansible/playbooks/deploy_cluster.yml
You can see the NCP and Node Agents being deployed as Pods under the nsx-system namespace.
oc get pod --all-namespaces
If the NCP integration is successful, you should not see any errors in the NCP pod logs.
oc logs nsx-ncp-279qf -n nsx-system | grep error
You can do the same for the node agent. However, since there are two containers in the node-agent pod, you will need to specify the container with -c.
oc logs nsx-node-agent-56f2s -c node-agent -n nsx-system | grep error
On the NSX-T side, if the integration is successful, you will see a set of default logical switches and logical routers, as well as a load balancer, created for the cluster.
You can also access the Openshift Container Platform web console at https://ocp-master:8443 (you might need to add a DNS host entry).
Demo App Test
Alright. Now let's test whether the Container Network Interface (CNI) and NCP are working correctly by deploying a demo application. I normally use the Yelb app for my demos and testing.
- On the master node,
oc new-project yelb
git clone https://github.com/vincenthanjs/yelb-demo.git
- You will need to add the security context constraint (SCC) policy before you can deploy the Pods; otherwise the containers will fail to deploy.
oc adm policy add-scc-to-user anyuid -z default
oc adm policy add-scc-to-user anyuid -z router
oc adm policy add-scc-to-user anyuid -z builder
oc adm policy add-scc-to-user anyuid -z deployer
- Now, you can deploy the yelb app.
oc create -f yelb-app.yaml
- Watch the containers creating.
watch oc get pod
- The NSX-T load balancer supports the Ingress service type. Check the created resources:
oc get all
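If the demo manifests do not already define one, the UI service can be exposed through the Openshift router with a Route; a minimal sketch, where the service name and hostname are assumptions rather than values taken from the yelb-demo repo:

```yaml
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: yelb-ui
  namespace: yelb
spec:
  host: yelb.apps.corp.local   # must resolve to the router / load balancer VIP
  to:
    kind: Service
    name: yelb-ui              # assumed UI service name from the demo manifests
    port:
      targetPort: 80
```

Apply it with oc create -f and the hostname becomes reachable once DNS points at the virtual IP.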
- I already had a wildcard domain pointed to the Openshift load balancer virtual IP.
You can watch the full Openshift Container Platform installation and integration with NSX-T NCP over here.