
NSX-T 2.1 Installation using ovftool

You might be wondering why I would want to use ovftool to install the NSX-T appliances. The reason is that my management host is not managed by a vCenter Server, and the deployment failed when I tried the vSphere Client.

[Screenshot: vCenter inventory showing hosts only in the EdgeComp cluster]

You can see from the screenshot above that I only have hosts in the EdgeComp cluster. As I do not have additional hosts for a management cluster, I will be using an existing standalone management host.

While reading the NSX-T Installation Guide, I realised it mentions an alternative method, i.e. using the OVF Tool to install the NSX Manager. I reckon this would be useful for automated installs, and the other reason is that the NSX-T architecture moves away from the dependency on vCenter; NSX-T can be deployed in a 100% non-vSphere environment, for example on KVM.

Preparing for Installation

These are the files I will be using for the NSX-T installation. Please note I am using a pre-GA build here (7374156); at the time of writing I am not sure which build will become the GA build, but I believe the experience will be roughly the same at GA. I will update this post once NSX-T 2.1 is GA.
1) NSX Manager – nsx-unified-appliance-2.1.0.0.0.7374161.ova
2) NSX Controllers – nsx-controller-2.1.0.0.0.7374156.ova
3) NSX Edges – nsx-edge-2.1.0.0.0.7374178.ova

Installing NSX-T Manager using ovftool

Following the guide, I had to modify the ovftool command. This is the command I used, which I put into a batch file. Maybe later I will incorporate it into the PowerShell script I used to deploy the vSphere part.

[Screenshot: ovftool command for the NSX Manager OVA]

You can find the script here.

The ESXi host I am using is 6.0 U2 and it does not take in the OVF properties, so I had no choice but to deploy through the vCenter instead, onto the EdgeComp hosts.
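For reference, this is roughly what the ovftool command in the batch file looks like. The --prop names follow the NSX-T Installation Guide, and the IP is the NSX Manager address used later in this post; the gateway, DNS, NTP, datastore, port group, passwords and the vi:// target path are placeholders for my lab, so adjust them to your environment and verify the property names against your OVA:

    ovftool --name=nsx-manager --X:injectOvfEnv --allowExtraConfig ^
      --acceptAllEulas --noSSLVerify --diskMode=thin --powerOn ^
      --datastore=<datastore> --network=<management portgroup> ^
      --prop:nsx_role=nsx-manager --prop:nsx_hostname=nsx-manager ^
      --prop:nsx_ip_0=10.136.1.102 --prop:nsx_netmask_0=255.255.255.0 ^
      --prop:nsx_gateway_0=<gateway> --prop:nsx_dns1_0=<dns server> ^
      --prop:nsx_domain_0=<domain> --prop:nsx_ntp_0=<ntp server> ^
      --prop:nsx_isSSHEnabled=True --prop:nsx_allowSSHRootLogin=True ^
      --prop:nsx_passwd_0=<admin password> --prop:nsx_cli_passwd_0=<cli password> ^
      nsx-unified-appliance-2.1.0.0.0.7374161.ova ^
      "vi://<vcenter user>:<password>@<vcenter>/<datacenter>/host/<EdgeComp cluster>"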

[Screenshot: NSX Manager OVA deployment task in vCenter]

Finally, I am able to log in to the NSX Manager console.

[Screenshot: NSX Manager console login]

Trying to log in to the web console of the NSX Manager.

[Screenshot: NSX Manager web login page]

Awesome! I am able to log in and the dashboard is up!

[Screenshot: NSX Manager dashboard]

Alright, so next up are the NSX-T Controllers.

[Screenshots: NSX Controller OVA deployment]

Configuring the Controller Cluster

Retrieve the NSX Manager API thumbprint

  1. Log onto the NSX Manager via SSH using the admin credentials.
  2. Use “get certificate api thumbprint” to retrieve the SSL certificate thumbprint and copy the output to use in later commands, as shown below.
    [Screenshot: retrieving the API thumbprint on the NSX Manager]
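For example (the thumbprint below is the one used for the join commands later in this post; the prompt will reflect your NSX Manager hostname):

    get certificate api thumbprint
    f24e53ef5c440d40354c2e722ed456def0d0ceed2459fad85803ad732ab8e82b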

Join the NSX Controllers to the NSX Manager

  1. Log onto each of the NSX Controllers via SSH using the admin credentials.
  2. Use “join management-plane <NSX Manager> username admin thumbprint <API Thumbprint>”
  3. Enter the admin password when prompted
  4. Validate the controller has joined the Manager with “get managers” – you should see a status of “Connected”

    join management-plane 10.136.1.102 username admin thumbprint f24e53ef5c440d40354c2e722ed456def0d0ceed2459fad85803ad732ab8e82b
    [Screenshot: joining a controller to the management plane]

  5. Repeat this procedure for all three controllers

[Screenshots: repeating the join on the remaining controllers]

Initialise the Controller Cluster

To configure the Controller cluster we need to log on to one of the Controllers and initialise the cluster. This can be any one of the Controllers, but doing so will make that Controller the master node in the cluster. Initialising the cluster requires a shared secret to be used on each node.

  1. Log onto the Controller node via SSH using the admin credentials.
  2. Use “set control-cluster security-model shared-secret” to configure the shared secret
  3. When the secret is configured, use “initialize control-cluster” to promote this node:
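As a sketch, the session on the first Controller looks roughly like this (the shared secret value is a placeholder and the prompt will reflect your Controller hostname; check the CLI help if the syntax differs in your build):

    set control-cluster security-model shared-secret secret <shared secret>
    initialize control-cluster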

[Screenshot: initialising the control cluster]

Validate the status of the node using the “get control-cluster status verbose” command. You can also check the status in the NSX Manager web interface. The command shows that the Controller is the master, is in the majority, and can connect to the Zookeeper Server (a distributed configuration service).

[Screenshot: get control-cluster status verbose output]

Notice in the web interface that the node has a Cluster Status of “Up”.

[Screenshot: Cluster Status “Up” in the NSX Manager web interface]

Preparing ESXi Hosts

ESXi hosts can be prepared for NSX either by using the “Compute Manager” construct to add a vCenter Server and then preparing the hosts automatically, or by adding the hosts manually. You can refer to Sam’s blog posts, as he prepares the hosts manually for his learning exercise. Since my purpose is to quickly get the deployment up for PKS/PCF, I am going to use the automatic method with the “Compute Manager”.
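As an aside, the same Compute Manager registration can also be done against the NSX Manager REST API rather than the UI. A rough sketch (the vCenter name, credentials and thumbprint below are placeholders):

    POST https://<nsx-manager>/api/v1/fabric/compute-managers
    {
      "server": "<vcenter fqdn>",
      "origin_type": "vCenter",
      "credential": {
        "credential_type": "UsernamePasswordLoginCredential",
        "username": "administrator@vsphere.local",
        "password": "<password>",
        "thumbprint": "<vcenter ssl thumbprint>"
      }
    }

In the UI, the steps are: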

1. Log in to NSX-T Manager.
2. Select Compute Managers.
3. Click on Add.

[Screenshot: Compute Managers page with the Add button]

4. Put in the details for the vCenter.

[Screenshot: vCenter details for the Compute Manager]

Success!
[Screenshot: Compute Manager registered successfully]

5. Go into Nodes under Fabric.

6. Change “Managed by” from Standalone to the name of the Compute Manager you just specified.
[Screenshot: changing “Managed by” on the Nodes page]

7. Select the cluster and click on Configure Cluster. Enable “Automatically Install NSX” and leave “Automatically Create Transport Node” as Disabled, since I have not created the Transport Zone yet.
[Screenshot: Configure Cluster settings]

You will see “NSX Install In Progress”.
[Screenshot: NSX installation in progress]

Error! Host certificate not updated.
[Screenshot: host certificate not updated error]

References
1. Sam NSX-T Installation Blog Posts
2. VMware NSX-T Installation Docs


VMworld 2017: Please vote for my sessions!

One of my goals for 2017 was to submit at least one paper for VMworld 2017. I am glad that I could come up with two sessions, and Arup came up with a third; all the sessions will be presented by both of us.

The VMware content team has just announced that public voting is now open, and I would request you to spend a couple of minutes of your time to vote for my sessions and any others you would like to see at the upcoming VMworld 2017.

Session Titles:

Below are the details of the abstracts submitted for each session and what can be expected. Click on any of the session titles to cast your vote.

Procedure to vote:

  1. Click here to go to the VMworld session catalog.
  2. Click on the stars next to the session title. It will redirect you to the login page. If you have existing login credentials, please log in and cast your vote. If you do not have an existing account, please register on the VMworld website and then cast your vote for our sessions.


I sincerely hope that you will spend a couple of minutes to vote for these sessions if you think they would help you. Thank you very much.

Hope to make it to my first VMworld in 2017!

10GE Testing – Multi-NIC vMotion

I had access to some 10GE NICs and therefore decided to do some performance testing and try out Multi-NIC vMotion.

Intel X520-DA2

The 10GE NICs were Intel X520-DA2 cards, and I used SFP-H10GB-CU3M cables to connect the two hosts directly, as I do not have access to a 10GE switch at the moment.

Without reading any documentation, I tried adding both 10GE ports to a single vSwitch, and Multi-NIC vMotion did not work. After googling and reading the documentation, it turns out you actually have to separate the 10GE ports into their own vSwitches, each with its own vMotion VMkernel port, before Multi-NIC vMotion will work.

http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2007467
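For reference, this is a rough sketch of that host-side configuration using esxcli (the vmnic, vmknic, port group and IP values below are made up for illustration; the same can of course be done in the vSphere Client):

    # First 10GE port on its own vSwitch with its own vMotion VMkernel port
    esxcli network vswitch standard add --vswitch-name=vSwitch1
    esxcli network vswitch standard uplink add --vswitch-name=vSwitch1 --uplink-name=vmnic4
    esxcli network vswitch standard portgroup add --vswitch-name=vSwitch1 --portgroup-name=vMotion-1
    esxcli network ip interface add --interface-name=vmk1 --portgroup-name=vMotion-1
    esxcli network ip interface ipv4 set --interface-name=vmk1 --ipv4=172.16.10.11 --netmask=255.255.255.0 --type=static
    esxcli network ip interface tag add --interface-name=vmk1 --tagname=VMotion

    # Second 10GE port, again on its own vSwitch
    esxcli network vswitch standard add --vswitch-name=vSwitch2
    esxcli network vswitch standard uplink add --vswitch-name=vSwitch2 --uplink-name=vmnic5
    esxcli network vswitch standard portgroup add --vswitch-name=vSwitch2 --portgroup-name=vMotion-2
    esxcli network ip interface add --interface-name=vmk2 --portgroup-name=vMotion-2
    esxcli network ip interface ipv4 set --interface-name=vmk2 --ipv4=172.16.10.12 --netmask=255.255.255.0 --type=static
    esxcli network ip interface tag add --interface-name=vmk2 --tagname=VMotion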

Single-NIC 10GE vMotion Configuration
[Screenshots: single-NIC vMotion configuration on host 1 and host 2]

Single-NIC 10GE vMotion Tests
Test 1: 14 seconds
Test 2: 14 seconds
Test 3: 10 seconds
Test 4: 11 seconds
[Screenshots: single-NIC vMotion tests 1 to 4]

Single-NIC 10GE vMotion Network Performance Graphs
[Screenshots: single-NIC vMotion network performance graphs for both hosts]

Multi-NIC 2X10GE vMotion Configuration
[Screenshots: multi-NIC vMotion configuration on host 1 and host 2]

Multi-NIC 2X10GE vMotion Tests
Test 1: 10 seconds
Test 2: 12 seconds
Test 3: 09 seconds
Test 4: 12 seconds
[Screenshots: multi-NIC vMotion tests 1 to 4]

Multi-NIC 2X10GE vMotion Network Performance Graphs
[Screenshots: multi-NIC vMotion network performance graphs for both hosts]


Conclusion:
Although vMotion with Multi-NIC seems to perform faster compared to Single-NIC, the difference is not significant. This is probably due to the size and the number of VMs I used for the vMotion tests.

There was load sharing of vMotion traffic between the two 10GE ports, as you can see from the network performance graphs. With a single NIC, network usage was about 200 MBps, while in the multi-NIC scenario usage was about 100 MBps per NIC, i.e. roughly 50% of the single-NIC figure on each port.
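As a rough sanity check (approximate, since I did not record the exact VM sizes here): about 200 MBps sustained over the 10 to 14 second tests works out to roughly 2 to 2.8 GB moved per test, and the aggregate throughput in the multi-NIC case is about the same (2 x 100 MBps). Both are well below the ~1250 MBps line rate of a single 10GE port, which is consistent with the workload, rather than the NICs, being the limiting factor in these tests.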