
NSX-T 2.1 Installation using ovftool

You might be wondering why I would want to use ovftool to install the NSX-T appliances. The reason is that my management host is not managed by a vCenter, and the deployment failed when I used the vSphere Client.

Screen Shot 2017-12-17 at 4.42.29 PM

You can see from the screenshot above that I only have hosts for the EdgeComp cluster. As I do not have additional hosts for a management cluster, I will be using an existing standalone management host.

While reading the NSX-T Installation Guide, I realised it mentions an alternative method, i.e. using the OVF Tool to install the NSX Manager. I reckon this would be useful for automated installs. The other reason is that the NSX-T architecture moves away from the dependency on vCenter; NSX-T can be deployed in a 100% non-vSphere environment, for example on KVM.

Preparing for Installation

These are the files I will be using for the NSX-T installation. Please note that I am using a pre-GA build here (7374156), as at the time of writing I am not sure which build will be the GA build. I believe the GA experience will be roughly the same. I will update this post once NSX-T 2.1 is GA.
1) NSX Manager – nsx-unified-appliance-2.1.0.0.0.7374161.ova
2) NSX Controllers – nsx-controller-2.1.0.0.0.7374156.ova
3) NSX Edges – nsx-edge-2.1.0.0.0.7374178.ova

Installing NSX-T Manager using ovftool

I followed the guide but had to modify the ovftool command. This is the command I used, which I put into a batch file. Maybe later I will incorporate it into the PowerShell script I used to deploy the vSphere part.

Screen Shot 2017-12-17 at 7.53.54 PM

You can find the script here.
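For reference, the command looks roughly like the sketch below. The IPs, passwords, datastore, network and vCenter target are placeholders from my lab (only the 10.136.1.102 management IP is the one used later in this post), and the --prop names are worth double-checking against your build by running ovftool against the OVA with no target, which lists its OVF properties:

ovftool --name=nsx-manager --X:injectOvfEnv --allowExtraConfig --acceptAllEulas --noSSLVerify ^
  --datastore=<datastore> --network=<mgmt-portgroup> --diskMode=thin --powerOn ^
  --prop:nsx_hostname=nsx-manager --prop:nsx_ip_0=10.136.1.102 --prop:nsx_netmask_0=255.255.255.0 ^
  --prop:nsx_gateway_0=<gateway> --prop:nsx_dns1_0=<dns> --prop:nsx_domain_0=<domain> --prop:nsx_ntp_0=<ntp> ^
  --prop:nsx_isSSHEnabled=True --prop:nsx_allowSSHRootLogin=True ^
  --prop:nsx_passwd_0=<password> --prop:nsx_cli_passwd_0=<password> ^
  nsx-unified-appliance-2.1.0.0.0.7374161.ova ^
  vi://administrator@vsphere.local:<password>@<vcenter>/<datacenter>/host/<cluster>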

The ESXi host I am using is 6.0U2 and it does not take in the OVF properties, so I had no choice but to deploy through vCenter to the EdgeComp hosts instead.

Screen Shot 2017-12-17 at 9.05.44 PM

 

Finally, I am able to log in to the NSX Manager console.

Screen Shot 2017-12-17 at 9.06.50 PM

 

Trying to log in to the web console of the NSX Manager.

Screen Shot 2017-12-17 at 9.10.36 PM

Awesome! I am able to log in and the dashboard is up!

Screen Shot 2017-12-17 at 9.11.33 PM

Alright, so next up are the NSX-T Controllers.

Screen Shot 2017-12-17 at 9.23.16 PM

Screen Shot 2017-12-17 at 9.29.42 PM

Configuring the Controller Cluster

Retrieve the NSX Manager API thumbprint

  1. Log onto the NSX Manager via SSH using the admin credentials.
  2. Use “get certificate api thumbprint” to retrieve the SSL certificate thumbprint. Copy the output to use in commands later.
    Screen Shot 2017-12-17 at 9.31.42 PM

Join the NSX Controllers to the NSX Manager

  1. Log onto each of the NSX Controllers via SSH using the admin credentials.
  2. Use “join management-plane <NSX Manager> username admin thumbprint <API Thumbprint>”
  3. Enter the admin password when prompted
  4. Validate the controller has joined the Manager with “get managers” – you should see a status of “Connected”

    join management-plane 10.136.1.102 username admin thumbprint f24e53ef5c440d40354c2e722ed456def0d0ceed2459fad85803ad732ab8e82b

    Screen Shot 2017-12-17 at 9.51.04 PM

  5. Repeat this procedure for all three controllers

Screen Shot 2017-12-17 at 10.21.13 PM

 

Screen Shot 2017-12-17 at 10.22.13 PM

Initialise the Controller Cluster

To configure the Controller cluster, we need to log on to one of the Controllers and initialise the cluster. It can be any of the Controllers, but doing so makes that Controller the master node in the cluster. Initialising the cluster requires a shared secret to be set on each node.

  1. Log onto the Controller node via SSH using the admin credentials.
  2. Use “set control-cluster security-model shared-secret” to configure the shared secret
  3. When the secret is configured, use “initialize control-cluster” to promote this node:

Screen Shot 2017-12-17 at 10.25.18 PM
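Putting the two commands together, the session on the first Controller looks roughly like this (the shared secret is a placeholder, and the exact argument syntax is worth confirming with the CLI help on your build):

set control-cluster security-model shared-secret secret <shared-secret>
initialize control-cluster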

 

Validate the status of the node using the “get control-cluster status verbose” command. You can also check the status in the NSX Manager web interface. The command shows that the Controller is the master, is in the majority, and can connect to the Zookeeper server (a distributed configuration service).

Screen Shot 2017-12-17 at 10.27.10 PM

Notice in the web interface that the node has a Cluster Status of “Up”.

Screen Shot 2017-12-17 at 10.28.39 PM

Preparing ESXi Hosts

With ESXi hosts, you can prepare them for NSX by using the “Compute Manager” construct to add a vCenter Server and then prepare the hosts automatically, or you can add the hosts manually. You can refer to Sam’s blog posts, where he prepares the hosts manually as a learning exercise. Since my purpose is to get the deployment up quickly for PKS/PCF, I am going to use the automatic method with the “Compute Manager”.

1. Login to NSX-T Manager.
2. Select Compute Managers.
3. Click on Add.

Screen Shot 2017-12-18 at 2.03.50 AM

4. Put in the details for the vCenter.

Screen Shot 2017-12-18 at 2.05.55 AM

Success!
Screen Shot 2017-12-18 at 2.07.11 AM

5. Go into Nodes under Fabric.

6. Change “Managed by” from Standalone to the name of the Compute Manager you just added.
Screen Shot 2017-12-18 at 2.09.44 AM

7. Select the cluster and click on Configure Cluster. Enable “Automatically Install NSX” and leave “Automatically Create Transport Node” as Disabled, as I have not created the Transport Zone yet.
Screen Shot 2017-12-18 at 2.12.07 AM

You will see NSX Install In Progress
Screen Shot 2017-12-18 at 2.13.43 AM

Error! Host certificate not updated.
Screen Shot 2017-12-18 at 2.16.34 AM
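While troubleshooting the certificate error, a quick way to check whether the NSX VIBs actually landed on a host is over SSH (the exact VIB names vary between builds):

esxcli software vib list | grep -i nsx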

References
1. Sam NSX-T Installation Blog Posts
2. VMware NSX-T Installation Docs


NSX-T 2.1 for PKS and PCF 2.0 Lab

Screen Shot 2017-12-17 at 2.43.30 PM

From the VMware PKS architecture slide at VMworld 2017 US, you can see that NSX provides the network and security services for BOSH. To be more precise, this is going to be NSX-T. In the next few posts, I will cover setting up the vSphere lab and preparing the hosts for NSX-T, ready for PKS.

 

Introducing NSX-T 2.1 with Pivotal Integration

Screen Shot 2017-12-17 at 4.00.10 PM

https://blogs.vmware.com/networkvirtualization/2017/12/nsx-t-2-1.html/

As you might have guessed by now, the version of NSX-T I will be using in the lab is 2.1, which supports PCF 2.0 and PKS; in particular, I want to understand the CNI plugin.

Stay tuned. In the next few posts, I will cover the installation of vSphere, NSX-T, PCF and PKS.

 


My First KUBO Deployment – PKS

Pivotal Container Service (PKS) was announced at VMworld 2017 US. It is not GA yet, but through the VMworld CNA sessions I learnt that it is going to use BOSH to spin up Kubernetes clusters, hence the name KUBO – Kubernetes on BOSH. Through my googling, I saw that my fellow colleague Simon from Ireland had the same idea and did a fantastic job detailing the installation steps required to get KUBO up and running.

 

My First KUBO Deployment Screenshots

Screen Shot 2017-11-13 at 7.50.27 AM

 

Below you can see the Kubernetes cluster spun up by BOSH. I had to scale down some of the nodes due to the limited memory and storage resources I had.

 

Screen Shot 2017-11-14 at 10.08.40 PM

 

Below are the vSphere resources. The deployment consumes almost all of the resources on my 64GB RAM, 1TB storage host.

Screen Shot 2017-12-17 at 2.20.29 PM

 

The screenshot below shows the amount of storage those k8s nodes consume.

Screen Shot 2017-12-17 at 2.29.43 PM

 

This is the worker node; you can identify it from the Custom Attributes. 121.2GB of storage used. Ouch!

 

Screen Shot 2017-12-17 at 2.33.41 PM

 

That’s about it. Next, I will be setting up NSX-T for PKS. Follow the very good guide by Simon Guyennet – https://blog.inkubate.io/deploy-kubernetes-on-vsphere-with-kubo/


virtuallyGhetto Automated NSX-T 2.0 Lab Deployment Gotchas

I was very excited when William Lam developed an automated NSX-T 2.0 lab deployment, and I always wanted to try it out. I had tried to install NSX-T 2.0 manually a couple of weeks back, and I know it is quite painful, so an automated way of doing it is going to save me a lot of time. However, it is not so straightforward to use William’s script wholesale, and below I am going to list a few things to take note of. These are what I experienced over the last three days of troubleshooting.

 

A few things to note here

  • I was using PowerCLI 6.3.0. PowerCLI needs to be updated because of the new cmdlets for NSX-T. From 6.5.1 onwards, the way to upgrade PowerCLI is very different; I followed the instructions here. However, I was having some issues with Install-Module. After some googling, I finally found out that you need to install PackageManagement_x64.exe first – https://stackoverflow.com/questions/29828756/the-term-install-module-is-not-recognized-as-the-name-of-a-cmdlet
  • Once the PackageManagement PowerShell modules are installed, you can continue with the instructions, e.g. Install-Module -Name VMware.PowerCLI (see the sketch after this list).
  • With the latest version of PowerCLI, we can run the script. The script requires you to have a vCenter, so since my host is a standalone ESXi, I needed to install a vCenter. I chose the 6.5U2 VCSA and used the CLI script to install it.
  • The first time I used the script, I used a VSS for the network of the new VCSA and NSX-T Manager, but it seems the cmdlets are not able to find the virtual portgroup. Once I changed to a VDS, the script worked. These are the relevant variables from the script:
    $VMCluster = "Primp-Cluster"
    $VirtualSwitchType = "VDS" # VSS or VDS
    $VMNetwork = "dv-access333-dev"
    $VMDatastore = "himalaya-local-SATA-dc3500-1"
    $VMNetmask = "255.255.255.0"
    $VMGateway = "172.30.0.1"
    $VMDNS = "172.30.0.100"
    $VMNTP = "pool.ntp.org"
    $VMPassword = "VMware1!"
    $VMDomain = "primp-industries.com"
    $VMSyslog = "172.30.0.170"
    # Applicable to Nested ESXi only
    $VMSSH = "true"
    $VMVMFS = "false"
  • I hit some network issues, such as reverse DNS not being set up. I was also undecided whether to use the existing management subnet or create a new one, and since the ESXi host is hosting some other VMs, I had the script trying to connect to existing vCenters and hosts. Lesson learnt: try not to use the same IPs as the existing lab setup, to prevent any confusion.
  • Although my vCenter is 6.5U2, my ESXi host was still on 6.0U2. When running the portion of the script that uses PutUsbScanCodes(), I got an invalid input argument error. I suspected it had to do with the ESXi host, so I tried PutUsbScanCodes() against a recent 6.5U2 vCenter and 6.5U2 host using the script here, and true enough PutUsbScanCodes() works perfectly. After that, I went back, updated the 6.0U2 host to 6.5U2 and confirmed that PutUsbScanCodes() works correctly.
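For completeness, getting the module-based PowerCLI in place once the PackageManagement MSI is installed looks roughly like this (a sketch; run from an elevated PowerShell prompt, and the -Scope parameter is optional):

    Install-Module -Name VMware.PowerCLI -Scope CurrentUser
    Get-Module -Name VMware.PowerCLI -ListAvailable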

NSX Load Balancer for HTTPS Name-based Virtual Hosting

A few of you know I use the NSX Load Balancer for name-based virtual hosting in my home lab, as I only have a single public IP address and a few websites that I would like to host. For example, the blog you are reading now is actually hosted behind an NSX LB. Try resolving blog.acepod.com and nsx.acepod.com: they both resolve to the same IP address, i.e. 101.100.182.15, but when you access these URLs they are actually different web servers. I used to do this with an Apache proxy server, but managing the config was rather painful. Since the NSX Load Balancer can achieve the same thing and comes with a nice UI, why not? Of course, the other motivation for using the NSX LB is for the benefit of my work; getting to know the NSX LB inside out is always good.

Name-based virtual hosting for HTTP has been working well for me, and I always wanted to find out whether it would work for HTTPS. Asking around, my friends thought this requirement, i.e. multiple HTTPS websites on the same port 443 and the same IP address, should be possible, and I was referred to Server Name Indication (SNI).

So let’s see whether something like this will work for HTTPS. First, I have to find some internal web servers that can do HTTPS. I have been using the TurnKey Debian LAMP appliance for my NSX testing, so I will use those in this test.

Before testing HTTPS, let’s see HTTP in action first. These are the individual web servers, accessed directly by IP address.

Screen Shot 2017-07-25 at 5.35.27 PM

Let’s now map dev2.acepod.com and dev3.acepod.com to the same IP address, 192.168.191.36, which has been configured on the NSX ESG as a secondary IP address.
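For a quick lab test, that mapping can simply be two DNS A records (or hosts-file entries on the client) pointing both names at the secondary IP:

192.168.191.36   dev2.acepod.com
192.168.191.36   dev3.acepod.com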

Screen Shot 2017-07-25 at 5.38.38 PM

 

Screen Shot 2017-07-25 at 5.28.16 PM

 

OK, let’s now access the web servers using their FQDNs. Great, it works! The NSX LB is now looking at the Host header and pointing the request to the right pool.

Screen Shot 2017-07-25 at 5.40.41 PM

 

Well, if you are interested in the rule that made this work, here you go.

acl host_app11 hdr(Host) -i dev2.acepod.com
acl host_app12 hdr(Host) -i dev3.acepod.com
use_backend dev2acepod if host_app11
use_backend dev3acepod if host_app12

You will need to use an Application Rule. After you create the Application Rule, you have to attach it to the Virtual Server.

Screen Shot 2017-07-25 at 5.43.39 PM

Screen Shot 2017-07-25 at 5.46.14 PM

Alright, let’s now get to the goal, which is to test out HTTPS. Same test again, now with HTTPS.

Screen Shot 2017-07-25 at 5.33.10 PM

I’m going to write an application rule that is similar, but now I will use different pools. I will name the pools dev2acepod-https and dev3acepod-https.

Screen Shot 2017-07-25 at 6.03.35 PM Screen Shot 2017-07-25 at 6.03.43 PM

Screen Shot 2017-07-25 at 5.55.52 PM

Screen Shot 2017-07-25 at 5.52.55 PM

 

This is the Application Rule I used for HTTPS.

 

acl host_app21 hdr(Host) -i dev2.acepod.com
acl host_app22 hdr(Host) -i dev3.acepod.com
use_backend dev2acepod-https if host_app21
use_backend dev3acepod-https if host_app22

Next is creating a Virtual Server and attaching this Application Rule to it.

Screen Shot 2017-07-25 at 5.54.24 PM

Screen Shot 2017-07-25 at 5.55.12 PM

 

The final configuration looks like this.

Screen Shot 2017-07-25 at 5.59.10 PM

 

OK, let’s test it out. As you can see, it does not work: different URLs still go to the same pool. It uses the dev2acepod-https pool because I set it as the default pool.

Screen Shot 2017-07-25 at 6.04.29 PM

 

Let’s now take away the default pool and see how it goes.

Screen Shot 2017-07-25 at 6.06.13 PM

 

Cannot even load.

Screen Shot 2017-07-25 at 6.07.00 PM

 

The conclusion is that we would have to use different secondary IP addresses for different HTTPS URLs. Then the next question is: why use an LB for this at all, why not consider NAT?

The other thought is that maybe the application rule just does not work this way; I will have to spend some time researching the right application rule.

 

[25 July 2017 Update]

Alright, it was the application rule. After some research here, changing it to the following makes it work!

mode tcp
tcp-request inspect-delay 5s
tcp-request content accept if { req_ssl_hello_type 1 }

use_backend dev2acepod-https if { req_ssl_sni -i dev2.acepod.com }
use_backend dev3acepod-https if { req_ssl_sni -i dev3.acepod.com }
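To verify from a client that the routing is really driven by SNI, one option is to hit the same VIP with openssl and vary only the server name; the certificate subject returned should differ. A quick check, assuming the HTTPS virtual server sits on the same 192.168.191.36 VIP:

openssl s_client -connect 192.168.191.36:443 -servername dev2.acepod.com </dev/null 2>/dev/null | grep subject
openssl s_client -connect 192.168.191.36:443 -servername dev3.acepod.com </dev/null 2>/dev/null | grep subject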

Screen Shot 2017-07-25 at 6.37.24 PM

Screen Shot 2017-07-25 at 6.34.28 PM


vSAN 6.6 All Flash on OVH Private Cloud

I recently had the opportunity to set up an all-flash vSAN on OVH BMaaS to facilitate a customer NSX POC, and I took the chance to run some vSAN throughput tests.

This is the server build that we got from OVH. We selected the Mini-HG and configured it with 3 SSDs. It is physically located in the OVH BHS (Canada) DC.

Screen Shot 2017-07-25 at 5.05.15 PM

 

This was the service we selected. Basically, I used the vRack service and put the vSAN traffic on the 2nd NIC, i.e. the private NIC. The full deployment details will be another blog post.
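As a teaser for that post: putting vSAN traffic on the private NIC essentially comes down to creating a vmkernel port on the vRack network and tagging it for vSAN. A rough sketch from the ESXi shell, assuming vmk1 is that vmkernel port:

esxcli vsan network ip add -i vmk1
esxcli vsan network list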

The test was run on a Windows Server 2012 R2 machine with 2 vCPUs and 4GB of RAM, using AS SSD Benchmark.

Screen Shot 2017-07-25 at 11.59.11 AM

Screen Shot 2017-07-25 at 11.59.29 AM

Some details of the vSAN Cluster.

Screen Shot 2017-07-25 at 4.37.50 PM

 

Screen Shot 2017-07-25 at 4.42.43 PM

Screen Shot 2017-07-25 at 4.42.31 PM

Screen Shot 2017-07-25 at 4.42.56 PM

Host details. Basically, all 3 hosts have the same specifications.

Screen Shot 2017-07-25 at 4.36.55 PM

 

A look at the controllers and disks.

Screen Shot 2017-07-25 at 4.48.53 PM

Screen Shot 2017-07-25 at 4.49.18 PM

 

Some vSAN Monitoring

Screen Shot 2017-07-25 at 4.45.45 PM

Screen Shot 2017-07-25 at 4.45.21 PM

Screen Shot 2017-07-25 at 4.46.15 PM

 

 


Project Clarity – Learning to build an Angular JS App

I have always wanted to learn more about Angular JS, and I thought Project Clarity would be a great way to start. A couple of months back I tried to install npm; it failed, and I did not bother to look into it, partly because I was busy with other stuff.

Over the last couple of weeks I have been putting vSphere Integrated Containers (VIC) into my lab to prepare myself before going to a customer site for a VIC + NSX POC. I was thinking, since there is a quick way of spinning up containers, why not spin one up to try Clarity? I was also triggered by a Twitter post by Grant Orchard saying it is very easy to get started, and I have been reading Cody’s posts on the Amazon Echo and vSphere application he built using Clarity. Here we go.

Screen Shot 2017-07-12 at 11.55.34 PM

I was pointed to https://vmware.github.io/clarity/get-started. If you look below, it is really brief. I already know how to install git, but I had totally no clue about npm.

Screen Shot 2017-07-12 at 9.33.40 AM

 

A little Google research on npm shows it is the package manager that comes with the Node.js framework. Alright, cool!

Screen Shot 2017-07-12 at 9.35.48 AM

 

So I reckon I need a Linux container to start off with. CentOS, which is closest to RHEL, would be a good bet. That is after I failed with the nimmis/apache-php5 image.

Screen Shot 2017-07-12 at 9.39.16 AM

Now I tried the centos image.

docker -H 192.168.120.127:2376 --tls run --name test12 --net=external01 -it centos /bin/bash

Everything looks OK until…

Screen Shot 2017-07-12 at 9.41.43 AM

So far, these are the steps that I took.

1) yum install git
2) yum install -y gcc-c++ make
3) curl -sL https://rpm.nodesource.com/setup_6.x | bash -
   (from https://www.e2enetworks.com/help/knowledge-base/how-to-install-node-js-and-npm-on-centos/)
4) yum install -y nodejs
5) git clone https://github.com/vmware/clarity-seed.git
6) npm install [This failed! You will need to go into the clarity-seed folder!!]
7) cd clarity-seed
8) npm install [Until I hit an error]

The npm installation takes a while, and I was thinking that if it was successful, I should commit this image into my Harbor registry. I was disappointed that the build did not complete successfully.

[Update]
OK, after some more Google searching, it was found to be bzip2 related. Replacing step 7 with the below should work:
7) yum install -y bzip2
8) npm install

Some warnings, but let’s see.

Screen Shot 2017-07-12 at 10.16.10 AM

BOOM! My first Clarity App successfully running!

Screen Shot 2017-07-12 at 10.18.10 AM

OK, it still doesn’t work, because ng serve binds to localhost by default. I needed to open package.json and add the host to the start command: ng serve --host 10.10.12.5.
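For clarity, this is roughly what the change looks like, assuming the seed project’s start script simply wraps ng serve (the 10.10.12.5 address is just my container’s IP):

"scripts": {
  "start": "ng serve --host 10.10.12.5"
}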

Happiness, successfully deployed my first Clarity App!

Screen Shot 2017-07-12 at 10.56.01 PM

 

 

 


PowerShell script to customise drivers into ESXi

The Supermicro E300 requires the ixgbe driver for its 10GbE NICs, as the standard ESXi ISO does not support them natively. Therefore I am required to custom-build the ESXi ISO.

The ixgbe 4.5.1 driver is from Paul’s blog – https://tinkertry.com/how-to-install-intel-x552-vib-on-esxi-6-on-superserver-5028d-tn4t.

Download here.

The PowerShell script is from here: https://www.v-front.de/p/esxi-customizer-ps.html

These are the commands used.

PowerCLI C:\> C:\Users\Administrator\Downloads\ESXi-Customizer-PS-v2.5.ps1 -izip C:\Users\Administrator\Downloads\update-from-esxi6.0-6.0_update03.zip -pkgDir E:\pkg

PowerCLI C:\> C:\Users\Administrator\Downloads\ESXi-Customizer-PS-v2.5.ps1 -izip C:\Users\Administrator\Downloads\ESXi650-201704001.zip -pkgDir E:\pkg

Screenshot:
Screen Shot 2017-07-02 at 5.14.20 PM

You must be wondering why I can’t just update the drivers after installation. I wanted to PXE boot the ESXi installer, and somehow the NICs on the E300 that support PXE boot are the 10GbE NICs. That is the reason why I have to custom-build the ixgbe driver into the ISO.


VMware VIC – vSphere Integrated Containers Testing

Recently, partly due to my own interest and also work requirements, I wanted to test out VIC. You can read more about VIC here: https://vmware.github.io/vic-product/.

Below is a screenshot showing that I have successfully deployed a vSphere Container Host (VCH). I wasn’t successful the first time I tried to set it up just by reading the GitHub documentation. Ben Corrie released an updated VIC 1.1 installation video that helped me a lot, and I successfully deployed the VCH after following the steps in his video. You can watch the video here: https://www.youtube.com/watch?v=7WRFhJLZHJI

Screen Shot 2017-06-21 at 12.01.47 PM

 

Screen Shot 2017-06-09 at 12.18.23 AM

 

Here are some of the steps I took to create the environment. I also want to use this post as a guide listing the docker commands I used, so that I can refer back to this page when I need them in a POC or when showing a demo.

Deploying VCH

vic-machine-windows.exe update firewall --target vcenter01.acepod.com --user administrator@vsphere.local --password ****** --compute-resource Cluster03-ComputeA --thumbprint 94:0D:18:EB:93:8B:50:C2:3D:1A:56:BB:9F:10:39:29:C2:4C:58:92 --allow

vic-machine-windows.exe create --target vcenter01.acepod.com --user administrator@vsphere.local --password ****** --name VCH01 --public-network "VLAN193-External03" --public-network-ip 192.168.191.38/29 --public-network-gateway 192.168.191.33 --bridge-network vxw-dvs-80-universalwire-127-sid-8021-VIC-Bridge01 --bridge-network-range "10.11.0.0/16" --dns-server 10.206.1.10 --tls-cname=*.acepod.com --no-tlsverify --compute-resource Cluster03-ComputeA --thumbprint 94:0D:18:EB:93:8B:50:C2:3D:1A:56:BB:9F:10:39:29:C2:4C:58:92 --image-store ds-xpe01-nfs02
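Once the VCH is up, a quick way to confirm the Docker API endpoint is reachable is to query it with a standard Docker client, using the same endpoint address as in the commands below:

docker -H 192.168.191.38:2376 --tls info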

 

 

Container Creation and Management

To start a container and attach to its console
docker -H 192.168.191.38:2376 --tls run --name test3 -it busybox

To list the containers running on the host
docker -H 192.168.191.38:2376 --tls ps -a

To exit a container without shutting it down
Ctrl+P, Q (while still holding Ctrl)

To attach back to a running container
docker -H 192.168.191.38:2376 --tls attach test3

To delete the container
docker -H 192.168.191.38:2376 --tls rm test3

To start a stopped container
docker -H 192.168.191.38:2376 --tls start test3

Some other useful commands:

To list the networks
docker -H 192.168.191.38:2376 --tls network ls

To list the volumes
docker -H 192.168.191.38:2376 --tls volume ls


VM Security Tags during Disaster Recovery

If you use VM Security Tags for Security Group membership, note that these Security Tags are not applied to the VMs at the recovery site.

On the protected site.

Screen Shot 2017-05-30 at 12.52.15 AM

After using SRM for a planned migration or during a disaster recovery.

Screen Shot 2017-05-30 at 12.51.18 AM

 

I have created the same Security Tags on both the NSX Managers.

Primary NSX Manager:

Screen Shot 2017-05-30 at 12.57.09 AM

Secondary NSX Manager:

Screen Shot 2017-05-30 at 12.57.45 AM

Please let me know if you have any solution.
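One workaround I am considering (untested, so treat this purely as a sketch) is to re-apply the tags at the recovery site via the NSX API as a post-recovery step, for example from an SRM recovery plan script. The tag ID and VM ID below are placeholders, and the securitytags endpoint should be verified against your NSX version’s API guide:

curl -k -u admin:<password> -X PUT https://<secondary-nsx-manager>/api/2.0/services/securitytags/tag/<securitytag-id>/vm/<vm-id>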