VMworld US 2017 – Pre-Conference Analysis

It’s VMworld US 2017 this week, starting in just a few hours, and as usual we can likely expect some huge announcements coming out of Las Vegas.

VMware have had a great year; their Q2 FY2018 earnings were announced a few days back, and they appear to be in great shape, with NSX and VSAN leading the charge with strong revenue growth. Their share price has definitely benefitted, now back over $100 per share for the first time in four years (source), and this is great news at a time when the general outlook is doom and gloom as the march of Public Cloud continues.

Things we can no doubt expect to hear about this week:

  • VMware Cloud on AWS – this is the monster which will no doubt dominate the week. Announced around a year ago, and in private beta earlier this year, it has now been made GA in the US West (Oregon) region only, and is going to provide a long list of benefits to those who can afford it. I expect we will see details cemented over the next few days, along with talk of integration with existing AWS services, which is where most of the value of this service over using a vCAN partner lies. The page is now live on AWS’ website, available here.
  • VMware Horizon Air on Azure – not about to let Amazon have all the cloudy goodness, VMware will no doubt tout and elaborate on this new service, announced back in May, either this week or at VMworld Europe in a couple of weeks. It is going to make building a flexible Horizon estate easier than ever, and promises some additional compatibility with Microsoft’s desktop software offerings. According to the VMware FAQ on the service, it should be just about ready to drop.
  • AppDefense – Gregg Robertson (@thesaffageek) has released a post about this today, but things have been bubbling up on it for some time, from talk of Project Goldilocks around a year ago to more recent teasings. It will be interesting to see more information as this develops; application security management is a growing market, as NSX has shown, and from the looks of it at the moment, AppDefense will give valuable insights into what’s running and how it interacts. Coupled with vRealize Network Insight and NSX, this could be a powerful set of tools.
  • VSAN – VSAN is now reaching maturity, and has a tonne of production customers if the numbers are to be believed. It was talked about a long while back that VSAN is an object store under the covers, and it is now able to serve iSCSI storage. I expect this to continue, with object and file storage being served out of VSAN in coming versions; hopefully we will see some announcements around this in the coming days.
  • vSphere Next – it has been getting on for a year since vSphere 6.5 came out, and it has proven itself to be a great release, with improved management and stability revitalising an aging product. The recent announcement of the coming death of the Flex client and the Windows vCenter install will, I would imagine, foreshadow the announcement this week of the final vSphere version bearing these relics.
  • Other Stuff – it seems some bits were announced early including:
    • Cost Insight – a tool to manage costs and make recommendations about placing of workloads across AWS, Azure, and on-premises
    • Discovery – a tool helping you discover workloads across clouds and organise them logically
    • NSX Cloud – a SaaS-based offering of NSX, described by The Register as ‘NSX-as-a-Service’, no doubt for public cloud usage
    • Wavefront – performance monitoring for cloud native apps

I am hoping to throw more blog posts out as the week goes on and I get some more time to check things out. In the meantime, the General Session streams can be found here, the vBrownBag/VMTN Tech Talks are available here, and there will be up-to-the-minute news in all the usual places.


Goodbye to the Past

This week, in the lead up to this year’s VMworld US, VMware have announced the impending death of both vCenter on Windows, and the vSphere Web Client.

Each of these will be deprecated in the next major version of vSphere (assuming 7.0), and removed in the next major version after that (assuming 8.0). Quite when we will see these releases is still a matter of speculation, but with VMworld this week it is possible 7.0 will be slated for release in Winter 2017.

vCenter Server Appliance vs Windows

The vCenter Server Appliance, or vCSA, has been around since vSphere 5.0, and over time has become more mature and more stable, moving from VMware’s licensed version of SUSE Linux to the modern and lightweight Photon OS.

These changes have turned the appliance into a performant and reliable powerhouse; important qualities for what is obviously a critical and central part of any modern vSphere datacenter.

Since vSphere 6.0, the vCSA has had at least feature parity with vCenter installed on Windows, and the added simplicity of its installation – no VUM to install separately, no MSSQL database to stand up, and no extra Windows instance to stroke – makes choosing the vCSA over Windows a no-brainer.

As things have moved on, the migration tools have matured too, meaning that migration to vCSA 6.5 Update 1 is a really straightforward affair from most starting points.

The death of flash – long live HTML5

The vSphere Web Client has been a contentious tool since its introduction in vSphere 5.0; often unstable and poorly performing, it has nonetheless been the only way to access newer features of vSphere such as VSAN and SPBM, as well as newer versions of vSphere Update Manager and Site Recovery Manager.

Added to this is the raft of security issues facing Flash, and its general fall from grace over the last ten years. Adobe have now decreed that Flash will die in 2020, which is great news for desktop browsers, mobile devices, and the internet in general.

VMware responded to customer feedback on the vSphere Web Client (or Flex Client) by releasing the HTML5 Web Client under their Flings program back in March 2016, and shipped vSphere 6.5 with this installed alongside the Flex client.

While the HTML5 client still does not have feature parity at this time, the rate at which it is catching up is impressive, and we can reasonably expect it to match, and possibly overtake, the Flex client in terms of features in the next 6-12 months.


The removal of both of these tools is, for me at least, great news. I have had no desire to run a Windows vCenter since probably 5.5, which was four years ago now. Likewise, the vSphere Web Client has been a necessary evil since the same time, especially when it came to using third party integrations and newer features.

Hopefully the removal of these legacy chains will mean that development is accelerated as engineering resource is freed up. It will put some burden on the engineering teams of VMware ecosystem partners, who will need to release new versions of their plugins, if they haven’t already, using the gorgeous HTML5 Clarity interface, but those partners should already be moving in this direction anyway.

rbvami – Managing the vCSA VAMI using Ruby

I have been putting together a module for managing the vCSA’s VAMI using Ruby. It uses the vCenter 6.5 REST API, and the long-term plan is to build it out to cover the entire REST API.

My intention is to use this module as the basis for a Chef cookbook for managing VAMI configuration; it is mainly a learning exercise rather than a practical one.
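For anyone curious what the module wraps under the covers, the session bootstrap against the vCenter 6.5 REST API looks roughly like this. A minimal sketch in Python rather than Ruby; the hostname and credentials are placeholders of mine, not part of rbvami:

```python
import base64
import urllib.request

# Placeholders - swap in your own vCenter and credentials
VCENTER = "vcsa01.lab.local"
USER, PASS = "administrator@vsphere.local", "VMware1!"

# POST /rest/com/vmware/cis/session with HTTP basic auth returns a session
# token; subsequent calls pass it in the 'vmware-api-session-id' header.
token = base64.b64encode(f"{USER}:{PASS}".encode()).decode()
req = urllib.request.Request(
    f"https://{VCENTER}/rest/com/vmware/cis/session",
    method="POST",
    headers={"Authorization": f"Basic {token}"},
)
# session_id = json.loads(urllib.request.urlopen(req).read())["value"]
# ...then e.g. GET /rest/appliance/networking with that header for VAMI data
```

The module effectively does the same dance in Ruby, then layers resource-style methods over the appliance endpoints.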

The project is on my GitHub site, feel free to contact me if there is functionality you would like to see added.

Deploying NX-OSv 9000 on vSphere

Cisco released an updated virtual version of their Nexus 9K switch on 1st March 2017, and the good news is that it is now available as an OVA for deployment onto ESXi. We used to use VIRL in a lab, which was fine until a buggy earlier version of the virtual 9K was introduced which broke core functionality like port channels. This new release doesn’t require the complex environment that VIRL brings, and lets you quickly deploy a pair of appliances in vPC to test code against.

The download is available here, and while there are some instructions available, I did not find them particularly useful when deploying the switch to my ESXi environment. As a result, I decided to write up how I did it, to hopefully save people spending time smashing their face off it.

Getting the OVA

NOTE: you will need a Cisco login to download the OVA file. My login has access to a bunch of bits, so I am not sure exactly what the entitlement requirements are.

There are a few versions available from the above link, including a qcow2 (KVM) image, a .vmdk file (for rolling your own VM), a VirtualBox image (for use with VirtualBox and/or Vagrant), and an OVA (for use with Fusion, Workstation, ESXi).

Once downloaded, we are ready to deploy the appliance. There are a few things to bear in mind here:

  1. This can be used to pass traffic between virtual machines: there are 6 connected vNICs on deployment; 1 of these simulates the mgmt0 port on the 9K, and the other 5 are able to carry VM traffic.
  2. vNICs 2-6 should not be attached to the management network (best practice).
  3. We will need to connect initially over a virtual serial port through the host, which will require opening up the ESXi host firewall temporarily.

Deploying the OVA

You can deploy the OVA through the vSphere Web Client or the new vSphere HTML5 Web Client; I’ve detailed how to do it via PowerShell here, because who’s got time for clicking buttons?


# Simulator is available at:
# https://software.cisco.com/download/release.html?mdfid=286312239&softwareid=282088129&release=7.0(3)I5(1)&relind=AVAILABLE&rellifecycle=&reltype=latest
# Filename: nxosv-final.7.0.3.I5.2.ova
# Documentation: http://www.cisco.com/c/en/us/td/docs/switches/datacenter/nexus9000/sw/7-x/nx-osv/configuration/guide/b_NX-OSv_9000/b_NX-OSv_chapter_01.html

Function New-SerialPort {
  # adapted from http://access-console-port-virtual-machine.blogspot.co.uk/2013/07/add-serial-port-to-vm-through-gui-or.html
  Param (
    [string]$vmName,  # VM to add the serial port to
    [string]$hostIP,  # ESXi host IP the telnet listener binds to
    [string]$prt      # TCP port for the telnet listener
  ) #end
  $dev = New-Object VMware.Vim.VirtualDeviceConfigSpec
  $dev.operation = "add"
  $dev.device = New-Object VMware.Vim.VirtualSerialPort
  $dev.device.key = -1
  $dev.device.backing = New-Object VMware.Vim.VirtualSerialPortURIBackingInfo
  $dev.device.backing.direction = "server"
  $dev.device.backing.serviceURI = "telnet://"+$hostIP+":"+$prt
  $dev.device.connectable = New-Object VMware.Vim.VirtualDeviceConnectInfo
  $dev.device.connectable.connected = $true
  $dev.device.connectable.StartConnected = $true
  $dev.device.yieldOnPoll = $true

  $spec = New-Object VMware.Vim.VirtualMachineConfigSpec
  $spec.DeviceChange += $dev

  $vm = Get-VM -Name $vmName
  $vm.ExtensionData.ReconfigVM($spec)
} #end Function

# Variables - edit these...
$ovf_location = '.\nxosv-final.7.0.3.I5.1.ova'
$n9k_name = 'NXOSV-N9K-001'
$target_datastore = 'VBR_MGTESX01_Local_SSD_01'
$target_portgroup = 'vSS_Mgmt_Network'
$target_cluster = 'VBR_Mgmt_Cluster'

$vi_server = ''
$vi_user = 'administrator@vsphere.local'
$vi_pass = 'VMware1!'

# set this to $true to remove non-management network interfaces, $false to leave them where they are
$remove_additional_interfaces = $true

# Don't edit below here
Import-Module VMware.PowerCLI

Connect-VIServer $vi_server -user $vi_user -pass $vi_pass

$vmhost = $((Get-Cluster $target_cluster | Get-VMHost)[0])

$ovfconfig = Get-OvfConfiguration $ovf_location

$ovfconfig.NetworkMapping.mgmt0.Value = $target_portgroup
$ovfconfig.NetworkMapping.Ethernet1_1.Value = $target_portgroup
$ovfconfig.NetworkMapping.Ethernet1_2.Value = $target_portgroup
$ovfconfig.NetworkMapping.Ethernet1_3.Value = $target_portgroup
$ovfconfig.NetworkMapping.Ethernet1_4.Value = $target_portgroup
$ovfconfig.NetworkMapping.Ethernet1_5.Value = $target_portgroup
$ovfconfig.DeploymentOption.Value = 'default'

Import-VApp $ovf_location -OvfConfiguration $ovfconfig -VMHost $vmhost -Datastore $target_datastore -DiskStorageFormat Thin -Name $n9k_name

if ($remove_additional_interfaces) {
  Get-VM $n9k_name | Get-NetworkAdapter | ?{$_.Name -ne 'Network adapter 1'} | Remove-NetworkAdapter -Confirm:$false
}

New-SerialPort -vmName $n9k_name -hostIP $($vmhost | Get-VMHostNetworkAdapter -Name vmk0 | Select -ExpandProperty IP) -prt 2000

$vmhost | Get-VMHostFirewallException -Name 'VM serial port connected over network' | Set-VMHostFirewallException -Enabled $true

Get-VM $n9k_name | Start-VM

This should start the VM; we will be able to telnet into the host on port 2000 to reach the VM console, but it will not be ready for us to do that until this screen is reached:


Now when we connect we should see:


At this point we can enter ‘n’ and go through the normal Nexus 9K setup wizard. Once the management IP and SSH are configured, you should be able to connect via SSH; the virtual serial port can then be removed via the vSphere Client, and the ‘VM serial port connected over network’ rule disabled on the host firewall.

Pimping things up

Add more NICs

Obviously we have removed the additional NICs from the VM here, which leaves it talking only over the single management port. We can add a bunch more NICs, and the virtual switch will let us use them to talk on; this could be an interesting use case for passing actual VM traffic through the 9K.

Set up vPC

The switch is fully vPC (Virtual Port Channel) capable, so we can spin up another virtual N9K and put the pair in vPC mode, which is useful for experimenting with that feature.

Bring the API!

The switch is NX-API capable, which was the main reason I wanted to deploy it, so that I could test REST calls against it. Enable it by entering the ‘feature nxapi’ command.
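With the feature enabled, NX-API accepts JSON bodies POSTed to the switch’s /ins endpoint. A minimal sketch of building such a request; the management IP and credentials are placeholders of mine, so swap in your own before sending anything:

```python
import json

# Placeholder - swap in your switch's mgmt0 address
NXAPI_URL = "http://192.168.100.10/ins"

def show_cmd_payload(cmd):
    """Build the ins_api body NX-API expects for a single show command."""
    return json.dumps({
        "ins_api": {
            "version": "1.0",
            "type": "cli_show",
            "chunk": "0",
            "sid": "1",
            "input": cmd,
            "output_format": "json",
        }
    })

payload = show_cmd_payload("show version")
# POST 'payload' to NXAPI_URL with Content-Type application/json and the
# switch credentials, e.g. with requests:
# requests.post(NXAPI_URL, data=payload, auth=("admin", "admin"),
#               headers={"Content-Type": "application/json"})
```

The switch also ships a built-in NX-API sandbox page that will generate these bodies for you, which is handy for checking the exact format.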


Hopefully this post will help people struggling to deploy this OVA, or wanting to test out NX-OS in a lab environment. I found the Cisco documentation a little confusing, so thought I would share my experiences.

Installing vSphere Integrated Containers

This document details installing and testing vSphere Integrated Containers (VIC), which recently went v1.0. It has been tested against vSphere 6.5 only.
Download VIC from my.vmware.com.
Release notes are available here.
From a Linux terminal:
root@LOBSANG:~# tar -xvf vic_0.8.0-7315-c8ac999.tar.gz
root@LOBSANG:~# cd vic
root@LOBSANG:~/vic# tree .
├── appliance.iso
├── bootstrap.iso
├── ui
│   ├── vCenterForWindows
│   │   ├── configs
│   │   ├── install.bat
│   │   ├── uninstall.bat
│   │   ├── upgrade.bat
│   │   └── utils
│   │       └── xml.exe
│   ├── VCSA
│   │   ├── configs
│   │   ├── install.sh
│   │   ├── uninstall.sh
│   │   └── upgrade.sh
│   └── vsphere-client-serenity
│       ├── com.vmware.vicui.Vicui-0.8.0
│       │   ├── plugin-package.xml
│       │   ├── plugins
│       │   │   ├── vic-ui-service.jar
│       │   │   ├── vic-ui-war.war
│       │   │   └── vim25.jar
│       │   └── vc_extension_flags
│       └── com.vmware.vicui.Vicui-0.8.0.zip
├── vic-machine-darwin
├── vic-machine-linux
├── vic-machine-windows.exe
├── vic-ui-darwin
├── vic-ui-linux
└── vic-ui-windows.exe

7 directories, 25 files
Now we have the files ready to go, we can run the install command as detailed in the GitHub repository for VIC (here). We are going to use Linux here:
root@VBRPHOTON01 [ ~/vic ]# ./vic-machine-linux
  vic-machine-linux - Create and manage Virtual Container Hosts

  vic-machine-linux [global options] command [command options] [arguments...]


  create Deploy VCH
  delete Delete VCH and associated resources
  ls List VCHs
  inspect Inspect VCH
  version Show VIC version information
  debug Debug VCH

  --help, -h show help
  --version, -v print the version

root@VBRPHOTON01 [ ~/vic ]#
On all hosts in the cluster you are using, create a bridge network (it has to be on a vDS) – mine is called vDS_VCH_Bridge – and disable the ESXi firewall by doing this.
To install we use the command as follows:
root@VBRPHOTON01 [ ~/vic ]# ./vic-machine-linux create --target --image-store VBR_MGTESX01_Local_SSD_01 --name VBR-VCH-01 --user administrator@vsphere.local --password VMware1! --compute-resource VBR_Mgmt_Cluster --bridge-network vDS_VCH_Bridge --public-network vSS_Mgmt_Network --client-network vSS_Mgmt_Network --management-network vSS_Mgmt_Network --force --no-tlsverify
This is in my lab; I’m deploying to a vCenter with a single host, and don’t care about security. The output should look something like this:
INFO[2017-01-06T21:52:14Z] ### Installing VCH ####
WARN[2017-01-06T21:52:14Z] Using administrative user for VCH operation - use --ops-user to improve security (see -x for advanced help)
INFO[2017-01-06T21:52:14Z] Loaded server certificate VBR-VCH-01/server-cert.pem
WARN[2017-01-06T21:52:14Z] Configuring without TLS verify - certificate-based authentication disabled
INFO[2017-01-06T21:52:15Z] Validating supplied configuration
INFO[2017-01-06T21:52:15Z] vDS configuration OK on "vDS_VCH_Bridge"
INFO[2017-01-06T21:52:15Z] Firewall status: DISABLED on "/VBR_Datacenter/host/VBR_Mgmt_Cluster/vbrmgtesx01.virtualbrakeman.local"
WARN[2017-01-06T21:52:15Z] Firewall configuration will be incorrect if firewall is reenabled on hosts:
WARN[2017-01-06T21:52:15Z] "/VBR_Datacenter/host/VBR_Mgmt_Cluster/vbrmgtesx01.virtualbrakeman.local"
INFO[2017-01-06T21:52:15Z] Firewall must permit dst 2377/tcp outbound to VCH management interface if firewall is reenabled
INFO[2017-01-06T21:52:15Z] License check OK on hosts:
INFO[2017-01-06T21:52:15Z] "/VBR_Datacenter/host/VBR_Mgmt_Cluster/vbrmgtesx01.virtualbrakeman.local"
INFO[2017-01-06T21:52:15Z] DRS check OK on:
INFO[2017-01-06T21:52:15Z] "/VBR_Datacenter/host/VBR_Mgmt_Cluster/Resources"
INFO[2017-01-06T21:52:15Z] Creating virtual app "VBR-VCH-01"
INFO[2017-01-06T21:52:15Z] Creating appliance on target
INFO[2017-01-06T21:52:15Z] Network role "client" is sharing NIC with "public"
INFO[2017-01-06T21:52:15Z] Network role "management" is sharing NIC with "public"
INFO[2017-01-06T21:52:16Z] Uploading images for container
INFO[2017-01-06T21:52:16Z] "bootstrap.iso"
INFO[2017-01-06T21:52:16Z] "appliance.iso"
INFO[2017-01-06T21:52:22Z] Waiting for IP information
INFO[2017-01-06T21:52:35Z] Waiting for major appliance components to launch
INFO[2017-01-06T21:52:35Z] Checking VCH connectivity with vSphere target
INFO[2017-01-06T21:52:36Z] vSphere API Test: vSphere API target responds as expected
INFO[2017-01-06T21:52:38Z] Initialization of appliance successful
INFO[2017-01-06T21:52:38Z] VCH Admin Portal:
INFO[2017-01-06T21:52:38Z] Published ports can be reached at:
INFO[2017-01-06T21:52:38Z] Docker environment variables:
INFO[2017-01-06T21:52:38Z] DOCKER_HOST=
INFO[2017-01-06T21:52:38Z] Environment saved in VBR-VCH-01/VBR-VCH-01.env
INFO[2017-01-06T21:52:38Z] Connect to docker:
INFO[2017-01-06T21:52:38Z] docker -H --tls info
INFO[2017-01-06T21:52:38Z] Installer completed successfully
Now we can check the state of our remote VIC host with:
root@VBRPHOTON01 [ ~ ]# docker -H tcp:// --tls info
Containers: 0
 Running: 0
 Paused: 0
 Stopped: 0
Images: 0
Server Version: v0.8.0-7315-c8ac999
Storage Driver: vSphere Integrated Containers v0.8.0-7315-c8ac999 Backend Engine
vSphere Integrated Containers v0.8.0-7315-c8ac999 Backend Engine: RUNNING
 VCH mhz limit: 2419 Mhz
 VCH memory limit: 27.88 GiB
 VMware Product: VMware vCenter Server
 VMware OS: linux-x64
 VMware OS version: 6.5.0
Execution Driver: vSphere Integrated Containers v0.8.0-7315-c8ac999 Backend Engine
 Network: bridge
Operating System: linux-x64
OSType: linux-x64
Architecture: x86_64
CPUs: 2419
Total Memory: 27.88 GiB
Name: VBR-VCH-01
ID: vSphere Integrated Containers
Docker Root Dir:
Debug mode (client): false
Debug mode (server): false
Registry: registry-1.docker.io
root@VBRPHOTON01 [ ~ ]#
This shows us it’s up and running. Now we can run our first container on here by doing:
root@VBRPHOTON01 [ ~ ]# docker -H tcp:// --tls run hello-world
Unable to find image 'hello-world:latest' locally
Pulling from library/hello-world
c04b14da8d14: Pull complete
a3ed95caeb02: Pull complete
Digest: sha256:548e9719abe62684ac7f01eea38cb5b0cf467cfe67c58b83fe87ba96674a4cdd
Status: Downloaded newer image for library/hello-world:latest

Hello from Docker!
This message shows that your installation appears to be working correctly.

To generate this message, Docker took the following steps:
 1. The Docker client contacted the Docker daemon.
 2. The Docker daemon pulled the "hello-world" image from the Docker Hub.
 3. The Docker daemon created a new container from that image which runs the
  executable that produces the output you are currently reading.
 4. The Docker daemon streamed that output to the Docker client, which sent it
  to your terminal.

To try something more ambitious, you can run an Ubuntu container with:
 $ docker run -it ubuntu bash

Share images, automate workflows, and more with a free Docker Hub account:

For more examples and ideas, visit:

root@VBRPHOTON01 [ ~ ]#
We can see this under vSphere as follows:


So our container host itself is a VM under a vApp, and all containers are spun up as VMs under that vApp. As we can see here, the container ‘VM’ is powered off. This can be seen further by running ‘docker ps’ against our remote host:
root@VBRPHOTON01 [ ~ ]# docker -H tcp:// --tls ps
root@VBRPHOTON01 [ ~ ]# docker -H tcp:// --tls ps -a
24598201e216 hello-world "/hello" 56 seconds ago Exited (0) 47 seconds ago silly_davinci
root@VBRPHOTON01 [ ~ ]# docker -H tcp:// --tls rm 24598201e216
root@VBRPHOTON01 [ ~ ]# docker -H tcp:// --tls ps -a
root@VBRPHOTON01 [ ~ ]#
This container is now tidied up in vSphere:
So now we have VIC installed and can spin up containers. In the next post we will install VMware Harbor and use that as our trusted registry.

Integrating Platform Services Controller/vCSA 6.0 with Active Directory using CLI

I am currently automating the build and configuration of a VMware vCenter environment, and the internet has been a great source of material in helping me with this, particularly William Lam’s and Brian Graf’s websites. It seems VMware have done a great job with the changes in vSphere 6.0 of enabling automated deployments, following the general industry trend of driving automation and orchestration into everything we do in the Systems Administration world.

One thing I needed to do which I could not find any information on was joining my standalone Platform Services Controller (PSC) to an AD domain. This is easy enough in the GUI, and is documented here, but it was important for me to automate it, so I trawled through the CLI on my PSC to figure out how.

I stumbled across the following command, which joins the PSC to the AD domain of your choosing:

/usr/lib/vmware-vmafd/bin/vmafd-cli join-ad --server-name <server name> --user-name <user-name> --password <password> --machine-name <machine name> --domain-name <domain name>

Once this is completed, the PSC will need restarting to enable the change, which adds the PSC to Active Directory. The next challenge was finding a scripted method to add the identity source; once the identity source is added, permissions can be set up as normal in vCenter using it.

Again, I had to trawl through the PSC OS to find this; the script is as follows:

/usr/lib/vmidentity/tools/scripts/sso-add-native-ad-idp.sh <Native-Active-Dir-Domain-Name>

Both of these can be carried out through an SSH session to your PSC (or embedded PSC/VCSA server). Assuming you have BASH enabled on your PSC, you can invoke them remotely using the PowerCLI ‘Invoke-VMScript’ cmdlet, which should help in fully automating the deployment of a vCenter environment.
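If you would rather drive both steps from a generic automation host over plain SSH, a minimal sketch in Python that builds the two invocations might look like this. The PSC hostname, helper names, and the key-based-auth assumption are all mine for illustration; only the two command paths come from the PSC itself:

```python
# Placeholder - swap in your PSC/VCSA address; key-based SSH auth assumed
PSC_HOST = "psc01.lab.local"

def ssh_cmd(remote_cmd):
    """argv list to run a command on the PSC over SSH."""
    return ["ssh", f"root@{PSC_HOST}", remote_cmd]

def join_ad_cmd(server, user, password, machine, domain):
    # Mirrors the vmafd-cli join-ad invocation above
    return ssh_cmd(
        "/usr/lib/vmware-vmafd/bin/vmafd-cli join-ad"
        f" --server-name {server} --user-name {user}"
        f" --password {password} --machine-name {machine}"
        f" --domain-name {domain}"
    )

def add_idp_cmd(domain):
    # Mirrors the identity source script above
    return ssh_cmd(
        f"/usr/lib/vmidentity/tools/scripts/sso-add-native-ad-idp.sh {domain}"
    )

# To run for real: subprocess.run(join_ad_cmd(...), check=True),
# restart the PSC, then subprocess.run(add_idp_cmd(...), check=True)
```

The same two commands are what you would feed to ‘Invoke-VMScript’ if you stay inside PowerCLI.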

As an aside, one issue I did have, which is discussed in the VMware forums, is that I was getting the error ‘Error while extracting local SSO users’ when enumerating users/groups from my AD in the VMware GUI. This was fixed by creating a PTR record in DNS for the Domain Controller; it seems the new VMware SSO needs this at some point.

I hope this is useful to people, and hopefully VMware will document this sort of automation in the future. In the meantime, as I said above, William Lam’s and Brian Graf’s sites are a good source of information.
