rbvami – Managing the vCSA VAMI using Ruby

I have been putting together a module for managing vCSA’s VAMI using Ruby. This uses the vCenter 6.5 REST API, and the long-term plan is to build it out to cover the entire REST API.

My intention is to use this module as the basis for a Chef cookbook for managing VAMI configuration; this is mainly a learning exercise rather than a practical one.

The project is on my GitHub site; feel free to contact me if there is functionality you would like to see added.
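To give an idea of what the module wraps, here is the same kind of VAMI call made directly against the vCenter 6.5 REST API. This is just a sketch in PowerShell rather than Ruby; the vCSA address and credentials are examples, and it assumes the vCenter certificate is trusted (on PowerShell Core you could add -SkipCertificateCheck).

# vCSA address and credentials are examples - adjust for your environment
[Net.ServicePointManager]::SecurityProtocol = [Net.SecurityProtocolType]::Tls12   # force TLS 1.2 on Windows PowerShell
$vcsa = 'vcsa.lab.local'
$cred = Get-Credential

# Authenticate against the CIS REST endpoint to get a session token
$basic = [Convert]::ToBase64String([Text.Encoding]::ASCII.GetBytes("$($cred.UserName):$($cred.GetNetworkCredential().Password)"))
$token = (Invoke-RestMethod -Method Post -Uri "https://$vcsa/rest/com/vmware/cis/session" -Headers @{ Authorization = "Basic $basic" }).value

# Query a VAMI endpoint, e.g. the appliance version
Invoke-RestMethod -Uri "https://$vcsa/rest/appliance/system/version" -Headers @{ 'vmware-api-session-id' = $token }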

Deploying NX-OSv 9000 on vSphere

Cisco have recently released (1st March 2017) an updated virtual version of their Nexus 9K switch, and the good news is that this is now available as an OVA for deployment onto ESXi. We used to use VIRL in a lab, which was fine until a buggy earlier version of the virtual 9K was introduced, which broke core functionality like port channels. This new release doesn’t require the complex environment that VIRL brings, and lets you deploy a quick pair of appliances in vPC to test code against.

The download is available here, and while there are some instructions available, I did not find them particularly useful in deploying the switch to my ESXi environment. As a result, I decided to write up how I did this to hopefully save people spending time smashing their face off it.

Getting the OVA

NOTE: you will need a Cisco login to download the OVA file. My login has access to a bunch of bits, so I’m not sure exactly what the requirements are around this.

There are a few versions available from the above link, including a qcow2 (KVM) image, a .vmdk file (for rolling your own VM), a VirtualBox image (for use with VirtualBox and/or Vagrant), and an OVA (for use with Fusion, Workstation, ESXi).

Once downloaded we are ready to deploy the appliance. There are a few things to bear in mind here:

  1. This can be used to pass VM traffic between virtual machines: there are 6 connected vNICs on deployment; 1 of these simulates the mgmt0 port on the 9K, and the other 5 can pass VM traffic.
  2. vNICs 2-6 should not be attached to the management network (best practice).
  3. We will initially need to connect over a virtual serial port through the host, which requires temporarily opening up the ESXi host firewall.

Deploying the OVA

You can deploy the OVA through the vSphere Web Client or the new vSphere HTML5 Web Client, but I’ve detailed how to do this via PowerShell here, because who’s got time for clicking buttons?



# Simulator is available at:
# https://software.cisco.com/download/release.html?mdfid=286312239&softwareid=282088129&release=7.0(3)I5(1)&relind=AVAILABLE&rellifecycle=&reltype=latest
# Filename: nxosv-final.7.0.3.I5.2.ova
# Documentation: http://www.cisco.com/c/en/us/td/docs/switches/datacenter/nexus9000/sw/7-x/nx-osv/configuration/guide/b_NX-OSv_9000/b_NX-OSv_chapter_01.html

Function New-SerialPort {
  # stolen from http://access-console-port-virtual-machine.blogspot.co.uk/2013/07/add-serial-port-to-vm-through-gui-or.html
  Param(
     [string]$vmName,
     [string]$hostIP,
     [string]$prt
  ) #end
  $dev = New-Object VMware.Vim.VirtualDeviceConfigSpec
  $dev.operation = "add"
  $dev.device = New-Object VMware.Vim.VirtualSerialPort
  $dev.device.key = -1
  $dev.device.backing = New-Object VMware.Vim.VirtualSerialPortURIBackingInfo
  $dev.device.backing.direction = "server"
  $dev.device.backing.serviceURI = "telnet://"+$hostIP+":"+$prt
  $dev.device.connectable = New-Object VMware.Vim.VirtualDeviceConnectInfo
  $dev.device.connectable.connected = $true
  $dev.device.connectable.StartConnected = $true
  $dev.device.yieldOnPoll = $true

  $spec = New-Object VMware.Vim.VirtualMachineConfigSpec
  $spec.DeviceChange += $dev

  $vm = Get-VM -Name $vmName
  $vm.ExtensionData.ReconfigVM($spec)
}

# Variables - edit these...
$ovf_location = '.\nxosv-final.7.0.3.I5.1.ova'
$n9k_name = 'NXOSV-N9K-001'
$target_datastore = 'VBR_MGTESX01_Local_SSD_01'
$target_portgroup = 'vSS_Mgmt_Network'
$target_cluster = 'VBR_Mgmt_Cluster'

$vi_server = '192.168.1.222'
$vi_user = 'administrator@vsphere.local'
$vi_pass = 'VMware1!'

# set this to $true to remove non-management network interfaces, $false to leave them where they are
$remove_additional_interfaces = $true

# Don't edit below here
Import-Module VMware.PowerCLI

Connect-VIServer $vi_server -user $vi_user -pass $vi_pass

$vmhost = $((Get-Cluster $target_cluster | Get-VMHost)[0])

$ovfconfig = Get-OvfConfiguration $ovf_location

$ovfconfig.NetworkMapping.mgmt0.Value = $target_portgroup
$ovfconfig.NetworkMapping.Ethernet1_1.Value = $target_portgroup
$ovfconfig.NetworkMapping.Ethernet1_2.Value = $target_portgroup
$ovfconfig.NetworkMapping.Ethernet1_3.Value = $target_portgroup
$ovfconfig.NetworkMapping.Ethernet1_4.Value = $target_portgroup
$ovfconfig.NetworkMapping.Ethernet1_5.Value = $target_portgroup
$ovfconfig.DeploymentOption.Value = 'default'

Import-VApp $ovf_location -OvfConfiguration $ovfconfig -VMHost $vmhost -Datastore $target_datastore -DiskStorageFormat Thin -Name $n9k_name

if ($remove_additional_interfaces) {
  Get-VM $n9k_name | Get-NetworkAdapter | ?{$_.Name -ne 'Network adapter 1'} | Remove-NetworkAdapter -Confirm:$false
}

New-SerialPort -vmName $n9k_name -hostIP $($vmhost | Get-VMHostNetworkAdapter -Name vmk0 | Select -ExpandProperty IP) -prt 2000

$vmhost | Get-VMHostFirewallException -Name 'VM serial port connected over network' | Set-VMHostFirewallException -Enabled $true

Get-VM $n9k_name | Start-VM

This should start the VM. We will be able to telnet to the ESXi host on port 2000 to reach the VM console, but it will not be ready for us to do that until this screen is reached:

[screenshot]
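Connecting is just a case of pointing a telnet client at the ESXi host’s vmk0 address (not the vCenter address) on the port we chose in the script, shown here with a placeholder value:

telnet <esxi-host-vmk0-ip> 2000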

Now when we connect we should see:

[screenshot]

At this point we can enter ‘n’ and go through the normal Nexus 9K setup wizard. Once the management IP and SSH are configured, you should be able to connect via SSH; the virtual serial port can then be removed via the vSphere Client, and the ‘VM serial port connected over network’ rule should be disabled on the host firewall.
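For reference, the firewall part is a one-liner in PowerCLI, reusing the $vmhost variable from the deployment script above:

# Re-disable the serial-port-over-network rule once the switch is reachable via SSH
$vmhost | Get-VMHostFirewallException -Name 'VM serial port connected over network' | Set-VMHostFirewallException -Enabled $false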

Pimping things up

Add more NICs

Obviously we have removed the additional NICs from the VM here, which means it can only communicate over the single management port. We can add more NICs and the virtual switch will let us pass traffic over them, which could be an interesting way to push actual VM traffic through the 9K; for example, something like the snippet below.
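A quick sketch of adding one from PowerCLI. The portgroup name is an example, and the adapter type is an assumption, so check what the OVA deployed with Get-NetworkAdapter first:

# Portgroup name is an example; match -Type to the adapters the OVA deployed
# Power the VM off first if hot-add is not supported
New-NetworkAdapter -VM (Get-VM $n9k_name) -NetworkName 'vSS_VM_Network' -Type e1000 -StartConnected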

Set up vPC

The switch is fully vPC (Virtual Port Channel) capable, so we can spin up another virtual N9K and pair the two in vPC mode, which is useful for experimenting with that feature.

Bring the API!

The switch is NX-API capable, which was the main reason I wanted to deploy it, so that I could test REST calls against it. Enable NX-API by entering the ‘feature nxapi’ command; a quick way to test it is sketched below.
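This is a minimal sketch of an NX-API call from PowerShell once the feature is enabled. The switch IP and credentials are examples; NX-API accepts JSON-RPC ‘cli’ requests on its /ins endpoint over HTTP basic auth, though depending on the release it may default to HTTPS, so check ‘show nxapi’ and adjust the scheme/port to match.

# Switch IP and credentials are examples - adjust for your environment
$switch = '192.168.1.50'
$basic  = [Convert]::ToBase64String([Text.Encoding]::ASCII.GetBytes('admin:admin'))
$body   = ConvertTo-Json -Depth 4 -InputObject @(
  @{ jsonrpc = '2.0'; method = 'cli'; params = @{ cmd = 'show version'; version = 1 }; id = 1 }
)
# Send the CLI command to the switch's NX-API endpoint and get structured output back
Invoke-RestMethod -Uri "http://$switch/ins" -Method Post -Body $body `
  -ContentType 'application/json-rpc' -Headers @{ Authorization = "Basic $basic" }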

Conclusion

Hopefully this post will help people struggling to deploy this OVA, or wanting to test out NX-OS in a lab environment. I found the Cisco documentation a little confusing, so thought I would share my experiences.

Installing vSphere Integrated Containers

This document details installing and testing vSphere Integrated Containers (VIC), which recently went v1.0. It has been tested against vSphere 6.5 only.
Download VIC from my.vmware.com.
Release notes available here.
From Linux terminal:
root@LOBSANG:~# tar -xvf vic_0.8.0-7315-c8ac999.tar.gz
root@LOBSANG:~# cd vic
root@LOBSANG:~/vic# tree .
.
├── appliance.iso
├── bootstrap.iso
├── LICENSE
├── README
├── ui
│   ├── vCenterForWindows
│   │   ├── configs
│   │   ├── install.bat
│   │   ├── uninstall.bat
│   │   ├── upgrade.bat
│   │   └── utils
│   │       └── xml.exe
│   ├── VCSA
│   │   ├── configs
│   │   ├── install.sh
│   │   ├── uninstall.sh
│   │   └── upgrade.sh
│   └── vsphere-client-serenity
│       ├── com.vmware.vicui.Vicui-0.8.0
│       │   ├── plugin-package.xml
│       │   ├── plugins
│       │   │   ├── vic-ui-service.jar
│       │   │   ├── vic-ui-war.war
│       │   │   └── vim25.jar
│       │   └── vc_extension_flags
│       └── com.vmware.vicui.Vicui-0.8.0.zip
├── vic-machine-darwin
├── vic-machine-linux
├── vic-machine-windows.exe
├── vic-ui-darwin
├── vic-ui-linux
└── vic-ui-windows.exe

7 directories, 25 files
root@LOBSANG:~/vic#
Now that we have the files ready to go, we can run the install command as detailed in the GitHub repository for VIC (here). We are going to use the Linux binary:
root@VBRPHOTON01 [ ~/vic ]# ./vic-machine-linux
NAME:
  vic-machine-linux - Create and manage Virtual Container Hosts

USAGE:
  vic-machine-linux [global options] command [command options] [arguments...]

VERSION:
  v0.8.0-7315-c8ac999

COMMANDS:
  create   Deploy VCH
  delete   Delete VCH and associated resources
  ls       List VCHs
  inspect  Inspect VCH
  version  Show VIC version information
  debug    Debug VCH

GLOBAL OPTIONS:
  --help, -h     show help
  --version, -v  print the version

root@VBRPHOTON01 [ ~/vic ]#
On all hosts in the cluster you are using, create a bridge network (it has to be on a vDS); mine is called vDS_VCH_Bridge. You will also need to disable the ESXi firewall on each host, for example as sketched below.
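One way to do the firewall part from PowerCLI (a rough sketch using my lab’s cluster name; the equivalent from an SSH session on each host is ‘esxcli network firewall set --enabled false’):

# Lab use only - disables the ESXi firewall on every host in the cluster
Get-Cluster 'VBR_Mgmt_Cluster' | Get-VMHost | ForEach-Object {
  (Get-EsxCli -VMHost $_ -V2).network.firewall.set.Invoke(@{ enabled = $false })
}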
To install, we use the following command:
root@VBRPHOTON01 [ ~/vic ]# ./vic-machine-linux create --target 192.168.1.222 --image-store VBR_MGTESX01_Local_SSD_01 --name VBR-VCH-01 --user administrator@vsphere.local --password VMware1! --compute-resource VBR_Mgmt_Cluster --bridge-network vDS_VCH_Bridge --public-network vSS_Mgmt_Network --client-network vSS_Mgmt_Network --management-network vSS_Mgmt_Network --force --no-tlsverify
This is in my lab; I’m deploying to a vCenter with a single host, and don’t care about security. The output should look something like this:
INFO[2017-01-06T21:52:14Z] ### Installing VCH ####
WARN[2017-01-06T21:52:14Z] Using administrative user for VCH operation - use --ops-user to improve security (see -x for advanced help)
INFO[2017-01-06T21:52:14Z] Loaded server certificate VBR-VCH-01/server-cert.pem
WARN[2017-01-06T21:52:14Z] Configuring without TLS verify - certificate-based authentication disabled
INFO[2017-01-06T21:52:15Z] Validating supplied configuration
INFO[2017-01-06T21:52:15Z] vDS configuration OK on "vDS_VCH_Bridge"
INFO[2017-01-06T21:52:15Z] Firewall status: DISABLED on "/VBR_Datacenter/host/VBR_Mgmt_Cluster/vbrmgtesx01.virtualbrakeman.local"
WARN[2017-01-06T21:52:15Z] Firewall configuration will be incorrect if firewall is reenabled on hosts:
WARN[2017-01-06T21:52:15Z] "/VBR_Datacenter/host/VBR_Mgmt_Cluster/vbrmgtesx01.virtualbrakeman.local"
INFO[2017-01-06T21:52:15Z] Firewall must permit dst 2377/tcp outbound to VCH management interface if firewall is reenabled
INFO[2017-01-06T21:52:15Z] License check OK on hosts:
INFO[2017-01-06T21:52:15Z] "/VBR_Datacenter/host/VBR_Mgmt_Cluster/vbrmgtesx01.virtualbrakeman.local"
INFO[2017-01-06T21:52:15Z] DRS check OK on:
INFO[2017-01-06T21:52:15Z] "/VBR_Datacenter/host/VBR_Mgmt_Cluster/Resources"
INFO[2017-01-06T21:52:15Z]
INFO[2017-01-06T21:52:15Z] Creating virtual app "VBR-VCH-01"
INFO[2017-01-06T21:52:15Z] Creating appliance on target
INFO[2017-01-06T21:52:15Z] Network role "client" is sharing NIC with "public"
INFO[2017-01-06T21:52:15Z] Network role "management" is sharing NIC with "public"
INFO[2017-01-06T21:52:16Z] Uploading images for container
INFO[2017-01-06T21:52:16Z] "bootstrap.iso"
INFO[2017-01-06T21:52:16Z] "appliance.iso"
INFO[2017-01-06T21:52:22Z] Waiting for IP information
INFO[2017-01-06T21:52:35Z] Waiting for major appliance components to launch
INFO[2017-01-06T21:52:35Z] Checking VCH connectivity with vSphere target
INFO[2017-01-06T21:52:36Z] vSphere API Test: https://192.168.1.222 vSphere API target responds as expected
INFO[2017-01-06T21:52:38Z] Initialization of appliance successful
INFO[2017-01-06T21:52:38Z]
INFO[2017-01-06T21:52:38Z] VCH Admin Portal:
INFO[2017-01-06T21:52:38Z] https://192.168.1.46:2378
INFO[2017-01-06T21:52:38Z]
INFO[2017-01-06T21:52:38Z] Published ports can be reached at:
INFO[2017-01-06T21:52:38Z] 192.168.1.46
INFO[2017-01-06T21:52:38Z]
INFO[2017-01-06T21:52:38Z] Docker environment variables:
INFO[2017-01-06T21:52:38Z] DOCKER_HOST=192.168.1.46:2376
INFO[2017-01-06T21:52:38Z]
INFO[2017-01-06T21:52:38Z] Environment saved in VBR-VCH-01/VBR-VCH-01.env
INFO[2017-01-06T21:52:38Z]
INFO[2017-01-06T21:52:38Z] Connect to docker:
INFO[2017-01-06T21:52:38Z] docker -H 192.168.1.46:2376 --tls info
INFO[2017-01-06T21:52:38Z] Installer completed successfully
Now we can check the state of our remote VIC host with:
root@VBRPHOTON01 [ ~ ]# docker -H tcp://192.168.1.46:2376 --tls info
Containers: 0
 Running: 0
 Paused: 0
 Stopped: 0
Images: 0
Server Version: v0.8.0-7315-c8ac999
Storage Driver: vSphere Integrated Containers v0.8.0-7315-c8ac999 Backend Engine
VolumeStores:
vSphere Integrated Containers v0.8.0-7315-c8ac999 Backend Engine: RUNNING
 VCH mhz limit: 2419 Mhz
 VCH memory limit: 27.88 GiB
 VMware Product: VMware vCenter Server
 VMware OS: linux-x64
 VMware OS version: 6.5.0
Execution Driver: vSphere Integrated Containers v0.8.0-7315-c8ac999 Backend Engine
Plugins:
 Volume:
 Network: bridge
Operating System: linux-x64
OSType: linux-x64
Architecture: x86_64
CPUs: 2419
Total Memory: 27.88 GiB
Name: VBR-VCH-01
ID: vSphere Integrated Containers
Docker Root Dir:
Debug mode (client): false
Debug mode (server): false
Registry: registry-1.docker.io
root@VBRPHOTON01 [ ~ ]#
This shows us it’s up and running. Now we can run our first container on it:
root@VBRPHOTON01 [ ~ ]# docker -H tcp://192.168.1.46:2376 --tls run hello-world
Unable to find image 'hello-world:latest' locally
Pulling from library/hello-world
c04b14da8d14: Pull complete
a3ed95caeb02: Pull complete
Digest: sha256:548e9719abe62684ac7f01eea38cb5b0cf467cfe67c58b83fe87ba96674a4cdd
Status: Downloaded newer image for library/hello-world:latest

Hello from Docker!
This message shows that your installation appears to be working correctly.

To generate this message, Docker took the following steps:
 1. The Docker client contacted the Docker daemon.
 2. The Docker daemon pulled the "hello-world" image from the Docker Hub.
 3. The Docker daemon created a new container from that image which runs the
  executable that produces the output you are currently reading.
 4. The Docker daemon streamed that output to the Docker client, which sent it
  to your terminal.

To try something more ambitious, you can run an Ubuntu container with:
 $ docker run -it ubuntu bash

Share images, automate workflows, and more with a free Docker Hub account:
 https://hub.docker.com

For more examples and ideas, visit:
 https://docs.docker.com/engine/userguide/

root@VBRPHOTON01 [ ~ ]#
We can see this under vSphere as follows:

[screenshot: the VCH vApp in the vSphere client, showing the powered-off container VM]

So our container host itself is a VM under a vApp, and all containers are spun up as VMs under that vApp. As we can see here, the container ‘VM’ is powered off. We can confirm this by running ‘docker ps’ against our remote host:
root@VBRPHOTON01 [ ~ ]# docker -H tcp://192.168.1.46:2376 --tls ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
root@VBRPHOTON01 [ ~ ]# docker -H tcp://192.168.1.46:2376 --tls ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
24598201e216 hello-world "/hello" 56 seconds ago Exited (0) 47 seconds ago silly_davinci
root@VBRPHOTON01 [ ~ ]# docker -H tcp://192.168.1.46:2376 --tls rm 24598201e216
24598201e216
root@VBRPHOTON01 [ ~ ]# docker -H tcp://192.168.1.46:2376 --tls ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
root@VBRPHOTON01 [ ~ ]#
This container is now tidied up in vSphere:
[screenshot: the VCH vApp with the container VM removed]
So now we have VIC installed and can spin up containers. In the next post we will install VMware Harbor and use that as our trusted registry.

Integrating Platform Services Controller/vCSA 6.0 with Active Directory using CLI

I am currently automating the build and configuration of a VMware vCenter environment, and the internet has been a great source of material in helping me with this, particularly William Lam and Brian Graf‘s websites. It seems VMware have done a great job with the changes in vSphere 6.0 of enabling automated deployments; this follows the general industry trend of driving automation and orchestration in everything we do in the systems administration world.

One thing I needed to do, and could not find any information on, was joining my standalone Platform Services Controller (PSC) to an AD domain. This is easy enough in the GUI, and is documented here, but it was important for me to automate it, so I trawled through the CLI on my PSC to figure out how to do it.

I stumbled across the following command, which joins the PSC to the AD domain of your choosing:

/usr/lib/vmware-vmafd/bin/vmafd-cli join-ad --server-name <server name> --user-name <user-name> --password <password> --machine-name <machine name> --domain-name <domain name>

Once this is completed, the PSC will need restarting to enable the change, which will add the PSC to Active Directory. The next challenge was finding a scripted method to add the identity source; once the identity source is added, permissions can be set up as normal in vCenter using it.

Again, I had to trawl through the PSC OS to find this; the script is as follows:

/usr/lib/vmidentity/tools/scripts/sso-add-native-ad-idp.sh <Native-Active-Dir-Domain-Name>

Both of these can be carried out through an SSH session to your PSC (or embedded PSC/vCSA server). Assuming you have BASH enabled on your PSC, you can invoke them remotely using the PowerCLI ‘Invoke-VMScript’ cmdlet, along the lines of the sketch below. This should help in fully automating the deployment of a vCenter environment.
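A rough sketch of what that could look like (the VM name and guest credentials are examples, and the vmafd-cli arguments are the same placeholders as above):

# VM name and guest credentials are examples - adjust for your environment
$joinCmd = '/usr/lib/vmware-vmafd/bin/vmafd-cli join-ad --server-name <server name> --user-name <user-name> --password <password> --machine-name <machine name> --domain-name <domain name>'
# Runs the command inside the PSC guest via VMware Tools
Invoke-VMScript -VM 'PSC01' -GuestUser 'root' -GuestPassword 'VMware1!' -ScriptText $joinCmd -ScriptType Bash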

As an aside, one issue I did have, which is discussed in the VMware forums, is that I was getting the error ‘Error while extracting local SSO users’ when enumerating users/groups from my AD in the VMware GUI. This was fixed by creating a PTR record in my domain’s DNS for the Domain Controller; it seems the new VMware SSO needs this at some point.

I hope this is useful to people, and hopefully VMware will document this sort of automation in the future. In the meantime, as I said above, William Lam and Brian Graf’s sites are a good source of information.