Installing vSphere Integrated Containers

This document details installing and testing vSphere Integrated Containers (VIC), which recently went v1.0; note that the walkthrough below was captured against a 0.8.0 build. It has been tested against vSphere 6.5 only.
Download VIC from my.vmware.com.
Release notes available here.
From a Linux terminal:
root@LOBSANG:~# tar -xvf vic_0.8.0-7315-c8ac999.tar.gz
root@LOBSANG:~# cd vic
root@LOBSANG:~/vic# tree .
.
├── appliance.iso
├── bootstrap.iso
├── LICENSE
├── README
├── ui
│   ├── vCenterForWindows
│   │   ├── configs
│   │   ├── install.bat
│   │   ├── uninstall.bat
│   │   ├── upgrade.bat
│   │   └── utils
│   │       └── xml.exe
│   ├── VCSA
│   │   ├── configs
│   │   ├── install.sh
│   │   ├── uninstall.sh
│   │   └── upgrade.sh
│   └── vsphere-client-serenity
│       ├── com.vmware.vicui.Vicui-0.8.0
│       │   ├── plugin-package.xml
│       │   ├── plugins
│       │   │   ├── vic-ui-service.jar
│       │   │   ├── vic-ui-war.war
│       │   │   └── vim25.jar
│       │   └── vc_extension_flags
│       └── com.vmware.vicui.Vicui-0.8.0.zip
├── vic-machine-darwin
├── vic-machine-linux
├── vic-machine-windows.exe
├── vic-ui-darwin
├── vic-ui-linux
└── vic-ui-windows.exe

7 directories, 25 files
root@LOBSANG:~/vic#
Now that we have the files ready to go, we can run the install command as detailed in the GitHub repository for VIC (here). We are going to use the Linux binary:
root@VBRPHOTON01 [ ~/vic ]# ./vic-machine-linux
NAME:
  vic-machine-linux - Create and manage Virtual Container Hosts

USAGE:
  vic-machine-linux [global options] command [command options] [arguments...]

VERSION:
  v0.8.0-7315-c8ac999

COMMANDS:
  create Deploy VCH
  delete Delete VCH and associated resources
  ls List VCHs
  inspect Inspect VCH
  version Show VIC version information
  debug Debug VCH

GLOBAL OPTIONS:
  --help, -h show help
  --version, -v print the version

root@VBRPHOTON01 [ ~/vic ]#
On all hosts in the cluster you are using, create a bridge network (it has to be on a vSphere Distributed Switch); mine is called vDS_VCH_Bridge. You will also need to open the ESXi firewall for VIC; in a lab the simplest (if blunt) approach is to disable the firewall on each host with 'esxcli network firewall set --enabled false'.
To install, we use the following command:
root@VBRPHOTON01 [ ~/vic ]# ./vic-machine-linux create --target 192.168.1.222 --image-store VBR_MGTESX01_Local_SSD_01 --name VBR-VCH-01 --user administrator@vsphere.local --password VMware1! --compute-resource VBR_Mgmt_Cluster --bridge-network vDS_VCH_Bridge --public-network vSS_Mgmt_Network --client-network vSS_Mgmt_Network --management-network vSS_Mgmt_Network --force --no-tlsverify
This is in my lab; I’m deploying to a vCenter with a single host, and don’t care about security. The output should look something like this:
INFO[2017-01-06T21:52:14Z] ### Installing VCH ####
WARN[2017-01-06T21:52:14Z] Using administrative user for VCH operation - use --ops-user to improve security (see -x for advanced help)
INFO[2017-01-06T21:52:14Z] Loaded server certificate VBR-VCH-01/server-cert.pem
WARN[2017-01-06T21:52:14Z] Configuring without TLS verify - certificate-based authentication disabled
INFO[2017-01-06T21:52:15Z] Validating supplied configuration
INFO[2017-01-06T21:52:15Z] vDS configuration OK on "vDS_VCH_Bridge"
INFO[2017-01-06T21:52:15Z] Firewall status: DISABLED on "/VBR_Datacenter/host/VBR_Mgmt_Cluster/vbrmgtesx01.virtualbrakeman.local"
WARN[2017-01-06T21:52:15Z] Firewall configuration will be incorrect if firewall is reenabled on hosts:
WARN[2017-01-06T21:52:15Z] "/VBR_Datacenter/host/VBR_Mgmt_Cluster/vbrmgtesx01.virtualbrakeman.local"
INFO[2017-01-06T21:52:15Z] Firewall must permit dst 2377/tcp outbound to VCH management interface if firewall is reenabled
INFO[2017-01-06T21:52:15Z] License check OK on hosts:
INFO[2017-01-06T21:52:15Z] "/VBR_Datacenter/host/VBR_Mgmt_Cluster/vbrmgtesx01.virtualbrakeman.local"
INFO[2017-01-06T21:52:15Z] DRS check OK on:
INFO[2017-01-06T21:52:15Z] "/VBR_Datacenter/host/VBR_Mgmt_Cluster/Resources"
INFO[2017-01-06T21:52:15Z]
INFO[2017-01-06T21:52:15Z] Creating virtual app "VBR-VCH-01"
INFO[2017-01-06T21:52:15Z] Creating appliance on target
INFO[2017-01-06T21:52:15Z] Network role "client" is sharing NIC with "public"
INFO[2017-01-06T21:52:15Z] Network role "management" is sharing NIC with "public"
INFO[2017-01-06T21:52:16Z] Uploading images for container
INFO[2017-01-06T21:52:16Z] "bootstrap.iso"
INFO[2017-01-06T21:52:16Z] "appliance.iso"
INFO[2017-01-06T21:52:22Z] Waiting for IP information
INFO[2017-01-06T21:52:35Z] Waiting for major appliance components to launch
INFO[2017-01-06T21:52:35Z] Checking VCH connectivity with vSphere target
INFO[2017-01-06T21:52:36Z] vSphere API Test: https://192.168.1.222 vSphere API target responds as expected
INFO[2017-01-06T21:52:38Z] Initialization of appliance successful
INFO[2017-01-06T21:52:38Z]
INFO[2017-01-06T21:52:38Z] VCH Admin Portal:
INFO[2017-01-06T21:52:38Z] https://192.168.1.46:2378
INFO[2017-01-06T21:52:38Z]
INFO[2017-01-06T21:52:38Z] Published ports can be reached at:
INFO[2017-01-06T21:52:38Z] 192.168.1.46
INFO[2017-01-06T21:52:38Z]
INFO[2017-01-06T21:52:38Z] Docker environment variables:
INFO[2017-01-06T21:52:38Z] DOCKER_HOST=192.168.1.46:2376
INFO[2017-01-06T21:52:38Z]
INFO[2017-01-06T21:52:38Z] Environment saved in VBR-VCH-01/VBR-VCH-01.env
INFO[2017-01-06T21:52:38Z]
INFO[2017-01-06T21:52:38Z] Connect to docker:
INFO[2017-01-06T21:52:38Z] docker -H 192.168.1.46:2376 --tls info
INFO[2017-01-06T21:52:38Z] Installer completed successfully
Now we can check the state of our remote VIC host with:
root@VBRPHOTON01 [ ~ ]# docker -H tcp://192.168.1.46:2376 --tls info
Containers: 0
 Running: 0
 Paused: 0
 Stopped: 0
Images: 0
Server Version: v0.8.0-7315-c8ac999
Storage Driver: vSphere Integrated Containers v0.8.0-7315-c8ac999 Backend Engine
VolumeStores:
vSphere Integrated Containers v0.8.0-7315-c8ac999 Backend Engine: RUNNING
 VCH mhz limit: 2419 Mhz
 VCH memory limit: 27.88 GiB
 VMware Product: VMware vCenter Server
 VMware OS: linux-x64
 VMware OS version: 6.5.0
Execution Driver: vSphere Integrated Containers v0.8.0-7315-c8ac999 Backend Engine
Plugins:
 Volume:
 Network: bridge
Operating System: linux-x64
OSType: linux-x64
Architecture: x86_64
CPUs: 2419
Total Memory: 27.88 GiB
Name: VBR-VCH-01
ID: vSphere Integrated Containers
Docker Root Dir:
Debug mode (client): false
Debug mode (server): false
Registry: registry-1.docker.io
root@VBRPHOTON01 [ ~ ]#
This shows us it's up and running. Now we can run our first container on here:
root@VBRPHOTON01 [ ~ ]# docker -H tcp://192.168.1.46:2376 --tls run hello-world
Unable to find image 'hello-world:latest' locally
Pulling from library/hello-world
c04b14da8d14: Pull complete
a3ed95caeb02: Pull complete
Digest: sha256:548e9719abe62684ac7f01eea38cb5b0cf467cfe67c58b83fe87ba96674a4cdd
Status: Downloaded newer image for library/hello-world:latest

Hello from Docker!
This message shows that your installation appears to be working correctly.

To generate this message, Docker took the following steps:
 1. The Docker client contacted the Docker daemon.
 2. The Docker daemon pulled the "hello-world" image from the Docker Hub.
 3. The Docker daemon created a new container from that image which runs the
  executable that produces the output you are currently reading.
 4. The Docker daemon streamed that output to the Docker client, which sent it
  to your terminal.

To try something more ambitious, you can run an Ubuntu container with:
 $ docker run -it ubuntu bash

Share images, automate workflows, and more with a free Docker Hub account:
 https://hub.docker.com

For more examples and ideas, visit:
 https://docs.docker.com/engine/userguide/

root@VBRPHOTON01 [ ~ ]#
We can see this under vSphere as follows:

[Screenshot: the VCH vApp and container VM in the vSphere inventory]

So our container host itself is a VM under a vApp, and all containers are spun up as VMs under the vApp. As we can see here, the container ‘VM’ is powered off. This can be seen further by running ‘docker ps’ against our remote host:
root@VBRPHOTON01 [ ~ ]# docker -H tcp://192.168.1.46:2376 --tls ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
root@VBRPHOTON01 [ ~ ]# docker -H tcp://192.168.1.46:2376 --tls ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
24598201e216 hello-world "/hello" 56 seconds ago Exited (0) 47 seconds ago silly_davinci
root@VBRPHOTON01 [ ~ ]# docker -H tcp://192.168.1.46:2376 --tls rm 24598201e216
24598201e216
root@VBRPHOTON01 [ ~ ]# docker -H tcp://192.168.1.46:2376 --tls ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
root@VBRPHOTON01 [ ~ ]#
This container is now tidied up in vSphere:
[Screenshot: the vApp after the container VM has been removed]
So now we have VIC installed and can spin up containers. In the next post we will install VMware Harbor and use that as our trusted registry.

AWS Certified Solutions Architect – Associate Exam Experience

I sat my AWS CSA Associate exam today, and I'm happy to say I passed with 78%. This post covers my previous experience with AWS, and the resources I used to study.

A bit of history about this certification; AWS currently have a total of five generally available certifications, and three in beta:

Associate level (the easiest):

  • AWS Certified Solutions Architect – Associate – aimed I guess at Solutions Architects (pre-sales or post-sales consultants), and covering most of the core services, with a focus on designing highly available infrastructures
  • AWS Certified Developer – Associate – aimed at people developing software solutions to run on top of AWS, this looks to cover most of the same ground, but aimed more around using APIs, using the services to build applications, and some of the tools available to help developers consume services more easily
  • AWS Certified SysOps Administrator – Associate – aimed at people administering AWS environments, covering a lot of the same ground as the other two associate level exams, but focussing more on the tools available to keep things running, and troubleshooting

Specialist level (the new ones – currently beta):

  • AWS Certified Big Data – Specialty – focussing on the Big Data type services available in AWS
  • AWS Certified Advanced Networking – Specialty – focussing on the networking concepts and services in AWS
  • AWS Certified Security – Specialty – focussing on security in AWS

Professional level (the hard ones):

  • AWS Certified Solutions Architect – Professional – focussed on designing and implementing highly available systems and applications on AWS
  • AWS Certified DevOps Engineer – Professional – focussed on developing and managing distributed applications on AWS

The associate level exam blueprints talk about having hands-on experience and time served with AWS; I haven't used AWS at work, so I basically had to start from scratch. Having a good understanding of how modern applications are architected, databases, Linux and Windows, and infrastructure in general will definitely help with getting your head around the concepts in play in AWS.

Free Tier Account:

The first thing you will need is a Free Tier account with AWS; this gives you 12 months of basically free services (though there are limits). You will need a credit card to sign up, but don't worry about spending a tonne of money: you can put billing notifications in place which will tell you if you are spending any cash, and as long as you shut stuff down when you are not using it, it won't cost you any money. Get your free tier account here. When you have it, spend as much time as possible playing with the services, checking out all the options, and reading about what they do; this is the easiest way to understand each service.

Some video training:

This is optional I guess, but if you’re new to AWS, as I was, then it should help focus you on the things you need to pass the exam. I used the following courses:

  • Pluralsight – Nigel Poulton (@nigelpoulton) – AWS VPC Operations – I watched this course probably a year ago or more, and while I didn’t use it directly when studying for my CSA Associate exam, it did give me a broad understanding of VPCs in AWS and how things tie together, definitely worth a watch
  • A Cloud Guru – this course is widely lauded by people on social media in my experience, and the guys have done an excellent job of the course, although I felt it didn’t cover some bits in as much detail as needed to pass the exam. This is available on their site (at a discount until 19/01/16), or (where I bought it from) on Udemy where I paid £19 (think it is £10 for January as well). I would definitely say this course is worth it anyway, but I would look at another one as well to complement the parts lacking in the ACG course.
  • Linux Academy – the instructor for this course was not quite as good as Ryan from ACG in my opinion, but the depth and breadth of the subject matter, hands on labs, and the practice exams make this worth looking at. To top it off, if you join the Microsoft Dev Essentials program (here) you can get 3 months access to both Linux Academy and Pluralsight for free!

Books:

There is an official study guide, but it's near enough 30 quid, and the reviews on Amazon were generally negative, so I avoided it.

Whitepapers etc:

The Exam Blueprint (here) lists a bunch of whitepapers and these were definitely worth swotting up on. In addition, the documentation for each technology should be read and understood: there really is a lot to learn here, and you need to know everything in reasonable detail in my opinion.

Other:

These flash cards were pretty good, but I think they're generally taken from the ACG practice exams anyway.

Note taking:

I have taken a lot of notes for each of the services; I used Evernote for this, which I found very useful for summarising the key fundamentals of each service. I will likely tidy these up now the exam is out of the way and publish them as a series of blog posts, to hopefully help other exam candidates.

The Exam itself:

I was seriously stressed going into this exam; the breadth and depth of information you need to take in is pretty daunting if you only give yourself a couple of weeks to learn it as I did, and while the exam was slightly easier than I thought it would be, I still found it tough. I would suggest giving yourself 4-6 weeks of preparation if you have no AWS experience.

One gripe I have is that although I am lucky enough to have a Kryterion exam centre only 5 miles away, they only seem to do the exams once a month, which doesn’t give much flexibility. Looks to be the same in Leeds as well, so hopefully this will improve in the future.

Glad I got it done anyway, and putting myself under intense time pressure seems to have paid off. Onwards and upwards to the Certified Developer Associate exam now. I would recommend the exam to anyone; AWS certification is in high demand at the moment, and getting a foot on the ladder will definitely help anyone in the IT industry over the next few years. I hope to go on and do as many of these exams as I can, because I genuinely find what AWS are doing, and the way the services are implemented, interesting.

 

vBrownBag Tech Talk – PowerCLI – Where to start?

I had the opportunity to do a ten-minute tech talk for the vBrownBag crew while at VMworld in Barcelona. I have tried to accelerate my involvement and contribution to the awesome community around virtualization in 2016, and while this has often taken me out of my comfort zone, it is continuing to boost my confidence, and is a great way of meeting new friends, and building soft skills.

So when the call for papers came up for the vBrownBag stage I signed up. The session was basically a cut down version of the VMUG presentation I did earlier this year in the North East England chapter, but with the non-PowerShell bits removed, and a few bits added. You can see the presentation below, and the slide set is available here on GitHub.

Thanks to Alastair and Chris for the opportunity to do this, and for the support on the day.

Incorrectly Reported Separated Network Partitions in VSAN Cluster

I’ve been playing around with VSAN, automating the build of a 3 node Management cluster using ESXi 6.0 Update 1. I came across an issue where I moved one of my hosts to another cluster and then back into the VSAN cluster; when it came back, it showed as a separate network partition and had a separate VSAN datastore.

The VSAN Disk Management page under my cluster in the Web Client showed that the Network Partition Group was different for this host to my other two hosts, despite the network being absolutely fine.

Turned out that the host had not rejoined the VSAN cluster, but had created its own 1-node cluster. I resolved this by running the following commands:

On the partitioned host:

esxcli vsan cluster get

Cluster Information
   Enabled: true
   Current Local Time: 2016-09-21T10:23:35Z
   Local Node UUID: 57e0040c-83a9-add9-ec1f-0cc47ab46218
   Local Node Type: NORMAL
   Local Node State: MASTER
   Local Node Health State: HEALTHY
   Sub-Cluster Master UUID: 57e0040c-83a9-add9-ec1f-0cc47ab46218
   Sub-Cluster Backup UUID:
   Sub-Cluster UUID: 3451e257-cedd-8772-4b31-0cc47ab460e8
   Sub-Cluster Membership Entry Revision: 0
   Sub-Cluster Member Count: 1
   Sub-Cluster Member UUIDs: 57e0040c-83a9-add9-ec1f-0cc47ab46218
   Sub-Cluster Membership UUID: 9c5fe257-e053-7716-ca0a-0cc47ab46218

This shows the host in a single-node cluster.

On a surviving host:

esxcli vsan cluster get

Cluster Information
   Enabled: true
   Current Local Time: 2016-09-21T11:14:55Z
   Local Node UUID: 57e006b6-71ab-c8f6-7d1d-0cc47ab460e8
   Local Node Type: NORMAL
   Local Node State: MASTER
   Local Node Health State: HEALTHY
   Sub-Cluster Master UUID: 57e006b6-71ab-c8f6-7d1d-0cc47ab460e8
   Sub-Cluster Backup UUID: 57e0f22f-3071-fe1a-fd8e-0cc47ab460ec
   Sub-Cluster UUID: 57e0040c-83a9-add9-ec1f-0cc47ab46218
   Sub-Cluster Membership Entry Revision: 0
   Sub-Cluster Member Count: 2
   Sub-Cluster Member UUIDs: 57e0f22f-3071-fe1a-fd8e-0cc47ab460ec, 57e006b6-71ab-c8f6-7d1d-0cc47ab460e8
   Sub-Cluster Membership UUID: 3451e257-cedd-8772-4b31-0cc47ab460e8

This showed me there were only two nodes in the cluster; we will use the Sub-Cluster UUID from here in a moment.
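If you are checking several hosts, the Sub-Cluster UUID (or any other field) can be pulled out of the 'esxcli vsan cluster get' output programmatically rather than by eye. A minimal Python sketch of that parsing, with the sample text trimmed from the output above:

```python
import re

def get_vsan_field(cluster_get_output, field):
    """Extract a single field value (e.g. 'Sub-Cluster UUID') from
    the text output of 'esxcli vsan cluster get'."""
    match = re.search(r"^\s*" + re.escape(field) + r":\s*(\S+)",
                      cluster_get_output, re.MULTILINE)
    return match.group(1) if match else None

# Sample trimmed from the surviving host's output above
sample = """Cluster Information
   Enabled: true
   Sub-Cluster UUID: 57e0040c-83a9-add9-ec1f-0cc47ab46218
   Sub-Cluster Member Count: 2"""

print(get_vsan_field(sample, "Sub-Cluster UUID"))
# 57e0040c-83a9-add9-ec1f-0cc47ab46218
```

Feed it the captured output from each host and compare the returned UUIDs to spot a partitioned node.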

On the partitioned host:

esxcli vsan cluster leave

esxcli vsan cluster join -u 57e0040c-83a9-add9-ec1f-0cc47ab46218

esxcli vsan cluster get

Cluster Information
   Enabled: true
   Current Local Time: 2016-09-21T10:24:26Z
   Local Node UUID: 57e0040c-83a9-add9-ec1f-0cc47ab46218
   Local Node Type: NORMAL
   Local Node State: AGENT
   Local Node Health State: HEALTHY
   Sub-Cluster Master UUID: 57e006b6-71ab-c8f6-7d1d-0cc47ab460e8
   Sub-Cluster Backup UUID: 57e0f22f-3071-fe1a-fd8e-0cc47ab460ec
   Sub-Cluster UUID: 57e0040c-83a9-add9-ec1f-0cc47ab46218
   Sub-Cluster Membership Entry Revision: 1
   Sub-Cluster Member Count: 3
   Sub-Cluster Member UUIDs: 57e0f22f-3071-fe1a-fd8e-0cc47ab460ec, 57e006b6-71ab-c8f6-7d1d-0cc47ab460e8, 57e0040c-83a9-add9-ec1f-0cc47ab46218
   Sub-Cluster Membership UUID: 3451e257-cedd-8772-4b31-0cc47ab460e8

Now we see all three nodes back in the cluster. The data will take some time to rebuild on this node, but once done, the VSAN health check should show as Healthy, and there should be a single VSAN datastore spanning all hosts.

vRealize Orchestrator and Site Recovery Manager – The Missing Parts (or how to hack SOAP APIs to get what you want)

vRealize Orchestrator (vRO) forms the backbone of vRealize Automation (vRA), and provides the XaaS (Anything-as-a-Service) functionality for this product. vRO has plugins for a number of technologies; both those made by VMware, and those which are not. Having been using vRO to automate various products for the last 6 months or so, I have found that these plugins have varying degrees of quality, and some cover more functionality of the underlying product than others.

Over the last couple of weeks, I have been looking at the Site Recovery Manager (SRM) plugin (specifically version 6.1.1, in association with vRO 7.0.1, and SRM 6.1), and while this provides some of the basic functionality of SRM, it is missing some key features which I needed to expose in order to provide full-featured vRA catalog services. Specifically, the plugin documentation lists the following as being missing:

  • You cannot create, edit, or delete recovery plans.
  • You cannot add or remove test network mapping to a recovery plan.
  • You cannot rescan storage to discover newly added replicated devices.
  • You cannot delete folder, network, and resource pool mappings.
  • You cannot delete protection groups.
  • The unassociateVms and unprotectVms methods are not available in the plug-in. You can use them through the Site Recovery Manager public API.

Some of these are annoying, but the last ones, around removing VMs from Protection Groups, are pretty crucial for the catalog services I was looking to put together. I had to find another way to do this task, outside of the hamstrung plugin.

I dug out the SRM API Developers Guide (available here) and had a read through it, but whilst it describes the API in terms of Java and C# access, it wasn’t particularly useful in helping me use vRO’s JavaScript-based programming to do what I needed. So I needed another way, one which utilised the native SOAP API presented by SRM.

Another issue I saw when using the vRO SRM plugin was that when trying to add a second SRM server (the Recovery site), the plugin fell apart. It seems that the general idea is you only automate your Protected site with this plugin, and not both sites through a single vRO instance.

I tried adding a SOAP host to vRO using the ‘Add a SOAP host’ workflow, but even after adding the WSDL available on the SRM API interface, this was still not particularly friendly, so this didn’t help too much.

Using PowerCLI, we can do some useful things using the SRM API, see this post, and this GitHub repo, for some help with doing this. Our general approach to using vRO is to avoid using a PowerShell host, as this adds a bunch of complexity around adding a host, and generally we would rather do things using REST hosts with pure JavaScript code. So we need a way to figure out how to use this undocumented SOAP API to do stuff.

Now before we go on, I appreciate that the API is subject to change, and that by using the following method to do what we need to do, the methods of automation may change in a future version of SRM. As you will see, this is a fairly simple method of getting what you need, and it should be easy enough to refactor the payloads we are using if and when the API changes. In addition to this, this method should work for any kind of SOAP or REST based API which you can access through .NET type objects in PowerShell.

So the first thing we need to do is install Fiddler. This was the easiest tool I found to get what I wanted; there are probably other products about, but I found and liked this one. Fiddler is a web debugging tool which I imagine a lot of web developers are familiar with; it can be obtained here. What I like about it is the simplicity it gives in setting up a man-in-the-middle (MitM) proxy to pull the detail of what is going on. This is particularly useful with PowerShell, because your client machine is the endpoint, so the proxy injection is straightforward without too much messing about.

NOTE: Because this is doing MitM attacks on the traffic, it is decrypting data that would otherwise be protected by HTTPS, so only use this approach against lab or test systems.

I’m not going to go into installing Fiddler here; it’s a standard Windows wizard. Once installed, launch the program and you should see something like this:

[Screenshot: Fiddler after launch]

If you click in the bottom right, next to ‘All Processes’, you will see it change to ‘Capturing’:

[Screenshot: Fiddler with capturing enabled]

We are now ready to start capturing some API calls, so open PowerShell. To limit the amount of junk traffic we capture, we can set Fiddler to keep only a certain number of sessions (in this case I set it to 1000), and target the process to capture from (by dragging the ‘Any Process’ button onto our PowerShell window).

[Screenshots: limiting the session count, and targeting the PowerShell process]

Run the following to connect to vCenter:

# $vcenter is the name or IP of your vCenter server
$vcenter = "vcenter.example.local"
Import-Module VMware.VimAutomation.Core
Connect-VIServer -Server $vcenter -Credential (Get-Credential)

[Screenshot: connecting to vCenter in PowerShell]

You should see some captures appearing in the Fiddler window, we can ignore these for now as it’s just connections to the vCenter server:

You can inspect this traffic in any case, by selecting a session, and selecting the ‘Raw’ tab in the right hand pane:

[Screenshot: raw view of a captured session]

Here we can see the URI (https://<redacted>/sdk), the SOAP method (POST), the Headers (User-Agent, Content-Type, SOAPAction, Host, Cookie etc), and the body (<?xml version….), this shows us exactly what the PowerShell client is doing to talk to the API.

Now we can connect to our local and remote SRM sites using the following command:

$srm = connect-srmserver -RemoteCredential (Get-Credential -Message 'Remote Site Credential') -Credential (Get-Credential -Message 'Local Site Credential')

If you examine the sessions in your Fiddler window now, you should see a session which looks like this:

[Screenshot: the Connect-SrmServer session in Fiddler]

This shows the URI as our SRM server, on HTTPS port 9086, with suffix ‘/vcdr/extapi/sdk’; this is the URI we use for all the SRM SOAP calls. It also shows the body we use (which contains usernames and passwords for both sites), and the response, with a ‘Set-Cookie’ header containing a session ticket. This session ticket will be added as a header to each of our following calls to the SOAP API.
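One detail worth knowing: a Set-Cookie header usually carries attributes (Path, Secure, and so on) after the first ‘;’, and only the name=value pair needs replaying in the ‘Cookie’ request header on later calls. A tiny Python illustration of that trimming (the cookie value here is made up):

```python
def session_cookie(set_cookie_header):
    """Keep only the name=value pair from a Set-Cookie header,
    ready to send back in a 'Cookie' request header; attributes
    such as Path or Secure are not sent back to the server."""
    return set_cookie_header.split(";")[0].strip()

# Hypothetical header, shaped like the one SRM returns
header = 'vmware_soap_session="d8ba0e7d"; Path=/; Secure'
print(session_cookie(header))
# vmware_soap_session="d8ba0e7d"
```

In practice sending the whole raw header value often works too, but trimming it is the safer habit.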

Let’s try and do something with the API through PowerShell now, and see what the response looks like. Run the following in your PowerShell window:

$srmApi = $srm.ExtensionData
$protectionGroups = $srmApi.Protection.ListProtectionGroups()

This session will show us the following:

[Screenshot: the ListProtectionGroups call in Fiddler]

Here we can see the URI is the same as earlier, and that there is a header named ‘Cookie’ with the value ‘vmware_soap_session="d8ba0e7de00ae1831b253341685201b2f3b29a66"’, which ties in with the cookie returned by the last call. The call has returned some ManagedObjectReference (MoRef) names, ‘srm-vm-protection-group-1172’ and ‘srm-vm-protection-group-1823’, which represent our Protection Groups. This is great, but how do we tie these back to the Protection Group names we set in SRM? Run the following commands in our PowerShell window, and look at the output:

foreach ($pg in $protectionGroups) {
    Write-Output $($pg.MoRef.Value + " is equal to " + $pg.GetInfo().Name)
}

The responses in Fiddler look like this:

[Screenshot: the GetInfo calls and responses in Fiddler]

This shows us a query being sent, with the Protection Group MoRef, and the returned Protection Group name.

We can repeat this process for any of the methods available through the SRM API exposed in PowerCLI, and build up a list of the bodies we have for querying, and retrieving data, and use this to build up a library of actions. As an example we have the following methods already:

Query for Protection Groups:

<?xml version="1.0" encoding="utf-8"?><soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema"><soap:Body><ListProtectionGroups xmlns="urn:srm0"><_this type="SrmProtection">SrmProtection</_this></ListProtectionGroups></soap:Body></soap:Envelope>

Get the name of a Protection Group from its MoRef:

<?xml version="1.0" encoding="utf-8"?><soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema"><soap:Body><GetInfo xmlns="urn:srm0"><_this type="SrmProtectionGroup">MOREFNAME</_this></GetInfo></soap:Body></soap:Envelope>
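Before wiring these payloads into vRO, it can be handy to exercise them from a scratch script. The sketch below is illustrative Python with the HTTP call itself omitted: build_getinfo_body templates the GetInfo envelope above, and parse_morefs pulls the ‘returnval’ MoRefs out of a ListProtectionGroups response (the sample response is hand-made, shaped like the Fiddler capture):

```python
import xml.etree.ElementTree as ET

SOAP_NS = "http://schemas.xmlsoap.org/soap/envelope/"

def build_getinfo_body(moref):
    """Template the GetInfo envelope captured through Fiddler above."""
    return ('<?xml version="1.0" encoding="utf-8"?>'
            '<soap:Envelope xmlns:soap="' + SOAP_NS + '">'
            '<soap:Body><GetInfo xmlns="urn:srm0">'
            '<_this type="SrmProtectionGroup">' + moref + '</_this>'
            '</GetInfo></soap:Body></soap:Envelope>')

def parse_morefs(list_response):
    """Pull the 'returnval' MoRef values out of a ListProtectionGroups
    response, ignoring namespaces."""
    root = ET.fromstring(list_response)
    return [el.text for el in root.iter() if el.tag.endswith("returnval")]

# Hand-made sample response, shaped like the one seen in Fiddler
sample = ('<soap:Envelope xmlns:soap="' + SOAP_NS + '"><soap:Body>'
          '<ListProtectionGroupsResponse xmlns="urn:srm0">'
          '<returnval type="SrmProtectionGroup">srm-vm-protection-group-1172</returnval>'
          '<returnval type="SrmProtectionGroup">srm-vm-protection-group-1823</returnval>'
          '</ListProtectionGroupsResponse></soap:Body></soap:Envelope>')

print(parse_morefs(sample))
# ['srm-vm-protection-group-1172', 'srm-vm-protection-group-1823']
```

The same pattern (template the envelope, POST it with the session cookie, parse the response) is what the vRO actions below do in JavaScript.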

So how do we take these, and turn them into actions in vRO? We first need to add a REST host to vRO using the ‘Add a REST host’ built-in workflow, pointing to ‘https://<SRM_Server_IP>:9086’, and then write actions to make calls against it. There is more detail on doing this around on the web; this site has a good example. For the authentication method we can do:

// let's set up our variables first; these could be passed in through parameters
// on the action, which would make more sense, but keep it simple for now
var localUsername = "administrator@vsphere.local";
var localPassword = "VMware1!";
var remoteUsername = "administrator@vsphere.local";
var remotePassword = "VMware1!";

// We need our XML body to send
var content = '<?xml version="1.0" encoding="utf-8"?><soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema"><soap:Body><SrmLoginSites xmlns="urn:srm0"><_this type="SrmServiceInstance">SrmServiceInstance</_this><username>'+localUsername+'</username><password>'+localPassword+'</password><remoteUsername>'+remoteUsername+'</remoteUsername><remotePassword>'+remotePassword+'</remotePassword></SrmLoginSites></soap:Body></soap:Envelope>';

// create the session request (RestHost is the REST host we added to vRO)
var SessionRequest = RestHost.createRequest("POST", "/vcdr/extapi/sdk", content);

// set the headers we saw on the request through Fiddler
SessionRequest.setHeader("SOAPAction", "urn:srm0/4.0");
SessionRequest.setHeader("Content-Type", "text/xml; charset=utf-8");
var SessionResponse = SessionRequest.execute();

// show the content
System.log("Session Response: " + SessionResponse.contentAsString);

// get the headers, and just the one we want
var responseHeaders = SessionResponse.getAllHeaders();
var token = responseHeaders.get("Set-Cookie");

// log the token we got
System.log("Token: " + token);

// return our token
return token;

This will return us the token we can use for doing calls against the API. Now how do we use that to return a list of Protection Groups:

// We need our XML body to send, this just queries for the Protection Group MoRefs

var content = '<?xml version="1.0" encoding="utf-8"?><soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema"><soap:Body><ListProtectionGroups xmlns="urn:srm0"><_this type="SrmProtection">SrmProtection</_this></ListProtectionGroups></soap:Body></soap:Envelope>';

 

// create the session request

var SessionRequest = RestHost.createRequest("POST", "/vcdr/extapi/sdk", content);

// set the headers we saw on the request through Fiddler

SessionRequest.setHeader("SOAPAction","urn:srm0/4.0");

SessionRequest.setHeader("Content-Type","text/xml; charset=utf-8");

SessionRequest.setHeader("Cookie",token);

var SessionResponse = SessionRequest.execute();

 

// show the content

System.log("Session Response: " + SessionResponse.contentAsString);

 

// take the response and turn it into a string

var XmlContent = SessionResponse.contentAsString;

 

// lets get the Protection Group MoRefs from the response

var PGMoRefs = XmlContent.getElementsByTagName("returnval");

 

// declare an array of Protection Groups to return

var returnedPGs = [];

 

// iterate through each Protection Group MoRef

for each (var index=0; index<PGMoRefs.getLength(); index++) {

// extract the actual MoRef value

var thisMoRef = PGMoRefs.item(index).textContent;

// and insert it into the body of the new call

var content = '<?xml version="1.0" encoding="utf-8"?><soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema"><soap:Body><GetInfo xmlns="urn:srm0"><_this type="SrmProtectionGroup">'+thisMoRef+'</_this></GetInfo></soap:Body></soap:Envelope>';

// do another call to the API to get the Protection Group name

SessionRequest = RestHost.createRequest("POST", "/vcdr/extapi/sdk", content);

SessionRequest.setHeader("SOAPAction","urn:srm0/4.0");

SessionRequest.setHeader("Content-Type","text/xml; charset=utf-8");

SessionRequest.setHeader("Cookie",token);

SessionResponse = SessionRequest.execute();

XmlContent = XMLManager.fromString(SessionResponse.contentAsString);

returnedPGs.push(XmlContent.getElementsByTagName("name").item(0).textContent);

}

 

// return the array of Protection Group names

return returnedPGs;

Through building actions like this, we can build up a library of direct API calls. This should be a good starting point for building your own vRO libraries to interact with SRM via the API rather than the plugin. As mentioned earlier, with Fiddler (or something like it) you should be able to capture anything being done through PowerShell, and I have even had some success capturing browser clicks this way, depending on how the web interface is configured. This approach certainly made building some SRM integration through vRO less painful than trying to use the plugin.
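As a sketch of that library idea, the SOAP envelope boilerplate repeated in the action above could be factored into a small helper. This is plain JavaScript (in vRO it would live in its own action), and the function name and the `protection-group-1001` MoRef below are my own illustrative placeholders, not anything from the SRM API:

```javascript
// Build an SRM SOAP envelope for a given method and managed object reference.
// Mirrors the envelope structure used in the calls above.
function buildSrmEnvelope(method, moType, moValue) {
  return '<?xml version="1.0" encoding="utf-8"?>' +
    '<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/"' +
    ' xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"' +
    ' xmlns:xsd="http://www.w3.org/2001/XMLSchema">' +
    '<soap:Body>' +
    '<' + method + ' xmlns="urn:srm0">' +
    '<_this type="' + moType + '">' + moValue + '</_this>' +
    '</' + method + '>' +
    '</soap:Body></soap:Envelope>';
}

// The two request bodies from the action above become one-liners:
var listBody = buildSrmEnvelope('ListProtectionGroups', 'SrmProtection', 'SrmProtection');
var infoBody = buildSrmEnvelope('GetInfo', 'SrmProtectionGroup', 'protection-group-1001');
```

Each new SRM method you capture through Fiddler then only needs its method name and managed object type dropped into the helper, rather than another hand-built envelope string.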

Slipstreaming VMXNET3 drivers into Windows builds

 

A question came up on Twitter this week about pre-loading drivers into Windows builds so that future devices are ready to use. I have been doing this for a while, but I guess I take it for granted. When I responded I was prompted to blog it, which totally makes sense, so here goes.

In my case we are slipstreaming the VMXNET3 driver, which you need if you want to provision Windows Server VMs on vCenter with the VMXNET3 network adapter. That adapter is the one to use unless you are happy to limit your Windows VMs to the 1 Gbps available to the E1000 adapter type.

Here we are doing it using the autoUnattend.xml file, which I use on a virtual floppy disk to automate Windows builds, but the same approach applies however you build your servers and install drivers.

So the following command sits near the end of my autoUnattend.xml file, with the commands that run at first logon:

[Screenshot: the pnputil command in autoUnattend.xml]

This uses pnputil.exe, a utility built into Windows for installing drivers, and points it at the .inf file sitting on the virtual floppy drive, in a folder containing the files below:
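For reference, this is a rough sketch of what such a FirstLogonCommands entry could look like in autoUnattend.xml. The folder and .inf file name (A:\vmxnet3\vmxnet3ndis6.inf) are placeholders for wherever your extracted driver files live, not the exact command from my file:

```xml
<FirstLogonCommands>
  <SynchronousCommand wcm:action="add">
    <Order>1</Order>
    <Description>Install VMXNET3 driver</Description>
    <CommandLine>pnputil.exe -i -a A:\vmxnet3\vmxnet3ndis6.inf</CommandLine>
  </SynchronousCommand>
</FirstLogonCommands>
```

The -a switch adds the driver package to the driver store and -i installs it, so the device is ready as soon as a VMXNET3 adapter is attached.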

[Screenshot: the VMXNET3 driver files in the floppy folder]

As a heads up, I got these files from a VM which already had VMware Tools installed, from the VMXNET3-specific version folder under 'C:\Windows\System32\DriverStore\FileRepository'.

Hopefully this driver will one day be included with Windows, though I am not sure that is something Microsoft wants to do; until then, this is how I handle it. I don't think I figured this out entirely on my own, but I have been doing it this way for a while, so apologies for not referencing whoever's blog I originally poached it from. If you are using SCCM or similar, you could apply the same method with the command line tool above (and with any driver, not just VMXNET3).

Unable to see identity providers in vRA 7.0.x

I have seen a weird issue, which seems to have arrived in vRA 7.0.1, to do with roles and authorization. In my environment I have delegated the Tenant Administrator role to an Active Directory group named 'vRA-TenantAdmins', of which my user account is a member. This shows when I look at my user account under 'Users and Groups' (the square, rather than a tick, indicates the permission is implicit):

[Screenshot: implicit Tenant Administrator role shown in Users and Groups]

Now, I can do most of what a Tenant Administrator should be able to do, with some weird exceptions. For example, when I try to see which directories have been added to vIDM, the interface just hangs refreshing the list of directories:

[Screenshot: directories list hanging on refresh]

And the same when I look at identity providers:

[Screenshot: identity providers list hanging on refresh]

And I can’t do login screen branding (although header and footer branding works fine!):

[Screenshot: login screen branding failing to load]

I smashed my face off this problem for a few hours, but it turns out the fix is fairly simple (although it should be unnecessary). If I go back to my account under 'Users and Groups' and add it explicitly to the 'Tenant Administrator' role, the functionality all mysteriously works:

[Screenshot: explicit Tenant Administrator role assignment]

This is pretty annoying, as I want to do Role Based Access Control (RBAC), using Active Directory groups to control access for user accounts. Hopefully this will be fixed in the next release of vRealize Automation, and I hope this post helps anyone seeing the same obscure behaviour I did.