Becoming a Briklayer

This month saw me start a new chapter in my career, as a DevOps Sales Engineer at Rubrik. This is an exciting leap for me, moving from the channel into vendorland, and to a company that is changing the market in data management.

Rubrik’s approach to product development, being API first and rooted in modern distributed systems design, is evident when using their products, and this was hugely attractive to me as a consumer. The API has truly full coverage, and is simple, consistent, and well documented. Plenty of software and hardware companies promote themselves as modern and API first, but using Rubrik in a POC I did earlier this year was truly a breath of fresh air.

This is my first foray into both a sales organisation, and to a vendor, but my feeling is that it will open my mind to the wider IT industry, and give me some great opportunities to not only see how the sausage is made, but also how to sell sausages (as a vegetarian, I should note that the sausages in this analogy are made from a soya derivative).

The last few years for me have been a fast-paced journey through systems engineering and automation, product development, and learning and implementing modern software engineering and delivery processes, both on-premises and in the cloud. These skills should come in very useful in my new role, helping customers and potential customers to realise the possibilities that a platform enabling end-to-end systems automation can deliver, and spreading the good word.

You can see more of what Rubrik are doing from their appearance at Cloud Field Day 2 a couple of weeks ago:

There is also a raft of the kind of DevOps things my new team work on available at the following GitHub repositories:


Goodbye to the Past

This week, in the lead up to this year’s VMworld US, VMware have announced the impending death of both vCenter on Windows, and the vSphere Web Client.

Each of these will be deprecated in the next major version of vSphere (assuming 7.0), and removed in the next major version after that (assuming 8.0). Quite when we will see these releases is still a matter of speculation, but with VMworld this week it is possible 7.0 will be slated for release in Winter 2017.

vCenter Server Appliance vs Windows

The vCenter Server Appliance, or vCSA, has been around since vSphere 5.0, and over time has become more mature, more stable, and has moved from VMware’s licensed version of SUSE Linux, to the modern and lightweight Photon OS.

These changes have turned the appliance into a performant and reliable powerhouse; important facets for what is obviously a critical and central part of any modern vSphere datacenter.

Since vSphere 6.0, the vCSA has had at least feature parity with vCenter installed on Windows, and the added simplicity of installation (not having to install VUM separately, not having to install an MSSQL database, and not having yet another Windows instance to stroke) makes the use of the vCSA over Windows a no-brainer.

As things have moved on, the migration tools too have become more and more mature, meaning that migration to vCSA 6.5 Update 1 is a really straightforward affair from most starting points.

The death of flash – long live HTML5

The vSphere Web Client has been a contentious tool since its introduction in vSphere 5.0; often unstable and poorly performing, it has been the only way to access newer features of vSphere such as vSAN and SPBM, as well as newer versions of vSphere Update Manager and Site Recovery Manager.

Added to this are the raft of security issues facing Flash, and its general fall from grace over the last 10 years. Adobe have now decreed that Flash will die in 2020, which is great news for desktop browsers, mobile devices, and the internet in general.

VMware responded to customer feedback on the vSphere Web Client (or Flex Client) by releasing the HTML5 Web Client under their Flings program back in March 2016, and shipped vSphere 6.5 with this installed alongside the Flex client.

While the HTML5 client still does not have feature parity at this time, the rate at which it is catching up is impressive, and we can reasonably expect it to match, and possibly overtake, the Flex client in terms of features in the next 6-12 months.


The removal of both of these tools is, for me at least, great news. I have had no desire to run a Windows vCenter since probably 5.5, which was four years ago now. Likewise, the vSphere Web Client has been a necessary evil since the same time, especially when it came to using third party integrations and newer features.

Hopefully the removal of these legacy chains will mean that development is accelerated as engineering resource is freed up. It will put some burden on the engineering teams of VMware ecosystem partners, who will need to release new versions of their plugins, if they haven’t already, utilising the gorgeous HTML5 Clarity interface, but those partners should already be moving in this direction anyway.

Replacing the ‘All Services’ Icon in vRealize Automation

I had a conversation with Ricky El-Qasem (@rickyelqasem) on Twitter this week about the ‘All Services’ logo in vRealize Automation, and whether this could be replaced programmatically.

For those who don’t know the pain of this particular element of vRA: when browsing the service catalog, groups of services are listed down the left-hand side of the page with icons next to them:

[Screenshot: service catalog with group icons listed down the left-hand side]

These can all be changed, but until recently the top icon would remain a blue Lego brick, which can make the otherwise slick portal look unsightly. This is shown in the image below:

[Screenshot: service catalog showing the default blue brick ‘All Services’ icon]

Now luckily, from vRA 7.1, this has been replaceable through the API, and the steps have been documented in the accompanying guide here. This uses the REST API, and means you need to convert the PNG image into Base-64 encoding in order to push it to the API, which is a little too manual for me!

So I quickly threw vRA 7.2 up in my home lab and got to work. I chose to script it using Python because I found that I could easily convert the image to Base-64, and I knew I could make the REST calls using the excellent ‘requests’ Python package (info available here). The code I used is available on my GitHub, and is shown below. I also created a script to delete the custom icon and return things to a vanilla state, you know, just in case 😉

Anyway, I hope this is useful for people who want to quickly and easily replace the icon.

#!/usr/bin/env python
# required packages, install with pip if not present
import base64
import json
import requests
# disable self-signed cert warnings
requests.packages.urllib3.disable_warnings()
# replace these variables
filename = 'service.png'
vra_ip = ''
vra_user = 'administrator@vsphere.local'
vra_pass = 'VMware1!'
vra_tenant = 'vsphere.local'
# don't replace anything from here
# open the file and encode it in Base-64
with open("./" + filename, "rb") as f:
    encoded = base64.b64encode("ascii")
# get our authorization token
uri = 'https://' + vra_ip + '/identity/api/tokens'
headers = {'Accept': 'application/json', 'Content-Type': 'application/json'}
payload = json.dumps({"username": vra_user, "password": vra_pass, "tenant": vra_tenant})
r =, headers=headers, verify=False, data=payload)
token = 'Bearer ' + str(json.loads(r.text)["id"])
# send the new icon to the API
uri = 'https://' + vra_ip + '/catalog-service/api/icons'
headers = {'Accept': 'application/json', 'Content-Type': 'application/json', 'Authorization': token}
payload = json.dumps({"id": "cafe_default_icon_genericAllServices", "fileName": filename,
                      "contentType": "image/png", "image": encoded})
r =, headers=headers, verify=False, data=payload)
if r.status_code == 201:
    print("Replacement successful")
    print("Expected return code 201, got " + str(r.status_code) + ", something went wrong")

Slipstreaming VMXNET3 drivers into Windows builds


A question came up on Twitter this week about pre-loading drivers into Windows builds so that future devices are ready to use. I have been doing this for a while, but take it for granted I guess. While responding I was prompted to blog this, which totally makes sense, so here goes.

In my case we are slipstreaming the VMXNET3 driver, which is needed if you want to provision Windows Server VMs on vCenter with the VMXNET3 network adapter. That adapter is what you want if you don’t wish to limit your Windows VMs to the 1Gbps available to the E1000 adapter type.

Here we are doing it using the autoUnattend.xml file, which I use on a virtual floppy disk to automate Windows builds, but this could apply to however you build your servers and want to install drivers.

So the following command is in my autoUnattend.xml file, near the end, where we do commands to run at first logon:


This uses pnputil.exe, a built-in Windows utility for installing drivers, and points it at the .inf file sitting in a folder on the virtual floppy drive along with the rest of the driver files:


As a heads up, I got these files from a VM which already had VMware Tools installed, from the VMXNET3 specific version folder in ‘C:\Windows\System32\DriverStore\FileRepository’.
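Putting it together, the FirstLogonCommands entry in autoUnattend.xml is shaped something like the sketch below; the Order value and the .inf filename here are illustrative placeholders, not the exact values from my file, so substitute the filename you find in the driver folder.

```xml
<!-- Hedged sketch of a FirstLogonCommands entry; Order and the .inf
     filename are illustrative placeholders -->
<FirstLogonCommands>
  <SynchronousCommand wcm:action="add">
    <Order>1</Order>
    <Description>Install VMXNET3 driver</Description>
    <CommandLine>pnputil.exe -i -a a:\vmxnet3\vmxnet3.inf</CommandLine>
  </SynchronousCommand>
</FirstLogonCommands>
```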

Hopefully one day this driver will be included with Windows, though I’m not sure whether that is something Microsoft want to do; if not, then this is the way I do it for now. I don’t think I entirely figured this out on my own, but I have been doing it this way for a while, so apologies for not referencing whichever blog I poached it from. If you are using SCCM or similar, you could use the same command-line tool in your own build process (and with any driver, not just VMXNET3).

vSphere HTML5 Client Fling Deployment Script

So yesterday, VMware released the HTML5 vSphere Client as a Fling, which is available for download here. I have put together a PowerShell script to deploy it to your vSphere environment.

It seems unusual for this to take the form of an OVA, but at least this means that it does not touch your existing vCenter, so should be deployable with less apprehension.

The client itself is issued an IP address from an IP Pool, and is therefore accessed on a different IP address to vCenter. Deployment of the OVA is pretty straightforward, and instructions for setup and use are in the link above.

There are already a tonne of posts around the features present, and not present, in the vSphere HTML5 Client, so I am not going to go over that here; suffice it to say, it is a Fling for a reason.

This script (at first release) assumes that a valid, enabled IP Pool already exists in vCenter for the IP you allocate to the VM, I will add functionality in the next release to create an IP Pool if one is not already present.

Other than that, you should just need to replace the variables at the top of the script to use it for deployment. The script is available on my GitHub repository at this link.


PowerShell – Could not create SSL/TLS secure channel

I have spent a considerable amount of time in my life battling with the above error message when running PowerShell scripts. The long and short of it is that this can be caused by a few things, but most of the time I have experienced it, the reason is that the endpoint you are trying to connect to is using a self-signed certificate, which causes the Invoke-WebRequest and Invoke-RestMethod commands to throw an error stating:

The underlying connection was closed: Could not establish trust relationship for the SSL/TLS secure channel.

If you hit this, you will know as your web request via standard REST methods will simply refuse to give you anything back.

I had a bunch of scripts written to do automation of the configuration of vRealize Orchestrator, and vRealize Automation 7.0, and these had been heavily tested, and confirmed as working. The way of avoiding the above error is to use the following PowerShell function:

function Ignore-SelfSignedCerts {
    try {
        Write-Host "Adding TrustAllCertsPolicy type." -ForegroundColor White
        Add-Type -TypeDefinition @"
using System.Net;
using System.Security.Cryptography.X509Certificates;
public class TrustAllCertsPolicy : ICertificatePolicy
    public bool CheckValidationResult(
        ServicePoint srvPoint, X509Certificate certificate,
        WebRequest request, int certificateProblem)
        return true;
        Write-Host "TrustAllCertsPolicy type added." -ForegroundColor White
    catch {
        Write-Host $_ -ForegroundColor "Yellow"
    [System.Net.ServicePointManager]::CertificatePolicy = New-Object TrustAllCertsPolicy

So not a great start to my Sunday when I found that my scripts no longer worked after a fresh install of the recently released vRealize Orchestrator and vRealize Automation 7.0.1.

After much messing about, I worked out the cause: SSLv3 and TLSv1.0 are both disabled in the new releases, so we need to either:

a) Enable SSLv3 or TLSv1.0 – probably not the best idea, these have been disabled due to the growing number of security risks in these protocols, and will (presumably) continue to be disabled for every new version of the products going forward

b) Change the way we issue requests, to use TLSv1.2 – this is the way to do it in my opinion, and the code to do this is a simple one-liner:

[System.Net.ServicePointManager]::SecurityProtocol = [System.Net.SecurityProtocolType]::Tls12;

So if you hit this problem (and if you are a PowerShell scripter interacting with REST APIs, you probably will!), this is how to fix the issue.

FlexPod and UCS – where are we now?

I have been working on FlexPod for about a year now, and on UCS for a couple of years; in my opinion it’s great tech that takes a lot of the pain out of designing new data center infrastructure solutions, removing the guesswork and bringing inherent simplicity, reliability, and resilience to the space. Over the last year or so, there has been a shift in the Cisco Validated Designs (CVDs) coming out of Cisco, and as things have moved forward, there is a noticeable change in the direction FlexPod is taking. I should note that some of what I discuss here is already clear, and some is mere conjecture. I think the direction FlexPod is going in is a symptom of changes in the industry, but it is clear that converged solutions are becoming more and more desirable as the Enterprise technology world moves forward.

So what are the key changes we are seeing?

The SDN elephant in the room 

Cisco’s ACI (Application-centric Infrastructure) has taken a while to get moving; the replacement of existing 3-tier network architecture with a leaf-spine network is something which is not going to happen overnight. The switched fabric is arguably a great solution for modern data centers, where east-west traffic is the bulk of network activity, and Spanning Tree Protocol continues to be the bane of network admins, but its implementation often requires either green-field deployments, or forklift replacement of large portions of the existing core networking of a data center.

That’s not to say ACI is not doing OK in terms of sales; Cisco’s figures, and case studies, seem to show that there is uptake, and some large customers taking it on. So how does this fit in with FlexPod? Well, the Cisco Validated Designs (CVDs) released over the last 12 months have all included the new Nexus 9000 series switches, rather than the previous stalwart Nexus 5000 series. These are ACI-capable switches which are also able to operate in the ‘legacy’ NX-OS mode. Capability-wise, in the context of FlexPod, there is not a lot of difference: they now have FCoE support, and can do vPC, QoS, Layer 3, and all the good stuff we have come to expect from Nexus switches.

So from what I can gather, the inclusion of 9K switches in the FlexPod line (outside of the FlexPod with ACI designs), is there to enable FlexPod customers to more easily move into the leaf/spine ACI network architecture at a later date, should they wish to do this. This makes sense, and the pricing on the 9Ks being used looks favourable over the 5Ks, so this is a win-win for customers, even if they don’t eventually decide to go with ACI.

40GbE standard 

Recent announcements around the Gen 3 UCS Fabric Interconnects have revealed that 40GbE is now going to be the standard for UCS connectivity, and the new chassis designs show 4 x 40GbE QSFP connections per I/O module, for a total of 320Gbps of bandwidth per chassis. This is an incredible throughput, and although I can’t see 99% of customers going anywhere near these levels, it does help to strengthen the UCS platform’s use cases for even the most high-performance environments, and reduces the requirement for Infiniband-type solutions in high-throughput environments.

Another interesting point, and following on from the ACI ramblings above, is that the new 6300 series Fabric Interconnects are now based on the Nexus 9300 switching line, rather than the Nexus 5K based 6200 series. This positions them perfectly to act as a leaf in an ACI fabric one day, should this become the eventual outcome of Cisco’s strategy.


Unified UCS software

With the announcements about the new UCS generation came the news that, from UCS firmware version 3.1, the software for UCS would now be unified across UCS classic, UCS Mini, and the newish M-Series systems. This simplifies things for people looking at version numbers and how they relate to the relevance of a release, and means that there should now be relative feature parity across all footprints of UCS systems.

The most exciting news, if you have experienced long-running pain with Java, is that the new release incorporates the HTML5 interface, which has been on UCS Mini since its release. I’m sure this will bring new challenges with it, but for now at least, it is something fresh to look forward to for those running UCS classic.

FlexPod Mini – now not so mini 


FlexPod Mini is based on the UCS Mini release, which came out around 18 months ago. In UCS Mini, the I/O Modules (or FEXs) in the UCS 5108 chassis are replaced with UCS 6324 pocket-sized Fabric Interconnects, ultimately cramming a single chassis of UCS, and the attached network equipment, into just 6 rack units. This could be expanded with C-Series servers, but the scale for UCS blades was strictly limited to the 8-blade limit of a standard chassis. With the announcement of the new Fabric Interconnect models came the news of a new ‘QSFP Scalability Port License’, which allows the 40GbE port on the UCS 6324 FI to be utilized with a 1 x QSFP to 4 x SFP+ cable, adding another chassis to the UCS Mini.

Personally I haven’t installed a UCS Mini, but the form factor is a great fit for certain use cases, and the more flexible this is, the more desire there will be to use this solution. For FlexPod, this ultimately means more suitable use cases, particularly in the ROBO (remote office/branch office) type scenario.

What’s Next? 

So with the FlexPod now having new switches, and new UCS hardware, it seems something new from NetApp is next on the list for FlexPod. The FAS8000 series is a couple of years old now, so we will likely see a refresh on this at some point, probably with 40GbE on board, more flash options, and faster CPUs. The recent purchase of SolidFire by NetApp will also quite probably see some kind of SolidFire based FlexPod CVD coming out of the Cisco/NetApp partnership in the near future.

We are also seeing the release of some exciting (or at least as exciting as these things can be!) new software releases this year: Windows Server 2016, vSphere 6.5 (assuming this will be revealed at VMworld), and OpenStack Mitaka, all of which will likely bring new CVDs.