Becoming a Briklayer

This month saw me start a new chapter in my career, as a DevOps Sales Engineer at Rubrik. This is an exciting leap for me, moving from the channel into vendorland, and to a company that is changing the market in data management.

Rubrik’s approach to product development, API first and rooted in modern distributed systems design, is evident when using their products, and was hugely attractive to me as a consumer. The API has truly full coverage, and is simple, consistent, and well documented. Plenty of software and hardware companies promote themselves as modern and API first, but using Rubrik in a POC I did earlier this year was truly a breath of fresh air.

This is my first foray into both a sales organisation, and to a vendor, but my feeling is that it will open my mind to the wider IT industry, and give me some great opportunities to not only see how the sausage is made, but also how to sell sausages (as a vegetarian, I should note that the sausages in this analogy are made from a soya derivative).

The last few years have been a fast-paced journey for me through systems engineering and automation, product development, and learning and implementing modern software engineering and delivery processes, both on-premises and in the cloud. These skills should come in very useful in my new role, helping customers and potential customers realise what a platform enabling end-to-end systems automation can deliver, and spreading the good word.

You can see the latest of what Rubrik are doing in their appearance at Cloud Field Day 2 a couple of weeks ago.

There is also a raft of the kind of DevOps tooling my new team works on available in the following GitHub repositories:

https://github.com/rubrikinc

https://github.com/rubrik-devops


rbvami – Managing the vCSA VAMI using Ruby

I have been putting together a module for managing the vCSA VAMI using Ruby. It uses the vCenter 6.5 REST API, and the long-term plan is to build it out to cover the entire REST API.

My intention is to use this module as the basis for a Chef cookbook for managing VAMI configuration; it is mainly a learning exercise rather than a practical one.
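As an illustration of what the module wraps (sketched in Python rather than Ruby, purely because it is quick to show), the underlying vCenter 6.5 REST API flow looks something like this; the vCSA address and credentials are placeholders:

#!/usr/bin/env python
# minimal sketch of the vCenter 6.5 REST API flow that rbvami wraps
import requests

# suppress self-signed certificate warnings, as with most lab appliances
requests.packages.urllib3.disable_warnings()

vcsa = '192.168.1.100'  # placeholder vCSA address
user = 'administrator@vsphere.local'
password = 'VMware1!'

# authenticate and grab a session token
s = requests.post('https://'+vcsa+'/rest/com/vmware/cis/session',
                  auth=(user, password), verify=False)
token = s.json()['value']

# read a VAMI property, e.g. the appliance version
r = requests.get('https://'+vcsa+'/rest/appliance/system/version',
                 headers={'vmware-api-session-id': token}, verify=False)
print(r.json()['value'])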

The project is on my GitHub site, feel free to contact me if there is functionality you would like to see added.

Cleaning up AWS OpsWorks Automate Nodes

I’ve been playing with Chef and AWS’ OpsWorks for Chef Automate product a lot in the last few weeks. One problem I had was that as I kept bootstrapping EC2 instances, using the excellent Knife EC2 plugin, the old nodes were not being cleaned out of the Chef Automate portal. I imagine this will be a common issue for folks using ephemeral-type workloads with Chef Automate in any cloud.

AWS’ documentation lists some AWS CLI commands to remove old nodes, but these do not seem to be present in the latest version of the AWS CLI (there is no ‘aws opsworks-cm’ command group now, so no way of managing OpsWorks for Chef Automate from the CLI).

I found this page on Chef’s highly recommended Learn Chef Rally training site, which led me to the solution. The following can be run from an SSH session on your Chef Automate server (or, in my case, as I had not assigned a keypair when creating my Automate server, through EC2 Systems Manager’s Run Command feature):

sudo automate-ctl delete-visibility-node <NODE_NAME>

If you have multiple nodes with the same name, you may receive the following response:

Multiple nodes were found matching your request. Please delete by ID using: automate-ctl delete-visibility-node-by-id NODE_UUID

Node UUID Node Name Org Name Chef Server
==================================== ========= ======== ===========
1c298e89-7c9f-4feb-b784-20b3858bfd6f webtest2 default chefautomate-1abcdefgo12abcde.eu-west-1.opsworks-cm.io
7f9b96df-7c02-4277-a5bb-879962b17136 webtest2 default chefautomate-1abcdefgo12abcde.eu-west-1.opsworks-cm.io
05f55344-2425-4764-8db6-9c0a0ef8d015 webtest2 default chefautomate-1abcdefgo12abcde.eu-west-1.opsworks-cm.io

You can delete these using the following command instead:

sudo automate-ctl delete-visibility-node-by-id <NODE_ID>
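If there are a lot of stale duplicates, a rough sketch like the following (run with sudo on the Automate server) could sweep them all up. This is a hypothetical helper, and it assumes automate-ctl produces the output format shown above:

#!/usr/bin/env python
# rough sketch: delete every visibility node matching a name, by parsing
# the UUID column from automate-ctl's "multiple nodes" output
import subprocess

node_name = 'webtest2'  # placeholder node name

p = subprocess.Popen(['automate-ctl', 'delete-visibility-node', node_name],
                     stdout=subprocess.PIPE)
out, _ = p.communicate()
for line in out.decode().splitlines():
    fields = line.split()
    # UUIDs are 36 characters long and contain four hyphens
    if fields and len(fields[0]) == 36 and fields[0].count('-') == 4:
        subprocess.call(['automate-ctl',
                         'delete-visibility-node-by-id', fields[0]])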

That wraps up this post; hopefully it comes in useful.

Deploying NX-OSv 9000 on vSphere

Cisco have recently (1st March 2017) released an updated virtual version of their Nexus 9K switch, and the good news is that it is now available as an OVA for deployment onto ESXi. We used to use VIRL in a lab, which was fine until a buggy earlier version of the virtual 9K was introduced, which broke core functionality like port channels. This new release doesn’t require the complex environment that VIRL brings, and lets you deploy a quick pair of appliances in vPC to test code against.

The download is available here, and while there are some instructions available, I did not find them particularly useful for deploying the switch to my ESXi environment. As a result, I decided to write up how I did it, to hopefully save people spending time smashing their face off it.

Getting the OVA

NOTE: you will need a Cisco login to download the OVA file. My login has access to a bunch of things, so I am not sure exactly what the requirements are around this.

There are a few versions available from the above link, including a qcow2 (KVM) image, a .vmdk file (for rolling your own VM), a VirtualBox image (for use with VirtualBox and/or Vagrant), and an OVA (for use with Fusion, Workstation, ESXi).

Once downloaded we are ready to deploy the appliance. There are a few things to bear in mind here:

  1. This can be used to pass VM traffic between virtual machines: there are 6 connected vNICs on deployment; 1 of these simulates the mgmt0 port on the 9K, and the other 5 can pass VM traffic.
  2. vNICs 2-6 should not be attached to the management network (best practice).
  3. We will initially need to connect over a virtual serial port through the host, which requires temporarily opening up the ESXi host firewall.

Deploying the OVA

You can deploy the OVA through the vSphere Web Client or the new vSphere HTML5 Web Client, but I’ve detailed below how to do it via PowerShell, because who’s got time for clicking buttons?



# Simulator is available at:
# https://software.cisco.com/download/release.html?mdfid=286312239&softwareid=282088129&release=7.0(3)I5(1)&relind=AVAILABLE&rellifecycle=&reltype=latest
# Filename: nxosv-final.7.0.3.I5.2.ova
# Documentation: http://www.cisco.com/c/en/us/td/docs/switches/datacenter/nexus9000/sw/7-x/nx-osv/configuration/guide/b_NX-OSv_9000/b_NX-OSv_chapter_01.html

Function New-SerialPort {
  # stolen from http://access-console-port-virtual-machine.blogspot.co.uk/2013/07/add-serial-port-to-vm-through-gui-or.html
  Param(
     [string]$vmName,
     [string]$hostIP,
     [string]$prt
  ) #end
  $dev = New-Object VMware.Vim.VirtualDeviceConfigSpec
  $dev.operation = "add"
  $dev.device = New-Object VMware.Vim.VirtualSerialPort
  $dev.device.key = -1
  $dev.device.backing = New-Object VMware.Vim.VirtualSerialPortURIBackingInfo
  $dev.device.backing.direction = "server"
  $dev.device.backing.serviceURI = "telnet://"+$hostIP+":"+$prt
  $dev.device.connectable = New-Object VMware.Vim.VirtualDeviceConnectInfo
  $dev.device.connectable.connected = $true
  $dev.device.connectable.StartConnected = $true
  $dev.device.yieldOnPoll = $true

  $spec = New-Object VMware.Vim.VirtualMachineConfigSpec
  $spec.DeviceChange += $dev

  $vm = Get-VM -Name $vmName
  $vm.ExtensionData.ReconfigVM($spec)
}

# Variables - edit these...
$ovf_location = '.\nxosv-final.7.0.3.I5.1.ova'
$n9k_name = 'NXOSV-N9K-001'
$target_datastore = 'VBR_MGTESX01_Local_SSD_01'
$target_portgroup = 'vSS_Mgmt_Network'
$target_cluster = 'VBR_Mgmt_Cluster'

$vi_server = '192.168.1.222'
$vi_user = 'administrator@vsphere.local'
$vi_pass = 'VMware1!'

# set this to $true to remove non-management network interfaces, $false to leave them where they are
$remove_additional_interfaces = $true

# Don't edit below here
Import-Module VMware.PowerCLI

Connect-VIServer $vi_server -user $vi_user -pass $vi_pass

$vmhost = $((Get-Cluster $target_cluster | Get-VMHost)[0])

$ovfconfig = Get-OvfConfiguration $ovf_location

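# map every network in the OVF to the chosen portgroup; the extra vNICs can be removed after deployment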
$ovfconfig.NetworkMapping.mgmt0.Value = $target_portgroup
$ovfconfig.NetworkMapping.Ethernet1_1.Value = $target_portgroup
$ovfconfig.NetworkMapping.Ethernet1_2.Value = $target_portgroup
$ovfconfig.NetworkMapping.Ethernet1_3.Value = $target_portgroup
$ovfconfig.NetworkMapping.Ethernet1_4.Value = $target_portgroup
$ovfconfig.NetworkMapping.Ethernet1_5.Value = $target_portgroup
$ovfconfig.DeploymentOption.Value = 'default'

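# deploy the OVA to the selected host, thin provisioned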
Import-VApp $ovf_location -OvfConfiguration $ovfconfig -VMHost $vmhost -Datastore $target_datastore -DiskStorageFormat Thin -Name $n9k_name

if ($remove_additional_interfaces) {
  Get-VM $n9k_name | Get-NetworkAdapter | ?{$_.Name -ne 'Network adapter 1'} | Remove-NetworkAdapter -Confirm:$false
}

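# add a network-backed virtual serial port listening on the host's vmk0 IP (port 2000), then allow it through the ESXi host firewall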
New-SerialPort -vmName $n9k_name -hostIP $($vmhost | Get-VMHostNetworkAdapter -Name vmk0 | Select -ExpandProperty IP) -prt 2000

$vmhost | Get-VMHostFirewallException -Name 'VM serial port connected over network' | Set-VMHostFirewallException -Enabled $true

Get-VM $n9k_name | Start-VM

This should start the VM. We will be able to telnet to the ESXi host on port 2000 (e.g. ‘telnet <esxi_host_ip> 2000’) to reach the VM console, but it will not be ready for us to do that until this screen is reached:

[screenshot: VM console output showing the switch has finished booting]

Now when we connect we should see:

[screenshot: NX-OSv console prompt after connecting over telnet]

At this point we can enter ‘n’ and go through the normal Nexus 9K setup wizard. Once the management IP and SSH are configured, you should be able to connect via SSH; the virtual serial port can then be removed via the vSphere Client, and the ‘VM serial port connected over network’ rule disabled again on the host firewall.

Pimping things up

Add more NICs

Obviously we removed the additional NICs from the VM above, leaving it able to talk only over its single management port. We can add more NICs, and the virtual switch will let us pass traffic over them; this could be an interesting way to pass actual VM traffic through the 9K.

Set up vPC

The switch is fully vPC (Virtual Port Channel) capable, so we can spin up another virtual N9K and put the pair in vPC mode; this is useful for experimenting with that feature.

Bring the API!

The switch is NXAPI capable, which was the main reason I wanted to deploy it, so that I could test REST calls against it. Enable NXAPI by entering the ‘feature nxapi’ command.
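As a quick smoke test once the feature is enabled, a JSON-RPC call like the Python sketch below should run a CLI command over the API. The management IP and credentials here are placeholders, and this assumes NXAPI is answering on its default HTTP transport:

#!/usr/bin/env python
# rough sketch: run 'show version' against NXAPI via JSON-RPC
import requests

switch = '192.168.1.50'  # placeholder management IP
payload = [{
    'jsonrpc': '2.0',
    'method': 'cli',
    'params': {'cmd': 'show version', 'version': 1},
    'id': 1,
}]
r = requests.post('http://'+switch+'/ins',
                  json=payload,
                  headers={'Content-Type': 'application/json-rpc'},
                  auth=('admin', 'password'))  # placeholder credentials
print(r.json())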

Conclusion

Hopefully this post will help people struggling to deploy this OVA, or wanting to test NX-OS in a lab environment. I found the Cisco documentation a little confusing, so thought I would share my experiences.

Replacing the ‘All Services’ Icon in vRealize Automation

I had a conversation with Ricky El-Qasem (@rickyelqasem) on Twitter this week about the ‘All Services’ icon in vRealize Automation, and whether it could be replaced programmatically.

For those who don’t know the pain of this particular element of vRA: when browsing the service catalog, groups of services are listed down the left-hand side of the page with icons next to them:

[screenshot: vRA service catalog with service group icons listed down the left-hand side]

These can all be changed, but until recently the top icon would remain a blue lego brick, which can make the otherwise slick portal look unsightly. This is shown in the image below:

[screenshot: the default blue lego brick icon next to ‘All Services’]

Luckily, as of vRA 7.1 this icon has been replaceable through the API, and the steps are documented in the accompanying guide here. This uses the REST API, and means you need to convert the PNG image into Base-64 encoding in order to push it to the API, which is a little too manual for me!

So I quickly threw vRA 7.2 up in my home lab and got to work. I chose to script it in Python because I could easily convert the image to Base-64, and I knew I could make the REST calls using the excellent ‘requests’ Python package (info available here). The code I used is available on my GitHub, and is shown below. I also created a script to delete the custom icon and return things to a vanilla state, you know, just in case 😉

Anyway, I hope this is useful for people who want to quickly and easily replace the icon.

#!/usr/bin/env python
# required packages, install with pip if not present
import requests
import json
# disable self-signed cert warnings
requests.packages.urllib3.disable_warnings()
# replace these variables
filename = 'service.png'
vra_ip = '192.168.1.227'
vra_user = 'administrator@vsphere.local'
vra_pass = 'VMware1!'
vra_tenant = 'vsphere.local'
# don't replace anything from here
# open file and encode it in b64
with open("./"+filename, "rb") as f:
    data = f.read()
    encoded = data.encode("base64")
encoded = encoded.replace("\r","")
encoded = encoded.replace("\n","")
# get our authorization token
uri = 'https://'+vra_ip+'/identity/api/tokens'
headers = {'Accept':'application/json','Content-Type':'application/json'}
payload = '{"username":"'+vra_user+'","password":"'+vra_pass+'","tenant":"'+vra_tenant+'"}'
r = requests.post(uri, headers=headers, verify=False, data=payload)
token = 'Bearer '+str(json.loads(r.text)["id"])
# send the new icon to the API
uri = 'https://'+vra_ip+'/catalog-service/api/icons'
headers = {'Accept':'application/json','Content-Type':'application/json','Authorization':token}
payload = '{"id":"cafe_default_icon_genericAllServices","fileName":"'+filename+'","contentType":"image/png","image":"'+encoded+'"}'
r = requests.post(uri, headers=headers, verify=False, data=payload)
if r.status_code == 201:
    print "Replacement successful"
else:
    print "Expected return code 201, got "+r.status_code+" something went wrong"

PowerCLI Core on Docker with macOS

Back in November, shortly after Microsoft’s open-sourcing of PowerShell and the subsequent cross-platform PowerShell Core release, VMware released their own cross-platform version of PowerCLI: PowerCLI Core. This was made available on GitHub for general consumption, and can be installed on top of PowerShell Core on a macOS or Linux machine.

I have loads of junk installed on my MacBook, including the very first public release of PowerShell Core, but keeping it all up to date, and knowing what I have installed, can be a pain. My preference for running PowerShell Core or PowerCLI Core at the moment is therefore through a Docker container on my laptop; this keeps the clutter down and makes upgrading easy.

In this post I’m going to show how to use the Docker container with an external editor, and how to run all your scripts from within the container, removing the need to install PowerShell Core on your Mac.

Pre-requisites

We will need to install a couple of bits of software before we begin:

  • Docker for Mac (the Community Edition)
  • A code editor of your choice

Beyond that, everything we need is downloaded below.

Getting the Docker image

VMware have made the PowerCLI Core Docker image available on Docker Hub (here), which is the easiest place to pull container images from, and the general go-to for public container images today. Once Docker CE is installed, the image can be downloaded with the command below:

Icarus:~ root$ docker pull vmware/powerclicore:latest
latest: Pulling from vmware/powerclicore
93b3dcee11d6: Already exists
d6641ceee635: Pull complete
62bbcce52faa: Pull complete
e86aa7a78685: Pull complete
db20fbdf24c0: Pull complete
37379feb8f29: Pull complete
8abb449d1e29: Pull complete
a9cd6d9452e7: Pull complete
50886ff01a73: Pull complete
74af7eaa49c1: Pull complete
878c611eaf2c: Pull complete
39b1b7978191: Pull complete
98e632013bea: Pull complete
4362432cb5ea: Pull complete
19f5f892ae79: Pull complete
29b0b093b159: Pull complete
913ad6409b89: Pull complete
ad5db0a55033: Pull complete
Digest: sha256:d33ac26c0c704a7aa48f5c7c66cb76ec3959beda2962ccd6a41a96351055b5d0
Status: Downloaded newer image for vmware/powerclicore:latest
Icarus:~ root$

This may take a couple of minutes, but the image should now be present on the local machine, and ready to fire up.

Getting the path for our scripts folder

Before we launch our container we need the path to our scripts folder; this can be any folder on your computer. In my case it is:

/Users/tim/Dropbox/Coding Projects/PowerShell/VMware

Launching our container

The idea here is to launch the container with a folder that is accessible from both inside and outside it, so we can edit our scripts with a full-fat editor on the Mac, and run them from inside the container.

To launch the container we use the following command; the switches used are explained below:

docker run --name PowerCLI --detach -it --rm --volume '/Users/tim/Dropbox/Coding Projects/PowerShell/VMware':/usr/scripts vmware/powerclicore:latest
  • --name – this sets the container name, making it easier to attach to the container later
  • --detach – this starts the container without attaching us to it immediately, so we can do anything else we need to before connecting
  • -it – this creates an interactive TTY connection, giving us the ability to interact with the console of the container
  • --rm – this deletes the container when we exit it, keeping the processes tidy on our machine
  • --volume … – this maps our scripts folder to /usr/scripts, so we can consume our scripts once in the container
  • vmware/powerclicore:latest – the name of the image to launch the container from

Now when we run this we will see the following output:

Icarus:~ root$ docker run --name PowerCLI --detach -it --rm --volume '/Users/tim/Dropbox/Coding Projects/PowerShell/VMware':/usr/scripts vmware/powerclicore:latest
c48ff51e3f824177da8e3b0fd0210e5864b01fea94ae5f5871b3654b4f5bcd35
Icarus:~ root$

This is the ID of our container; we won’t need it, as we will attach using the friendly name we gave the container anyway. When you are ready to attach, use the following command:

Icarus:~ root$ docker attach PowerCLI

You may need to press return a couple of times, but you should now have a shell that looks like this:

PS /powershell>

Now we are in the container, and should be able to access our scripts by changing directory to /usr/scripts.

If we run ‘Get-Module -ListAvailable’ we can see the modules installed in this Docker image:

PS /powershell> Get-Module -ListAvailable                                                                               

    Directory: /root/.local/share/powershell/Modules

ModuleType Version    Name                                ExportedCommands
---------- -------    ----                                ----------------
Binary     1.21       PowerCLI.Vds
Binary     1.21       PowerCLI.ViCore                     HookGetViewAutoCompleter
Script     2.1.0      PowerNSX                            {Add-XmlElement, Format-Xml, Invoke-NsxRestMethod, Invoke-...
Script     2.0.0      PowervRA                            {Add-vRAPrincipalToTenantRole, Add-vRAReservationNetwork, ...

    Directory: /opt/microsoft/powershell/6.0.0-alpha.14/Modules

ModuleType Version    Name                                ExportedCommands
---------- -------    ----                                ----------------
Manifest   1.0.1.0    Microsoft.PowerShell.Archive        {Compress-Archive, Expand-Archive}
Manifest   3.0.0.0    Microsoft.PowerShell.Host           {Start-Transcript, Stop-Transcript}
Manifest   3.1.0.0    Microsoft.PowerShell.Management     {Add-Content, Clear-Content, Clear-ItemProperty, Join-Path...
Manifest   3.0.0.0    Microsoft.PowerShell.Security       {Get-Credential, Get-ExecutionPolicy, Set-ExecutionPolicy,...
Manifest   3.1.0.0    Microsoft.PowerShell.Utility        {Format-List, Format-Custom, Format-Table, Format-Wide...}
Script     1.1.2.0    PackageManagement                   {Find-Package, Get-Package, Get-PackageProvider, Get-Packa...
Script     3.3.9      Pester                              {Describe, Context, It, Should...}
Script     1.1.2.0    PowerShellGet                       {Install-Module, Find-Module, Save-Module, Update-Module...}
Script     0.0        PSDesiredStateConfiguration         {IsHiddenResource, StrongConnect, Write-MetaConfigFile, Ge...
Script     1.2        PSReadLine                          {Get-PSReadlineKeyHandler, Set-PSReadlineKeyHandler, Remov...

PS /powershell>

So we have the PowerCLI Core module and the Distributed vSwitch module, as well as PowervRA and PowerNSX. We should be able to run our scripts from the /usr/scripts folder, or just run stuff from scratch.

The great thing is we can now edit the scripts in the folder mapped to /usr/scripts using our editor, with the changes available live inside the container to test, and we can even write output to this folder from within the container.

If you want to detach from the container without killing it, use the Ctrl+P, Ctrl+Q key combination; you can then reattach with ‘docker attach PowerCLI’. When you are done with the container, type ‘exit’ and it will quit and be removed.

Conclusion

Though basic, this workflow can really help when writing and testing scripts on macOS, while letting you easily keep up to date with the latest images, and not fill your Mac with more junk.

Dockerising a GoLang Application

I was recently messing about with GoLang to get my head around it, and came up with a small program that outputs a timestamp and a random name, based on the Docker container random name generator (here). The plan was to use this to populate a database with random junk for when I’m testing software against database servers.

I got the code working; you can see it on my GitHub site here. In this article I’m going to focus on the difference between building a container image from the official GoLang image and building one from scratch around a Go binary, and the benefits of the latter. To my mind this is a very powerful way of delivering Go applications, keeping them both tiny and self-contained, and it sets Go apart from other popular languages, which end up dragging baggage around in their dependencies.

The steps below go through building a container image from a Go file, first using the golang image from Docker Hub, and then from scratch, comparing the sizes of the resulting images.

The steps below were all carried out on macOS 10.12, Go 1.8, and Docker 17.03.

Image based on golang official Docker image

For this stage we need our Go file in the current folder, along with the following Dockerfile:

FROM golang:latest
RUN mkdir /app
ADD <path_to_go_file>.go /app/
WORKDIR /app
RUN go build -o <output_file> .
CMD ["/app/<output_file>"]

We can then build this into an image:

docker build -t <your_name>/<image_name> -f Dockerfile .

We can then use ‘docker images’ to show the image we created:

REPOSITORY               TAG                 IMAGE ID            CREATED             SIZE
timhynes/name_gen_full   latest              ffc0ef4bac73        3 seconds ago       698 MB

Image built from scratch with Go binary

To build the image from scratch, we first need to build our Go application into a binary. We do this with the below command:

CGO_ENABLED=0 GOOS=linux go build -a -installsuffix cgo -o <output_file> <path_to_go_file> 

The ‘GOOS’ flag defines the target OS for the binary; in our case we choose Linux so that it will run in our Docker container. The ‘CGO_ENABLED=0’ flag prevents linking to external C libraries (more info here), and means the binary is fully self-contained.

Once this is run, the binary will be created in the location specified. This could then be ported around Linux systems and run as a compiled application, but we are going to build this into a Docker image instead.

To build the Docker image, as shown earlier, we need a Dockerfile; the one I used for this stage is shown below:

FROM scratch
ADD <output_file> /
CMD ["<output_file>"]

This should be saved as ‘Dockerfile’ in the same folder as the binary. We can now run the ‘docker build’ command to create our image:

docker build -t <your_name>/<image_name> -f Dockerfile .

At this point we should have an image in our local repository, which for my application comes in at under 2 MB:

REPOSITORY            TAG                 IMAGE ID            CREATED             SIZE
timhynes/name_gen     latest              436a832943c8        12 seconds ago      1.77 MB

Conclusion

This shows that even when the end goal is the same, a containerised application, compiling the GoLang code to a static binary and using that as the sole contents of the image can save huge amounts of space. In the example above, the resulting image was over 300 times smaller when built from the binary alone.

I guess the takeaway is that not all containers are made equal, and thinking about the way we package our Docker application can make a large difference to how easily we can deliver and run it.