rbvami – Managing the vCSA VAMI using Ruby

I have been putting together a module for managing vCSA’s VAMI using Ruby. This uses the vCenter 6.5 REST API, and the long term plan is to build it out to cover the entire REST API.

My intention is to use this module as the basis for a Chef cookbook for managing VAMI configuration; it is mainly a learning exercise rather than a practical one.

The project is on my GitHub site, feel free to contact me if there is functionality you would like to see added.

Cleaning up AWS OpsWorks Automate Nodes

I’ve been playing with Chef and AWS’ OpsWorks Automate product a lot in the last few weeks. One problem I had was that as I kept bootstrapping EC2 instances using the excellent Knife EC2 tool, the old nodes were not being cleaned up out of the Chef Automate portal. I imagine this will be a common issue for folks running ephemeral-type workloads with Chef Automate in any cloud.

AWS’ documentation has some AWS CLI commands to run to remove old nodes, but these commands do not seem to be present in the latest version of the AWS CLI (there is no ‘aws opsworks-cm’ command group in the CLI now, so no way of managing OpsWorks Automate from it).

I found a page on Chef’s highly recommended Learn Chef Rally training site which led me to the way to do this. The following can be run from an SSH connection into your Chef Automate server (or, in my case, as I had not assigned a keypair on creation of my Automate server, through EC2 Systems Manager’s Run Command feature):

sudo automate-ctl delete-visibility-node <NODE_NAME>

If you have multiple nodes with the same name, you may receive the following response:

Multiple nodes were found matching your request. Please delete by ID using: automate-ctl delete-visibility-node-by-id NODE_UUID

Node UUID                            Node Name  Org Name  Chef Server
====================================  =========  ========  ===========
1c298e89-7c9f-4feb-b784-20b3858bfd6f  webtest2   default   chefautomate-1abcdefgo12abcde.eu-west-1.opsworks-cm.io
7f9b96df-7c02-4277-a5bb-879962b17136  webtest2   default   chefautomate-1abcdefgo12abcde.eu-west-1.opsworks-cm.io
05f55344-2425-4764-8db6-9c0a0ef8d015  webtest2   default   chefautomate-1abcdefgo12abcde.eu-west-1.opsworks-cm.io

You can delete these using the following command instead:

sudo automate-ctl delete-visibility-node-by-id <NODE_ID>
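
If you have a lot of stale nodes to clear out, this lends itself to a quick script. Below is a rough sketch of my own (not from the AWS or Chef documentation) which runs on the Automate server, scrapes the node UUIDs from the duplicate-node listing shown above, and deletes each one by ID; the node name and the exact output format are assumptions based on the example output:

#!/usr/bin/env python
# rough cleanup sketch: assumes the duplicate-node listing format shown above
import re
import subprocess

node_name = 'webtest2'  # assumed node name, taken from the example output

# try the delete by name; if duplicates exist, automate-ctl prints the ID table
proc = subprocess.Popen(
    ['sudo', 'automate-ctl', 'delete-visibility-node', node_name],
    stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
output = proc.communicate()[0].decode()

# scrape every UUID from the listing and delete each node by ID
for node_id in re.findall(r'[0-9a-f]{8}(?:-[0-9a-f]{4}){3}-[0-9a-f]{12}', output):
    subprocess.check_call(
        ['sudo', 'automate-ctl', 'delete-visibility-node-by-id', node_id])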

That wraps up the post; hopefully it comes in useful for people.

Deploying NX-OSv 9000 on vSphere

Cisco have recently released (1st March 2017) an updated virtual version of their Nexus 9K switch, and the good news is that it is now available as an OVA for deployment onto ESXi. We used to use VIRL in a lab, which was fine until a buggy earlier version of the virtual 9K was introduced which broke core functionality like port channels. This new release doesn’t require the complex environment that VIRL brings, and lets you quickly deploy a pair of appliances in vPC to test code against.

The download is available here, and while there are some instructions available, I did not find them particularly useful for deploying the switch to my ESXi environment. As a result, I decided to write up how I did it, to hopefully save people spending time smashing their face off it.

Getting the OVA

NOTE: you will need a Cisco login to download the OVA file. My login has access to a bunch of bits, so I’m not sure exactly what the entitlement requirements are.

There are a few versions available from the above link, including a qcow2 (KVM) image, a .vmdk file (for rolling your own VM), a VirtualBox image (for use with VirtualBox and/or Vagrant), and an OVA (for use with Fusion, Workstation, ESXi).

Once downloaded we are ready to deploy the appliance. There are a few things to bear in mind here:

  1. This can be used to pass VM traffic between virtual machines: there are 6 connected vNICs on deployment, 1 of which simulates the mgmt0 port on the 9K, while the other 5 can pass VM traffic.
  2. vNICs 2-6 should not be attached to the management network (best practice).
  3. We will initially need to connect over a virtual serial port through the host, which requires temporarily opening up the ESXi host firewall.

Deploying the OVA

You can deploy the OVA through the vSphere Web Client or the new vSphere HTML5 Web Client, but I’ve detailed how to do it via PowerShell below, because who’s got time for clicking buttons?

# Simulator is available at:
# https://software.cisco.com/download/release.html?mdfid=286312239&softwareid=282088129&release=7.0(3)I5(1)&relind=AVAILABLE&rellifecycle=&reltype=latest
# Filename: nxosv-final.7.0.3.I5.2.ova
# Documentation: http://www.cisco.com/c/en/us/td/docs/switches/datacenter/nexus9000/sw/7-x/nx-osv/configuration/guide/b_NX-OSv_9000/b_NX-OSv_chapter_01.html

Function New-SerialPort {
  # stolen from http://access-console-port-virtual-machine.blogspot.co.uk/2013/07/add-serial-port-to-vm-through-gui-or.html
  Param(
     [string]$vmName,
     [string]$hostIP,
     [string]$prt
  ) #end
  $dev = New-Object VMware.Vim.VirtualDeviceConfigSpec
  $dev.operation = "add"
  $dev.device = New-Object VMware.Vim.VirtualSerialPort
  $dev.device.key = -1
  $dev.device.backing = New-Object VMware.Vim.VirtualSerialPortURIBackingInfo
  $dev.device.backing.direction = "server"
  $dev.device.backing.serviceURI = "telnet://"+$hostIP+":"+$prt
  $dev.device.connectable = New-Object VMware.Vim.VirtualDeviceConnectInfo
  $dev.device.connectable.connected = $true
  $dev.device.connectable.StartConnected = $true
  $dev.device.yieldOnPoll = $true

  $spec = New-Object VMware.Vim.VirtualMachineConfigSpec
  $spec.DeviceChange += $dev

  $vm = Get-VM -Name $vmName
  $vm.ExtensionData.ReconfigVM($spec)
}

# Variables - edit these...
$ovf_location = '.\nxosv-final.7.0.3.I5.1.ova'
$n9k_name = 'NXOSV-N9K-001'
$target_datastore = 'VBR_MGTESX01_Local_SSD_01'
$target_portgroup = 'vSS_Mgmt_Network'
$target_cluster = 'VBR_Mgmt_Cluster'

$vi_server = '192.168.1.222'
$vi_user = 'administrator@vsphere.local'
$vi_pass = 'VMware1!'

# set this to $true to remove non-management network interfaces, $false to leave them where they are
$remove_additional_interfaces = $true

# Don't edit below here
Import-Module VMware.PowerCLI

Connect-VIServer $vi_server -user $vi_user -pass $vi_pass

$vmhost = $((Get-Cluster $target_cluster | Get-VMHost)[0])

$ovfconfig = Get-OvfConfiguration $ovf_location

$ovfconfig.NetworkMapping.mgmt0.Value = $target_portgroup
$ovfconfig.NetworkMapping.Ethernet1_1.Value = $target_portgroup
$ovfconfig.NetworkMapping.Ethernet1_2.Value = $target_portgroup
$ovfconfig.NetworkMapping.Ethernet1_3.Value = $target_portgroup
$ovfconfig.NetworkMapping.Ethernet1_4.Value = $target_portgroup
$ovfconfig.NetworkMapping.Ethernet1_5.Value = $target_portgroup
$ovfconfig.DeploymentOption.Value = 'default'

Import-VApp $ovf_location -OvfConfiguration $ovfconfig -VMHost $vmhost -Datastore $target_datastore -DiskStorageFormat Thin -Name $n9k_name

if ($remove_additional_interfaces) {
  Get-VM $n9k_name | Get-NetworkAdapter | ?{$_.Name -ne 'Network adapter 1'} | Remove-NetworkAdapter -Confirm:$false
}

New-SerialPort -vmName $n9k_name -hostIP $($vmhost | Get-VMHostNetworkAdapter -Name vmk0 | Select -ExpandProperty IP) -prt 2000

$vmhost | Get-VMHostFirewallException -Name 'VM serial port connected over network' | Set-VMHostFirewallException -Enabled $true

Get-VM $n9k_name | Start-VM

This should start the VM. We can now telnet to the host on port 2000 to reach the VM console, but the switch will not be ready for input until it has finished its boot sequence; once it has booted, connecting will drop us at the initial setup prompt.

At this point we can enter ‘n’ and go through the normal Nexus 9K setup wizard. Once the management IP and SSH are configured, you should be able to connect via SSH; the virtual serial port can then be removed via the vSphere Client, and the ‘VM serial port connected over network’ rule disabled again on the host firewall.

Pimping things up

Add more NICs

Obviously we removed the additional NICs from the VM above, leaving it able to talk only over its single management port. We can add a bunch more NICs, and the virtual switch will let the 9K pass traffic over them; this could be an interesting use case for passing actual VM traffic through the 9K.

Set up vPC

The switch is fully vPC (Virtual Port Channel) capable, so we can spin up another virtual N9K and put the pair in vPC mode; this is useful for experimenting with that feature.

Bring the API!

The switch is NXAPI capable, which was the main reason I wanted to deploy it, so that I could test REST calls against it. Enable NXAPI by entering the ‘feature nxapi’ command.
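
To sanity-check the API, you can throw a ‘show’ command at the NXAPI endpoint. Below is a minimal sketch of my own using Python’s requests package; the management IP and credentials are placeholders, and it assumes NXAPI is serving HTTP with its default settings:

#!/usr/bin/env python
# minimal NXAPI sketch: run 'show version' against the virtual 9K
# placeholder management IP and credentials; assumes default HTTP transport
import json
import requests

url = 'http://192.168.1.50/ins'
payload = {
    'ins_api': {
        'version': '1.0',
        'type': 'cli_show',
        'chunk': '0',
        'sid': '1',
        'input': 'show version',
        'output_format': 'json'
    }
}
r = requests.post(url, auth=('admin', 'admin'),
                  headers={'Content-Type': 'application/json'},
                  data=json.dumps(payload))
print(r.json()['ins_api']['outputs']['output']['body'])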

Conclusion

Hopefully this post will help people struggling to deploy this OVA, or wanting to test out NX-OS in a lab environment. I found the Cisco documentation a little confusing, so thought I would share my experiences.

Replacing the ‘All Services’ Icon in vRealize Automation

I had a conversation with Ricky El-Qasem (@rickyelqasem) on Twitter this week about the ‘All Services’ logo in vRealize Automation, and whether this could be replaced programmatically.

For those who don’t know the pain of this particular element of vRA: when browsing the service catalog, groups of services are listed down the left-hand side of the page, each with an icon next to it.

These icons can all be changed, but until recently the top one would remain a blue Lego brick, which can make the otherwise slick portal look unsightly.

Now luckily, from vRA 7.1 onwards this icon has been replaceable through the API, and the steps are documented in the accompanying guide here. This uses the REST API, and means you need to convert the PNG image into Base64 encoding in order to push it to the API; a little too manual for me!

So I quickly threw vRA 7.2 up in my home lab and got to work. I chose to script it in Python because I found I could easily convert the image to Base64, and I knew I could make the REST calls using the excellent ‘requests’ Python package (info available here). The code I used is available on my GitHub, and is shown below. I also created a script to delete the custom icon and return things to a vanilla state, you know, just in case 😉

Anyway, I hope this is useful for people who want to quickly and easily replace the icon.

#!/usr/bin/env python
# required packages, install with pip if not present
import requests
import json
# disable self-signed cert warnings
requests.packages.urllib3.disable_warnings()
# replace these variables
filename = 'service.png'
vra_ip = '192.168.1.227'
vra_user = 'administrator@vsphere.local'
vra_pass = 'VMware1!'
vra_tenant = 'vsphere.local'
# don't replace anything from here
# open file and encode it in b64
with open("./"+filename, "rb") as f:
    data = f.read()
    encoded = data.encode("base64")
encoded = encoded.replace("\r","")
encoded = encoded.replace("\n","")
# get our authorization token
uri = 'https://'+vra_ip+'/identity/api/tokens'
headers = {'Accept':'application/json','Content-Type':'application/json'}
payload = '{"username":"'+vra_user+'","password":"'+vra_pass+'","tenant":"'+vra_tenant+'"}'
r = requests.post(uri, headers=headers, verify=False, data=payload)
token = 'Bearer '+str(json.loads(r.text)["id"])
# send the new icon to the API
uri = 'https://'+vra_ip+'/catalog-service/api/icons'
headers = {'Accept':'application/json','Content-Type':'application/json','Authorization':token}
payload = '{"id":"cafe_default_icon_genericAllServices","fileName":"'+filename+'","contentType":"image/png","image":"'+encoded+'"}'
r = requests.post(uri, headers=headers, verify=False, data=payload)
if r.status_code == 201:
    print "Replacement successful"
else:
    print "Expected return code 201, got "+str(r.status_code)+", something went wrong"

PowerCLI Core on Docker with macOS

Back in November, shortly after Microsoft’s open-sourcing of PowerShell, and subsequent cross-platform PowerShell Core release, VMware released their cross-platform version of PowerCLI: PowerCLI Core. This was made available on GitHub for general consumption, and can be installed on top of PowerShell Core on a macOS/OSX or Linux machine.

I have loads of junk installed on my MacBook, including the very first public release of PowerShell Core, but keeping it all up to date and knowing what I have installed can be a pain. So my preference for running PowerShell Core or PowerCLI Core at the moment is through a Docker container on my laptop; this keeps the clutter down and makes upgrading easy.

In this post I’m going to show how to use the Docker container with an external editor, and be able to run all your scripts from within the container, removing the need to install PowerShell Core on your Mac.

Pre-requisites

We will need to install a couple of bits of software before we begin:

  • Docker for Mac (the Docker CE release)
  • An external editor of your choice

Beyond that, we will download everything we need below.

Getting the Docker image

VMware have made the PowerCLI Core Docker image available on Docker Hub (here); this is the easiest place to pull container images to your desktop, and the general go-to for public container images as of today. Once Docker CE is installed, the image can be downloaded with the command below:

Icarus:~ root$ docker pull vmware/powerclicore:latest
latest: Pulling from vmware/powerclicore
93b3dcee11d6: Already exists
d6641ceee635: Pull complete
62bbcce52faa: Pull complete
e86aa7a78685: Pull complete
db20fbdf24c0: Pull complete
37379feb8f29: Pull complete
8abb449d1e29: Pull complete
a9cd6d9452e7: Pull complete
50886ff01a73: Pull complete
74af7eaa49c1: Pull complete
878c611eaf2c: Pull complete
39b1b7978191: Pull complete
98e632013bea: Pull complete
4362432cb5ea: Pull complete
19f5f892ae79: Pull complete
29b0b093b159: Pull complete
913ad6409b89: Pull complete
ad5db0a55033: Pull complete
Digest: sha256:d33ac26c0c704a7aa48f5c7c66cb76ec3959beda2962ccd6a41a96351055b5d0
Status: Downloaded newer image for vmware/powerclicore:latest
Icarus:~ root$

This may take a couple of minutes, but the image should now be present on the local machine, and ready to fire up.

Getting the path for our scripts folder

Before we launch our container we need the path to our scripts folder; this can be a folder anywhere on your computer. In my case it is:

/Users/tim/Dropbox/Coding Projects/PowerShell/VMware

Launching our container

The idea here is to mount a folder which is accessible from both inside and outside our container, so we can edit the scripts with our full fat editor, and run them from inside the container.

To launch the container we use the following command; I explain the switches used below:

docker run --name PowerCLI --detach -it --rm --volume '/Users/tim/Dropbox/Coding Projects/PowerShell/VMware':/usr/scripts vmware/powerclicore:latest
  • --name – sets the container name, which will make it easier when we want to attach to the container
  • --detach – starts the container without attaching us to it immediately, meaning if there is anything else we need to do before connecting we can
  • -it – creates an interactive TTY connection, giving us the ability to interact with the console of the container
  • --rm – deletes the container when we exit it, which keeps the processes on our machine tidy
  • --volume … – maps our scripts folder to /usr/scripts, so we can consume our scripts once in the container
  • vmware/powerclicore:latest – the name of the image to launch the container from

Now when we run this we will see the following output:

Icarus:~ root$ docker run --name PowerCLI --detach -it --rm --volume '/Users/tim/Dropbox/Coding Projects/PowerShell/VMware':/usr/scripts vmware/powerclicore:latest
c48ff51e3f824177da8e3b0fd0210e5864b01fea94ae5f5871b3654b4f5bcd35
Icarus:~ root$

This is the ID of our container; we won’t need it, as we will attach using the friendly name we gave the container. When you are ready to attach, use the following command:

Icarus:~ root$ docker attach PowerCLI

You may need to press return a couple of times, but you should now have a shell that looks like this:

PS /powershell>

Now we are in the container, and should be able to access our scripts by changing directory to /usr/scripts.

If we run ‘Get-Module -ListAvailable’ we can see the modules installed in this Docker image:

PS /powershell> Get-Module -ListAvailable                                                                               

    Directory: /root/.local/share/powershell/Modules

ModuleType Version    Name                                ExportedCommands
---------- -------    ----                                ----------------
Binary     1.21       PowerCLI.Vds
Binary     1.21       PowerCLI.ViCore                     HookGetViewAutoCompleter
Script     2.1.0      PowerNSX                            {Add-XmlElement, Format-Xml, Invoke-NsxRestMethod, Invoke-...
Script     2.0.0      PowervRA                            {Add-vRAPrincipalToTenantRole, Add-vRAReservationNetwork, ...

    Directory: /opt/microsoft/powershell/6.0.0-alpha.14/Modules

ModuleType Version    Name                                ExportedCommands
---------- -------    ----                                ----------------
Manifest   1.0.1.0    Microsoft.PowerShell.Archive        {Compress-Archive, Expand-Archive}
Manifest   3.0.0.0    Microsoft.PowerShell.Host           {Start-Transcript, Stop-Transcript}
Manifest   3.1.0.0    Microsoft.PowerShell.Management     {Add-Content, Clear-Content, Clear-ItemProperty, Join-Path...
Manifest   3.0.0.0    Microsoft.PowerShell.Security       {Get-Credential, Get-ExecutionPolicy, Set-ExecutionPolicy,...
Manifest   3.1.0.0    Microsoft.PowerShell.Utility        {Format-List, Format-Custom, Format-Table, Format-Wide...}
Script     1.1.2.0    PackageManagement                   {Find-Package, Get-Package, Get-PackageProvider, Get-Packa...
Script     3.3.9      Pester                              {Describe, Context, It, Should...}
Script     1.1.2.0    PowerShellGet                       {Install-Module, Find-Module, Save-Module, Update-Module...}
Script     0.0        PSDesiredStateConfiguration         {IsHiddenResource, StrongConnect, Write-MetaConfigFile, Ge...
Script     1.2        PSReadLine                          {Get-PSReadlineKeyHandler, Set-PSReadlineKeyHandler, Remov...

PS /powershell>

So we have the PowerCLI Core module and the Distributed vSwitch module, as well as PowervRA and PowerNSX. We should be able to run our scripts from the /usr/scripts folder, or just run stuff from scratch.

The great thing is we can now edit our scripts in the folder mapped to /usr/scripts using our editor, and the changes are available live inside the container for testing; we can even write output to this folder from within the container.

If you want to detach from the container without killing it, use the Ctrl-P followed by Ctrl-Q key combination; you can then reattach with ‘docker attach PowerCLI’. When you are done with the container, type ‘exit’ and it will quit and be removed.

Conclusion

Though basic, this setup can really help with workflow when writing and testing scripts on macOS, while letting you keep up to date with the latest images easily, and not fill your Mac with more junk.

vSphere Automation SDKs

This week VMware open sourced their vSphere Automation SDKs for REST and Python. The REST API was released with vSphere 6.0, while the Python SDK has been around for nearly four years now. I’m going to summarise the contents of this release below, and where these can help us make more of our vSphere environments.

REST API

The vSphere REST API has been growing since the release of vSphere 6 nearly two years ago, and brings access to the following areas of vSphere with its current release:

  • Session management
  • Tagging
  • Content Library
  • Virtual Machines
  • vCenter Server Appliance management

These cover mainly the newer features from vSphere 6.0 onwards (this API was formerly known as the vCloud Suite SDK), plus some of the new bits put together for modernising API access in vSphere 6.5. The Virtual Machine management is particularly useful, allowing REST-based methods to operate on, and report against, VMs in your environment; very useful for people looking to write quick integrations with things like vRealize Orchestrator, where the built-in plugins do not do what you want.
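
To give a flavour of what this looks like, below is a minimal sketch of my own (not one of the SDK samples) that authenticates to the vSphere 6.5 REST API and lists VMs using Python’s requests package; the vCenter address and credentials are placeholders:

#!/usr/bin/env python
# minimal sketch: authenticate to the vSphere 6.5 REST API and list VMs
# the vCenter address and credentials below are placeholders
import requests
requests.packages.urllib3.disable_warnings()

vcenter = '192.168.1.222'
user = 'administrator@vsphere.local'
password = 'VMware1!'

# create a session; the session token comes back in the response body
s = requests.Session()
s.verify = False
r = s.post('https://'+vcenter+'/rest/com/vmware/cis/session', auth=(user, password))
s.headers['vmware-api-session-id'] = r.json()['value']

# list the VMs in the inventory with their power state
for vm in s.get('https://'+vcenter+'/rest/vcenter/vm').json()['value']:
    print(vm['name'] + ' (' + vm['power_state'] + ')')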

The new material, available on GitHub, contains two main components:

Postman Collection

Postman is a REST client used to explore APIs, providing a nice graphical display of the request-response type methods used for REST. This is a great way to get your head round what is happening with requests, and helps to build up an idea of what is going on with the API.

Pre-built packs of requests can be gathered together in Postman ‘Collections’; these can then be distributed (in JSON format) and loaded into another instance of Postman. This can be crucially important in documenting the functionality of APIs, especially when the documentation is lacking.

There are some instructions on how to set this up here; if you are new to REST APIs, or just want a quick way to have a play with the new vSphere REST APIs, you could do far worse than starting here.

Node.js Sample Pack

Node.js has taken over the world of server-side web programming, and thanks to the simple syntax of JavaScript, is easy to pick up and get started with. This pack (available here) has some samples of Node.js code for interacting with the REST API. It is a good place to start with seeing how web requests and responses are dealt with in Node, and how we can programmatically carry out administrative tasks.

These could be integrated into a web based portal to do the requests directly, or I can see these being used in the future as part of a serverless administration platform, using something like AWS Lambda along with a monitoring platform to automate the administration of a vSphere environment.

Python SDK

Python has been an incredibly popular language for automation for a number of years. Its very low barrier to entry makes it ideal to pick up and learn, with a wealth of possibilities for building on solid, simple foundations to make highly complex software solutions. VMware released their ‘pyvmomi’ Python SDK back in 2013, and it has received consistent updates since then. While not as popular or as promoted as their PowerCLI PowerShell module, it has nevertheless had strong usage and support from the community.
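
For anyone who has not used pyvmomi before, a minimal sketch of connecting to vCenter and printing VM names is below (the address and credentials are placeholders, and the unverified SSL context is for lab use only):

#!/usr/bin/env python
# minimal pyvmomi sketch: connect to vCenter and print all VM names
# placeholder vCenter address and credentials; lab-only SSL handling
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host='192.168.1.222',
                  user='administrator@vsphere.local',
                  pwd='VMware1!',
                  sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()

# build a container view over all VirtualMachine objects in the inventory
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.VirtualMachine], True)
for vm in view.view:
    print(vm.name)

view.Destroy()
Disconnect(si)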

The release on offer as part of the vSphere Automation SDKs consists of scripts to spin up a demo environment for developing with the Python SDK, as well as a number of sample scripts demonstrating the functionality of the new APIs released in vSphere 6.0 and 6.5.

The continued growth in popularity of Python, along with leading automation toolsets like Ansible being built on it, makes it a great platform to push this kind of development and publicity on. As with Node.js, serverless platforms widely support Python, so this could be integrated with Lambda, Fission, or other FaaS platforms in the future.

Conclusion

It’s great to see VMware really getting behind developing and pushing their automation toolkits in the open, they are to my mind a leader in the industry in terms of making their products programmable, and I hope they continue at this pace and in this vein. The work shown in this release will help make it easier for people new to automation to get involved and start reaping the benefits that it can bring, and the possibilities for combining these vSphere SDKs with serverless administration will be an interesting area to watch.

AWS REST API Authentication Using Node.js

I’ve been learning as much as I can on Amazon Web Services over the last couple of months; the looming shadow of it over traditional IT finally got too much, and I figured it was time to make the leap. Overall it’s been a great experience, and the biggest takeaway I’ve probably had is how every service, and the way in which we consume them, are application-centric.
Every service is fully API first, with the AWS Management Console basically acting as a front end for the API calls made to the vast multitude of services. I’ve done a fair amount of work with REST APIs over the last 18 months, and it’s always good to fire up Postman (if you don’t know what this is, there is a post here I did about REST APIs, and the use of Postman), and throw a few API calls at a new technology to see how it works.
Now while AWS services are all available via REST APIs, there are a tonne of tools available for both administrators and developers which abstract away the nitty gritty; we have:
  • AWS CLI – a CLI based tool for Windows/Linux/OSX (available here)
  • AWS Tools for Windows PowerShell – the PowerShell module for consuming AWS services (available here)
  • SDKs (Software Development Kits) for the following (all available here):
    • Android
    • Browsers (basically a JavaScript SDK you can build web services around)
    • iOS
    • Java
    • .NET
    • Node.js
    • PHP
    • Python
    • Ruby
    • GoLang
    • C++
    • AWS IoT
    • AWS Mobile
Combined, these provide a wide variety of pre-built solutions for talking to AWS-based resources, and there should be something that any developer or admin can use to introduce some automation or programmability into their work. I would recommend using one of these if at all possible, to abstract away the heavy lifting of working with a very broad and deep API.
I wanted to get stuck in from the REST API side though, which basically means building things from the ground up. This turned out to take a fair amount of time, but I learned a heck of a lot about the authentication and authorisation process for AWS, and how this helps to prevent unauthorised access.
The full authentication process is described in the AWS Documentation available here. There are pages and pages describing the V4 authentication process (the current recommended version), and this gets pretty complicated. I’m going to try and break it down here, showing the bits of code used to create each element; this should hopefully make it a bit clearer.
One post I found really useful on this was by Lukasz Adamczak (@lukasz_adamczak), on how to do the authentication with Curl, which I used as the basis for some of what I did below. I couldn’t find anything where someone was doing this task via the REST API in JavaScript.

Pre-requisites

The following variables need to be set in the script before we start:
// our variables
var access_key = 'ACCESS_KEY_VALUE'
var secret_key = 'SECRET_KEY_VALUE'
var region = 'eu-west-1';
var url = 'my-bucket-name.s3.amazonaws.com';
var myService = 's3';
var myMethod = 'GET';
var myPath = '/';
In addition to this, we have these package dependencies:
// declare our dependencies
var crypto = require('crypto-js');
var https = require('https');
var xml = require('xml2js');
The https module is built into the version of Node I was using (v6.9.5), but I had to use NPM to install the crypto-js and xml2js modules.

Amazon Date Format

I started with the date format used by AWS in authentication; this is based on the ISO 8601 format, but with the punctuation and milliseconds removed. I created the function below, and use it to create these two variables:
// get the various date formats needed to form our request
var amzDate = getAmzDate(new Date().toISOString());
var authDate = amzDate.split("T")[0];

// this function converts the generic JS ISO8601 date format to the specific format the AWS API wants
function getAmzDate(dateStr) {
  var chars = [":","-"];
  for (var i=0;i<chars.length;i++) {
    while (dateStr.indexOf(chars[i]) != -1) {
      dateStr = dateStr.replace(chars[i],"");
    }
  }
  dateStr = dateStr.split(".")[0] + "Z";
  return dateStr;
}
We’ll come back to this later, but the reason there are two variables for the date (amzDate, authDate) is that in generating the headers for our REST call we will need both formats at different times: one is in the ‘YYYYMMDDTHHmmssZ’ format, and one is in the ‘YYYYMMDD’ format.

Our payload

The example used in this script uses a blank payload, for which we calculate the SHA256 hash. This is obviously always the same when calculated against a blank string (e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855, if you were interested :p), but I included the hashing of this in the script so the logic is there if we later want to send real payloads.
// we have an empty payload here because it is a GET request
var payload = '';
// get the SHA256 hash value for our payload
var hashedPayload = crypto.SHA256(payload).toString();
This hashed payload is used a bunch in the final request, including in the ‘x-amz-content-sha256’ HTTP header to validate the expected payload.

Canonical Request

This is where things got a bit confusing for me; we need to build the canonical request for our message (in AWS’ special format), and work out what the SHA256 hash of it is. First we need to know the formatting for the canonical request: ultimately it is a multi-line string, consisting of the following attributes:
HTTPRequestMethod
CanonicalURI
CanonicalQueryString
CanonicalHeaders
SignedHeaders
HexEncode(Hash(RequestPayload))
These attributes are described as:
  • HTTPRequestMethod – the HTTP method being used, could be GET, POST, etc. In our example this will be GET
  • CanonicalURI – the relative URI for the resource we are accessing. In our example we access the root namespace of our bucket, so this is set to “/”
  • CanonicalQueryString – we can build a query for our request, more information on this is available here. In our example we don’t need a query so we will leave this as a blank line
  • CanonicalHeaders – a carriage return separated list of the headers we are using in our request
  • SignedHeaders – a semi-colon separated list of the header keys we are including in our request
  • HexEncode(Hash(RequestPayload)) – the hash value as calculated earlier. As we used the ‘toString()’ method on this, it should already be in hexadecimal
We construct this request with the following code:
// create our canonical request
var canonicalReq =  myMethod + '\n' +
                    myPath + '\n' +
                    '\n' +
                    'host:' + url + '\n' +
                    'x-amz-content-sha256:' + hashedPayload + '\n' +
                    'x-amz-date:' + amzDate + '\n' +
                    '\n' +
                    'host;x-amz-content-sha256;x-amz-date' + '\n' +
                    hashedPayload;
This leaves us with the following as an example:
GET
/

host:my-bucket-name.s3.amazonaws.com
x-amz-content-sha256:e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
x-amz-date:20170213T045707Z

host;x-amz-content-sha256;x-amz-date
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
Note the blank CanonicalQueryString line here, as we are not using that functionality, and the blank line after the canonical headers; these are required for the string to be accepted when we hash it.
So now we can hash this multi-line string:
// hash the canonical request
var canonicalReqHash = crypto.SHA256(canonicalReq).toString();
This becomes another long hashed value.
c75b55ba2d959baf99f2c4976c7a50c7cd79067a726c21024f4a981ae2a90b50

String to Sign

Now, similar to the Canonical Request above, we create a new multi-line string which is used to generate our authentication header. This time it is in the following format:
Algorithm
RequestDate
CredentialScope
HashedCanonicalRequest

These attributes are completed as:

  • Algorithm – for SHA256, which is what we always use with AWS, this should be set to ‘AWS4-HMAC-SHA256’
  • RequestDate – this is the date/time stamp in the ‘YYYYMMDDTHHmmssZ’ format, so we will use our stored ‘amzDate’ variable here
  • CredentialScope – this takes the format ‘date/region/service/aws4_request’. We have the date stored in this format already as ‘authDate’, so we can use that here; our region name can be found in this table, and the service name here is ‘s3’ (further details of other service namespaces can be found here)
  • HashedCanonicalRequest – this was calculated above

With this information we can form our string like this:

// form our String-to-Sign
var stringToSign =  'AWS4-HMAC-SHA256\n' +
                    amzDate + '\n' +
                    authDate+'/'+region+'/'+myService+'/aws4_request\n'+
                    canonicalReqHash;
This generates a string like this:
AWS4-HMAC-SHA256
20170213T051343Z
20170213/eu-west-1/s3/aws4_request
c75b55ba2d959baf99f2c4976c7a50c7cd79067a726c21024f4a981ae2a90b50

Signing Key

We need a signing key now; this embeds our secret key, along with some other bits in a hash which is used to sign our ‘String to sign’, giving us our final hashed value which we use in the authentication header. Luckily here AWS provide some sample JS code (amongst other languages) for creating this hash:
// this function gets the Signature Key, see AWS documentation for more details
function getSignatureKey(Crypto, key, dateStamp, regionName, serviceName) {
    var kDate = Crypto.HmacSHA256(dateStamp, "AWS4" + key);
    var kRegion = Crypto.HmacSHA256(regionName, kDate);
    var kService = Crypto.HmacSHA256(serviceName, kRegion);
    var kSigning = Crypto.HmacSHA256("aws4_request", kService);
    return kSigning;
}

This can be found here. So into this function we pass our secret access key, the authDate variable we calculated earlier, our region, and the service namespace.

// get our Signing Key
var signingKey = getSignatureKey(crypto, secret_key, authDate, region, myService);

This will again return a long hash value:

9afc364e2eb6ba46f000721975d32bc2042058f80b5a8fd69efe422e7be5090d

Authentication Key

Nearly there now! We need to take our String to Sign and our Signing Key, and hash the string to sign with the signing key, generating another hash which will be used in our request header. To do this we again use the CryptoJS library, with the order of the inputs being our string to hash (stringToSign), and then the key to hash it with (signingKey):
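// Sign our String-to-Sign with our Signing Key
var authKey = crypto.HmacSHA256(stringToSign, signingKey);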
This returns another hash:
31c6a42a9aec00390317f9c714f38efeba2498fa1996cecb9b4c714b39cbc05a90332f38ef

Creating our headers

Right, no more hashing needed now, we have everything we need. So next we construct our Authentication header value:
// Form our authorization header
var authString  = 'AWS4-HMAC-SHA256 ' +
                  'Credential='+
                  access_key+'/'+
                  authDate+'/'+
                  region+'/'+
                  myService+'/aws4_request,'+
                  'SignedHeaders=host;x-amz-content-sha256;x-amz-date,'+
                  'Signature='+authKey;
This is a single line, multi-part string consisting of the following parts:
  • Algorithm – for SHA256, which is what we always use with AWS, this should be set to ‘AWS4-HMAC-SHA256’
  • CredentialScope – as used in our String To Sign above
  • SignedHeaders – a semi-colon separated list of our signed headers
  • Signature – the authentication key we hand crafted above
When we place all these together, we end up with a string like this:
AWS4-HMAC-SHA256 Credential=A123EXAMPLEACCESSKEY/20170213/eu-west-1/s3/aws4_request,SignedHeaders=host;x-amz-content-sha256;x-amz-date,Signature=31c6a42a9aec00390317f9c714f38efeba2498fa1996cecb9b4c714b39cbc05a90332f38ef
Now we have everything we need to create our headers for our HTTP request:
// throw our headers together
var headers = {
  'Authorization' : authString,
  'Host' : url,
  'x-amz-date' : amzDate,
  'x-amz-content-sha256' : hashedPayload
};
Here we use a plain object for simplicity, with our various headers added as key/value pairs, ending up with the following:

Key                    Value
====================   =====
Authorization          AWS4-HMAC-SHA256 Credential=A123EXAMPLEACCESSKEY/20170213/eu-west-1/s3/aws4_request,SignedHeaders=host;x-amz-content-sha256;x-amz-date,Signature=31c6a42a9aec00390317f9c714f38efeba2498fa1996cecb9b4c714b39cbc05a90332f38ef
Host                   my-bucket-name.s3.amazonaws.com
x-amz-date             20170213T051343Z
x-amz-content-sha256   e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
These are now ready to use in our request.

Our request

Now we can send our request. I split this into a function to do the REST call:
// the REST API call using the Node.js 'https' module
function performRequest(endpoint, headers, data, success) {

  var dataString = data;

  var options = {
    host: endpoint,
    port: 443,
    path: '/',
    method: 'GET',
    headers: headers
  };

  var req = https.request(options, function(res) {
    res.setEncoding('utf-8');

    var responseString = '';

    res.on('data', function(data) {
      responseString += data;
    });

    res.on('end', function() {
      //console.log(responseString);
      success(responseString);
    });
  });

  req.write(dataString);
  req.end();
}
And the call to this function, which also processes the results, in the body:
// call our function
performRequest(url, headers, payload, function(response) {
  // parse the response from our function and write the results to the console
  xml.parseString(response, function (err, result) {
    console.log('\n=== \n'+'Bucket is named: ' + result['ListBucketResult']['Name']);
    console.log('=== \n'+'Contents: ');
    for (var i=0;i<result['ListBucketResult']['Contents'].length;i++) {
      console.log(
        '=== \n'+
        'Name: '          + result['ListBucketResult']['Contents'][i]['Key'][0]           + '\n' +
        'Last modified: ' + result['ListBucketResult']['Contents'][i]['LastModified'][0]  + '\n' +
        'Size (bytes): '  + result['ListBucketResult']['Contents'][i]['Size'][0]          + '\n' +
        'Storage Class: ' + result['ListBucketResult']['Contents'][i]['StorageClass'][0]
      );
    };
    console.log('=== \n');
  });
});
This essentially passes our headers with the payload to the URL we specified in our variables, and processes the resulting XML into some useful output.
This is what this returns as output:
Tims-MacBook-Pro:GetS3BucketContent tim$ node get_bucket_content.js

===
Bucket is named: virtualbrakeman
===
Contents:
===
Name: file_a_foo.txt
Last modified: 2017-02-05T11:19:36.000Z
Size (bytes): 10
Storage Class: STANDARD
===
Name: file_b_foo.txt
Last modified: 2017-02-05T11:19:36.000Z
Size (bytes): 10
Storage Class: STANDARD
===
Name: file_c_foo.txt
Last modified: 2017-02-05T11:19:36.000Z
Size (bytes): 10
Storage Class: STANDARD
===
Name: file_d_foo.txt
Last modified: 2017-02-05T11:19:36.000Z
Size (bytes): 10
Storage Class: STANDARD
===
Name: foobar
Last modified: 2017-02-04T22:01:38.000Z
Size (bytes): 0
Storage Class: STANDARD
===

Tims-MacBook-Pro:GetS3BucketContent tim$

Conclusion

This is a fairly trivial example of data to return, but the real point behind it was building the authentication code, which proved to be very laborious. Given the wide (and growing) variety of SDKs available, it seems overly complex to try and construct these requests in this way every time; I have played with the Python and JavaScript SDKs, and both take this roughly 150-line script and achieve the same result in around 20 lines.
Regardless, this was a good learning exercise for me, and it may come in useful for people trying to interact with the AWS API in ways which are not covered by the SDKs, or via other languages where an SDK is not available.
The final script is shown below, and is also available on my GitHub library here: https://github.com/railroadmanuk/awsrestauthentication
// declare our dependencies
var crypto = require('crypto-js');
var https = require('https');
var xml = require('xml2js');

main();

// split the code into a main function
function main() {
  // this serviceList is unused right now, but may be used in future
  const serviceList = [
    'dynamodb',
    'ec2',
    'sqs',
    'sns',
    's3'
  ];

  // our variables
  var access_key = 'ACCESS_KEY_VALUE';
  var secret_key = 'SECRET_KEY_VALUE';
  var region = 'eu-west-1';
  var url = 'my-bucket-name.s3.amazonaws.com';
  var myService = 's3';
  var myMethod = 'GET';
  var myPath = '/';

  // get the various date formats needed to form our request
  var amzDate = getAmzDate(new Date().toISOString());
  var authDate = amzDate.split("T")[0];

  // we have an empty payload here because it is a GET request
  var payload = '';
  // get the SHA256 hash value for our payload
  var hashedPayload = crypto.SHA256(payload).toString();

  // create our canonical request
  var canonicalReq =  myMethod + '\n' +
                      myPath + '\n' +
                      '\n' +
                      'host:' + url + '\n' +
                      'x-amz-content-sha256:' + hashedPayload + '\n' +
                      'x-amz-date:' + amzDate + '\n' +
                      '\n' +
                      'host;x-amz-content-sha256;x-amz-date' + '\n' +
                      hashedPayload;

  // hash the canonical request
  var canonicalReqHash = crypto.SHA256(canonicalReq).toString();

  // form our String-to-Sign
  var stringToSign =  'AWS4-HMAC-SHA256\n' +
                      amzDate + '\n' +
                      authDate+'/'+region+'/'+myService+'/aws4_request\n'+
                      canonicalReqHash;

  // get our Signing Key
  var signingKey = getSignatureKey(crypto, secret_key, authDate, region, myService);

  // Sign our String-to-Sign with our Signing Key
  var authKey = crypto.HmacSHA256(stringToSign, signingKey);

  // Form our authorization header
  var authString  = 'AWS4-HMAC-SHA256 ' +
                    'Credential='+
                    access_key+'/'+
                    authDate+'/'+
                    region+'/'+
                    myService+'/aws4_request,'+
                    'SignedHeaders=host;x-amz-content-sha256;x-amz-date,'+
                    'Signature='+authKey;

  // throw our headers together
  var headers = {
    'Authorization' : authString,
    'Host' : url,
    'x-amz-date' : amzDate,
    'x-amz-content-sha256' : hashedPayload
  };

  // call our function
  performRequest(url, headers, payload, function(response) {
    // parse the response from our function and write the results to the console
    xml.parseString(response, function (err, result) {
      console.log('\n=== \n'+'Bucket is named: ' + result['ListBucketResult']['Name']);
      console.log('=== \n'+'Contents: ');
      for (var i=0;i<result['ListBucketResult']['Contents'].length;i++) {
        console.log(
          '=== \n'+
          'Name: '          + result['ListBucketResult']['Contents'][i]['Key'][0]           + '\n' +
          'Last modified: ' + result['ListBucketResult']['Contents'][i]['LastModified'][0]  + '\n' +
          'Size (bytes): '  + result['ListBucketResult']['Contents'][i]['Size'][0]          + '\n' +
          'Storage Class: ' + result['ListBucketResult']['Contents'][i]['StorageClass'][0]
        );
      };
      console.log('=== \n');
    });
  });
};

// this function gets the Signature Key, see AWS documentation for more details, this was taken from the AWS samples site
function getSignatureKey(Crypto, key, dateStamp, regionName, serviceName) {
    var kDate = Crypto.HmacSHA256(dateStamp, "AWS4" + key);
    var kRegion = Crypto.HmacSHA256(regionName, kDate);
    var kService = Crypto.HmacSHA256(serviceName, kRegion);
    var kSigning = Crypto.HmacSHA256("aws4_request", kService);
    return kSigning;
}

// this function converts the generic JS ISO8601 date format to the specific format the AWS API wants
function getAmzDate(dateStr) {
  var chars = [":","-"];
  for (var i=0;i<chars.length;i++) {
    while (dateStr.indexOf(chars[i]) != -1) {
      dateStr = dateStr.replace(chars[i],"");
    }
  }
  dateStr = dateStr.split(".")[0] + "Z";
  return dateStr;
}

// the REST API call using the Node.js 'https' module
function performRequest(endpoint, headers, data, success) {

  var dataString = data;

  var options = {
    host: endpoint,
    port: 443,
    path: '/',
    method: 'GET',
    headers: headers
  };

  var req = https.request(options, function(res) {
    res.setEncoding('utf-8');

    var responseString = '';

    res.on('data', function(data) {
      responseString += data;
    });

    res.on('end', function() {
      //console.log(responseString);
      success(responseString);
    });
  });

  req.write(dataString);
  req.end();
}