PowerCLI Core on Docker with macOS

Back in November, shortly after Microsoft open-sourced PowerShell and released the cross-platform PowerShell Core, VMware released their cross-platform version of PowerCLI: PowerCLI Core. It was made available on GitHub for general consumption, and can be installed on top of PowerShell Core on a macOS/OS X or Linux machine.

I have loads of junk installed on my MacBook, including the very first public release of PowerShell Core, but keeping it all up to date, and knowing what I have installed, can be a pain. So my preference for running PowerShell Core or PowerCLI Core at the moment is through a Docker container on my laptop; this keeps the clutter down and makes upgrading easy.

In this post I’m going to show how to use the Docker container with an external editor, and be able to run all your scripts from within the container, removing the need to install PowerShell Core on your Mac.

Pre-requisites

We will need to install a couple of bits of software before we begin:

  • Docker CE for Mac
  • An editor of your choice for working on your scripts

Beyond that, we will get everything we need below.

Getting the Docker image

VMware have made the PowerCLI Core Docker image available on Docker Hub (here), which is the easiest place to pull container images down to your desktop, and the general go-to place for public container images as of today. Once Docker CE is installed, the image can be downloaded with the command below:

Icarus:~ root$ docker pull vmware/powerclicore:latest
latest: Pulling from vmware/powerclicore
93b3dcee11d6: Already exists
d6641ceee635: Pull complete
62bbcce52faa: Pull complete
e86aa7a78685: Pull complete
db20fbdf24c0: Pull complete
37379feb8f29: Pull complete
8abb449d1e29: Pull complete
a9cd6d9452e7: Pull complete
50886ff01a73: Pull complete
74af7eaa49c1: Pull complete
878c611eaf2c: Pull complete
39b1b7978191: Pull complete
98e632013bea: Pull complete
4362432cb5ea: Pull complete
19f5f892ae79: Pull complete
29b0b093b159: Pull complete
913ad6409b89: Pull complete
ad5db0a55033: Pull complete
Digest: sha256:d33ac26c0c704a7aa48f5c7c66cb76ec3959beda2962ccd6a41a96351055b5d0
Status: Downloaded newer image for vmware/powerclicore:latest
Icarus:~ root$

This may take a couple of minutes, but the image should now be present on the local machine, and ready to fire up.

Getting the path for our scripts folder

Before we launch our container we need our scripts path; this could be a folder anywhere on your computer. In my case it is:

/Users/tim/Dropbox/Coding Projects/PowerShell/VMware

Launching our container

The idea here is to mount a folder which is accessible from both inside and outside our container, so we can edit the scripts with our full-fat editor, and run them from inside the container.

To launch the container we use the following command; the switches used are explained below:

docker run --name PowerCLI --detach -it --rm --volume '/Users/tim/Dropbox/Coding Projects/PowerShell/VMware':/usr/scripts vmware/powerclicore:latest
  • --name – this sets the container name, which will make it easier when we want to attach to the container
  • --detach – this starts the container without attaching us to it immediately, so if there is anything else we need to do before connecting we can
  • -it – this creates an interactive TTY connection, giving us the ability to interact with the console of the container
  • --rm – this will delete the container when we exit it, keeping the processes on our machine tidy
  • --volume … – this maps our scripts folder to /usr/scripts, so we can consume our scripts once in the container
  • vmware/powerclicore:latest – the name of the image to launch the container from

Now when we run this we will see the following output:

Icarus:~ root$ docker run --name PowerCLI --detach -it --rm --volume '/Users/tim/Dropbox/Coding Projects/PowerShell/VMware':/usr/scripts vmware/powerclicore:latest
c48ff51e3f824177da8e3b0fd0210e5864b01fea94ae5f5871b3654b4f5bcd35
Icarus:~ root$

This is the ID for our container; we won't need it, as we will attach using the friendly name we gave the container anyway. When you are ready to attach, use the following command:

Icarus:~ root$ docker attach PowerCLI

You may need to press return a couple of times, but you should now have a shell that looks like this:

PS /powershell>

Now we are in the container, and should be able to access our scripts by changing directory to /usr/scripts.

If we run ‘Get-Module -ListAvailable’ we can see the modules installed in this Docker image:

PS /powershell> Get-Module -ListAvailable                                                                               

    Directory: /root/.local/share/powershell/Modules

ModuleType Version    Name                                ExportedCommands
---------- -------    ----                                ----------------
Binary     1.21       PowerCLI.Vds
Binary     1.21       PowerCLI.ViCore                     HookGetViewAutoCompleter
Script     2.1.0      PowerNSX                            {Add-XmlElement, Format-Xml, Invoke-NsxRestMethod, Invoke-...
Script     2.0.0      PowervRA                            {Add-vRAPrincipalToTenantRole, Add-vRAReservationNetwork, ...

    Directory: /opt/microsoft/powershell/6.0.0-alpha.14/Modules

ModuleType Version    Name                                ExportedCommands
---------- -------    ----                                ----------------
Manifest   1.0.1.0    Microsoft.PowerShell.Archive        {Compress-Archive, Expand-Archive}
Manifest   3.0.0.0    Microsoft.PowerShell.Host           {Start-Transcript, Stop-Transcript}
Manifest   3.1.0.0    Microsoft.PowerShell.Management     {Add-Content, Clear-Content, Clear-ItemProperty, Join-Path...
Manifest   3.0.0.0    Microsoft.PowerShell.Security       {Get-Credential, Get-ExecutionPolicy, Set-ExecutionPolicy,...
Manifest   3.1.0.0    Microsoft.PowerShell.Utility        {Format-List, Format-Custom, Format-Table, Format-Wide...}
Script     1.1.2.0    PackageManagement                   {Find-Package, Get-Package, Get-PackageProvider, Get-Packa...
Script     3.3.9      Pester                              {Describe, Context, It, Should...}
Script     1.1.2.0    PowerShellGet                       {Install-Module, Find-Module, Save-Module, Update-Module...}
Script     0.0        PSDesiredStateConfiguration         {IsHiddenResource, StrongConnect, Write-MetaConfigFile, Ge...
Script     1.2        PSReadLine                          {Get-PSReadlineKeyHandler, Set-PSReadlineKeyHandler, Remov...

PS /powershell>

So we have the PowerCLI Core module, the Distributed vSwitch module, as well as PowervRA and PowerNSX. We should be able to run our scripts from the /usr/scripts folder, or just run stuff from scratch.

The great thing is we can now edit our scripts in the folder mapped to /usr/scripts using our editor, the changes are available live to test from inside the container, and we can even write output to this folder from within the container.
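
As a quick example of the workflow, a session inside the container might look something like this (a rough sketch only; the vCenter address, user account and script name here are placeholders rather than anything from my lab):

PS /powershell> Connect-VIServer -Server vcsa.lab.local -User administrator@vsphere.local
PS /powershell> Set-Location /usr/scripts
PS /usr/scripts> ./Get-VMInventory.ps1
PS /usr/scripts> Get-VM | Select-Object Name, PowerState | Export-Csv ./vm-inventory.csv

Anything written to /usr/scripts, like the CSV above, lands in the mapped folder and is immediately visible on the Mac.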

If you want to detach from the container without killing it, use the Ctrl+P, Ctrl+Q key sequence; you can then reattach with ‘docker attach PowerCLI’. When you are done with the container, type ‘exit’ and it will stop and be removed.

Conclusion

Though basic, this can really help with workflow when writing and testing scripts on macOS, while enabling you to simply keep up to date with the latest images, and not fill your Mac with more junk.


Dockerising a GoLang Application

I was recently messing about with GoLang to get my head around it, and came up with a small script to basically output a timestamp and a random name, based on the Docker container random name generator (here). The plan was to use this to populate a database with random junk for when I’m testing software against database servers.

I got the code working anyway; you can see it on my GitHub site here. In this article I'm going to focus on the difference between building a container on top of the official GoLang image and building one from scratch around a compiled binary, and the benefits of the scratch approach. This is to my mind a very powerful way of delivering Go applications so that they are both tiny and self-contained, and it sets Go apart from other popular languages, which end up dragging their dependencies around with them.
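
The real code is in the repository linked above; purely to illustrate the shape of the program, a minimal stand-in (not the actual generator, and with made-up name lists) could be as simple as:

package main

import (
    "fmt"
    "math/rand"
    "time"
)

func main() {
    // seed the random number generator so each run produces a different name
    rand.Seed(time.Now().UnixNano())

    adjectives := []string{"adoring", "brave", "clever", "dreamy"}
    surnames := []string{"turing", "hopper", "lovelace", "ritchie"}

    // output a timestamp and a Docker-style "adjective_surname" pair
    name := adjectives[rand.Intn(len(adjectives))] + "_" + surnames[rand.Intn(len(surnames))]
    fmt.Println(time.Now().Format(time.RFC3339), name)
}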

The steps below will go through building a container from a Go file, first using the GoLang image from Docker Hub, and then from scratch, and we will compare the resulting sizes of the files.

The steps below were all carried out on macOS 10.12, Go 1.8, and Docker 17.03.

Image based on golang official Docker image

For this stage we should have our Go file in the current folder, and we need to create the following Dockerfile:

FROM golang:latest
RUN mkdir /app
ADD <path_to_go_file>.go /app/
WORKDIR /app
RUN go build -o <output_file> .
CMD ["/app/<output_file>"]

We can then build this into an image:

docker build -t <your_name>/<image_name> -f Dockerfile .

We can then use ‘docker images’ to show the image we created:

REPOSITORY               TAG                 IMAGE ID            CREATED             SIZE
timhynes/name_gen_full   latest              ffc0ef4bac73        3 seconds ago       698 MB

Image built from scratch with Go binary

To build the image from scratch, we first need to build our Go application into a binary. We do this with the below command:

CGO_ENABLED=0 GOOS=linux go build -a -installsuffix cgo -o <output_file> <path_to_go_file> 

The ‘GOOS’ variable defines the target OS for the binary; in our case we choose Linux so that it will run in our Docker container. Setting ‘CGO_ENABLED=0’ prevents linking to external C libraries (more info here), and means that the binary is fully self-contained.

Once this is run, the binary will be created in the location specified. This could then be ported around Linux systems and run as a compiled application, but we are going to build this into a Docker image instead.

To build the Docker image, as shown earlier, we need a Dockerfile; the one I used for this stage is shown below:

FROM scratch
ADD <output_file> /
CMD ["<output_file>"]

This should be saved to the same folder as the binary as ‘Dockerfile’. We can now run the ‘docker build’ command to create our image:

docker build -t <your_name>/<image_name> -f Dockerfile .

At this point we should have an image in our local repository; for my application this comes in at under 2 MB:

REPOSITORY            TAG                 IMAGE ID            CREATED             SIZE
timhynes/name_gen     latest              436a832943c8        12 seconds ago      1.77 MB

Conclusion

This shows that, even though the end goal in both cases is to containerise our application, compiling a GoLang binary and using it as the sole contents of our image saves a huge amount of space. In the case of the example above, the resultant image was over 300 times smaller when using the binary alone.

I guess the takeaway is that not all containers are made equal, and thinking about the way we package our Docker application can make a large difference to our ability to deliver and run it.

vSphere Automation SDKs

This week VMware open sourced their vSphere Automation SDKs, covering the REST APIs and Python. The REST API was released with vSphere 6.0, while the Python SDK has been around for nearly four years now. I'm going to summarise the contents of this release below, and where it can help us make more of our vSphere environments.

REST API

The vSphere REST API has been growing since the release of vSphere 6 nearly two years ago, and brings access to the following areas of vSphere with its current release:

  • Session management
  • Tagging
  • Content Library
  • Virtual Machines
  • vCenter Server Appliance management

These cover mainly new features from vSphere 6.0 (formerly known as the vCloud Suite SDK), plus some of the new bits put together to modernise API access in vSphere 6.5. The Virtual Machine management is particularly useful: it lets you start using REST-based methods to perform operations and report on VMs in your environment, which is very handy for people looking to write quick integrations with things like vRealize Orchestrator, where the built-in plugins do not do what you want.

The new material, available on GitHub, contains two main components:

Postman Collection

[Screenshot: the Postman REST client]

Postman is a REST client used to explore APIs, providing a nice graphical display of the request-response type methods used for REST. This is a great way to get your head round what is happening with requests, and helps to build up an idea of what is going on with the API.

Pre-built packs of requests can be gathered together in Postman ‘Collections’; these can then be distributed (in JSON format) and loaded into another instance of Postman. This can be crucially important in documenting the functionality of APIs, especially when the documentation is lacking.

There are some instructions on how to set this up here; if you are new to REST APIs, or just want a quick way to have a play with the new vSphere REST APIs, you could do far worse than starting here.

Node.js Sample Pack

Node.js has taken over the world of server-side web programming, and thanks to the simple syntax of JavaScript it is easy to pick up and get started with. This pack (available here) has some samples of Node.js code to interact with the REST API. It is a good place to start to see how web requests and responses are dealt with in Node, and how we can programmatically carry out administrative tasks.
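
To give a flavour of the sort of thing these samples do, the sketch below is my own minimal illustration (not taken from the sample pack) of logging in to the vSphere REST API and listing VMs; the vCenter address and credentials are placeholders, and certificate verification is switched off on the assumption of a lab vCenter with a self-signed certificate:

// minimal sketch: authenticate to the vSphere REST API, then list VMs with the session token
var https = require('https');

var vcenter = 'vcsa.lab.local';   // placeholder vCenter address
var auth = Buffer.from('administrator@vsphere.local:VMware1!').toString('base64');

// POST /rest/com/vmware/cis/session returns a session token in the 'value' field
var loginReq = https.request({
  host: vcenter, path: '/rest/com/vmware/cis/session', method: 'POST',
  headers: { 'Authorization': 'Basic ' + auth },
  rejectUnauthorized: false       // lab vCenter, self-signed certificate
}, function (res) {
  var body = '';
  res.on('data', function (d) { body += d; });
  res.on('end', function () {
    var token = JSON.parse(body).value;
    // reuse the token via the vmware-api-session-id header on subsequent calls
    https.get({
      host: vcenter, path: '/rest/vcenter/vm',
      headers: { 'vmware-api-session-id': token },
      rejectUnauthorized: false
    }, function (vmRes) {
      var vms = '';
      vmRes.on('data', function (d) { vms += d; });
      vmRes.on('end', function () { console.log(vms); });
    });
  });
});
loginReq.end();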

These could be integrated into a web based portal to do the requests directly, or I can see these being used in the future as part of a serverless administration platform, using something like AWS Lambda along with a monitoring platform to automate the administration of a vSphere environment.

Python SDK

Python has been an incredibly popular language for automation for a number of years. Its very low barrier to getting started makes it ideal to pick up and learn, with a wealth of possibilities for building on solid simple foundations to make highly complex software solutions. VMware released their ‘pyvmomi’ Python SDK back in 2013, and it has received consistent updates since then. While not as popular, or as promoted as their PowerCLI PowerShell module, it has nevertheless had strong usage and support from the community.

The release on offer as part of the vSphere Automation SDKs consists of scripts to spin up a demo environment for developing with the Python SDK, as well as a number of sample scripts demonstrating the functionality of the new APIs released in vSphere 6.0 and 6.5.

The continued growth in popularity of Python, along with leading automation toolsets like Ansible being built on Python, means that it is a great platform to push this kind of development and publicity on. As with Node.js, serverless platforms widely support Python, so this could be integrated with Lambda, Fission, or other FaaS platforms in the future.

Conclusion

It’s great to see VMware really getting behind developing and pushing their automation toolkits in the open; they are, to my mind, a leader in the industry in terms of making their products programmable, and I hope they continue at this pace and in this vein. The work shown in this release will help make it easier for people new to automation to get involved and start reaping the benefits that it can bring, and the possibilities for combining these vSphere SDKs with serverless administration will be an interesting area to watch.

AWS Certified Developer and SysOps Administrator Associate Exam Experience

So since my post on the AWS Certified Solutions Architect Associate exam (here), I’ve now completed the Certified Developer Associate, and Certified SysOps Administrator Associate exams.

I found these a lot easier to study for after doing the first exam, as I had an idea of the level the exams were pitched at, and the types of questions which would be on the exam. I didn’t do a post immediately after the Developer exam, because the study I did was largely the same as for the Solutions Architect exam, and this was the same for the SysOps Administrator exam.

Again I used mainly A Cloud Guru courses (purchased cheaply from Udemy), and the Linux Academy courses, accessed for free through Microsoft’s Dev Essentials program. As for the Solutions Architect exam, these helped to give me the basics for the exams, and focus on those areas I was lacking in.

Another good resource is the A Cloud Guru discussions forums (here), where kind people share their experiences about the exam they had, and point out specific areas which should be investigated.

I got 96% on the Developer exam, and 92% on the SysOps Administrator exam, so the methods I have been using are obviously working for me. Next I’m going to move on to the Professional certs; most likely the Solutions Architect exam first, then the DevOps Engineer exam, as that maps more to my day job.

Good luck to anyone looking to do the AWS exams, I have got through all the entry level ones in a little over 2 months from having no experience, so they are not too scary. It’s been an enjoyable ride, and I am pretty pumped to learn more and get the Pro exams passed.

AWS REST API Authentication Using Node.js

I’ve been learning as much as I can on Amazon Web Services over the last couple of months; the looming shadow of it over traditional IT finally got too much, and I figured it was time to make the leap. Overall it’s been a great experience, and the biggest takeaway I’ve probably had is how every service, and the way in which we consume them, are application-centric.
Every service is fully API first, with the AWS Management Console basically acting as a front end for the API calls made to the vast multitude of services. I’ve done a fair amount of work with REST APIs over the last 18 months, and it’s always good to fire up Postman (if you don’t know what this is, there is a post here I did about REST APIs, and the use of Postman), and throw a few API calls at a new technology to see how it works.
Now, while AWS services are all available via REST APIs, there are a tonne of tools available for both administrators and developers which abstract away the nitty gritty. We have:
  • AWS CLI – a CLI based tool for Windows/Linux/OSX (available here)
  • AWS Tools for Windows PowerShell – the PowerShell module for consuming AWS services (available here)
  • SDKs (Software Development Kits) for the following (all available here):
    • Android
    • Browsers (basically a JavaScript SDK you can build web services around)
    • iOS
    • Java
    • .NET
    • Node.js
    • PHP
    • Python
    • Ruby
    • GoLang
    • C++
    • AWS IoT
    • AWS Mobile
Combined, these provide a wide variety of ways to use pre-built solutions to speak to AWS-based resources, and there should be something that any developer or admin can use to introduce some automation or programmability into their work. I would recommend using one of these if at all possible, to abstract away the heavy lifting of working with a very broad and deep API.
I wanted to get stuck in from a REST API side though, which basically means building things from the ground up. This turned out to take a fair amount of time, but I learned a heck of a lot about the authentication and authorisation process for AWS, and how this helps to prevent unauthorised access.
The full authentication process is described in the AWS Documentation available here. There are pages and pages describing the V4 authentication process (the current recommended version), and this gets pretty complicated. I’m going to try and break it down here, showing the bits of code used to create each element; this should hopefully make it a bit clearer.
One post I found really useful on this was by Lukasz Adamczak (@lukasz_adamczak), on how to do the authentication with Curl, which I used as the basis for some of what I did below. I couldn’t find anything where someone was doing this task via the REST API in JavaScript.

Pre-requisites

The following variables need to be set in the script before we start:
// our variables
var access_key = 'ACCESS_KEY_VALUE'
var secret_key = 'SECRET_KEY_VALUE'
var region = 'eu-west-1';
var url = 'my-bucket-name.s3.amazonaws.com';
var myService = 's3';
var myMethod = 'GET';
var myPath = '/';
In addition to this, we have these package dependencies:
// declare our dependencies
var crypto = require('crypto-js');
var https = require('https');
var xml = require('xml2js');
The https module is built into Node (I was using v6.9.5); crypto-js and xml2js are NPM packages which need to be installed with NPM.

Amazon Date Format

I started with the date format used by AWS in authentication; this is based on the ISO 8601 format, but with the punctuation and milliseconds removed. I created the function below, and use it to create the two variables shown:
// get the various date formats needed to form our request
var amzDate = getAmzDate(new Date().toISOString());
var authDate = amzDate.split("T")[0];

// this function converts the generic JS ISO8601 date format to the specific format the AWS API wants
function getAmzDate(dateStr) {
  var chars = [":","-"];
  for (var i=0;i<chars.length;i++) {
    while (dateStr.indexOf(chars[i]) != -1) {
      dateStr = dateStr.replace(chars[i],"");
    }
  }
  dateStr = dateStr.split(".")[0] + "Z";
  return dateStr;
}
We’ll go into this later, but the reason there are two variables for the date (amzDate, authDate) is that in generating the headers for our REST call we will need both formats at different times. One is in the ‘YYYYMMDDTHHmmssZ’ format, and one is in the ‘YYYYMMDD’ format.

Our payload

The example used in this script is to use a blank payload, for which we calculate the SHA256 hash. This is obviously always the same when calculated against a blank string (e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855, if you were interested :p), but I included the hashing of this in the script so the logic is there later if we wanted to do different payloads.
// we have an empty payload here because it is a GET request
var payload = '';
// get the SHA256 hash value for our payload
var hashedPayload = crypto.SHA256(payload).toString();
This hashed payload is used a bunch in the final request, including in the ‘x-amz-content-sha256’ HTTP header to validate the expected payload.

Canonical Request

This is where things got a bit confusing for me; we need to build the canonical request for our message (in AWS’s special format), and work out what the SHA256 hash of it is. First we need to know the formatting for the canonical request. Ultimately this is a multi-line string, consisting of the following attributes:
HTTPRequestMethod
CanonicalURI
CanonicalQueryString
CanonicalHeaders
SignedHeaders
HexEncode(Hash(RequestPayload))
These attributes are described as:
  • HTTPRequestMethod – the HTTP method being used, could be GET, POST, etc. In our example this will be GET
  • CanonicalURI – the relative URI for the resource we are accessing. In our example we access the root namespace of our bucket, so this is set to “/”
  • CanonicalQueryString – we can build a query for our request, more information on this is available here. In our example we don’t need a query so we will leave this as a blank line
  • CanonicalHeaders – a carriage return separated list of the headers we are using in our request
  • SignedHeaders – a semi-colon separated list of the header keys we are including in our request
  • HexEncode(Hash(RequestPayload)) – the hash value as calculated earlier. As we used the ‘toString()’ method on this, it should already be in hexadecimal
We construct this request with the following code:
// create our canonical request
var canonicalReq =  myMethod + '\n' +
                    myPath + '\n' +
                    '\n' +
                    'host:' + url + '\n' +
                    'x-amz-content-sha256:' + hashedPayload + '\n' +
                    'x-amz-date:' + amzDate + '\n' +
                    '\n' +
                    'host;x-amz-content-sha256;x-amz-date' + '\n' +
                    hashedPayload;
This leaves us with the following as an example:
GET
/

host:my-bucket-name.s3.amazonaws.com
x-amz-content-sha256:e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
x-amz-date:20170213T045707Z

host;x-amz-content-sha256;x-amz-date
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
Note the blank CanonicalQueryString line here, as we are not using that functionality, and the blank line after the CanonicalHeaders; these are required for the string to be accepted when we hash it.
So now we can hash this multi-line string:
// hash the canonical request
var canonicalReqHash = crypto.SHA256(canonicalReq).toString();
This becomes another long hashed value.
c75b55ba2d959baf99f2c4976c7a50c7cd79067a726c21024f4a981ae2a90b50

String to Sign

Now, similar to the Canonical Request above, we create a new multi-line string which is used to generate our authentication header. This time it is in the following format:
Algorithm
RequestDate
CredentialScope
HashedCanonicalRequest

These attributes are completed as:

  • Algorithm – for SHA256, which is what we always use with AWS, this should be set to ‘AWS4-HMAC-SHA256’
  • RequestDate – this is the date/time stamp in the ‘YYYYMMDDTHHmmssZ’ format, so we will use our stored ‘amzDate’ variable here
  • CredentialScope – this takes the format '<date>/<region>/<service>/aws4_request'. We have the date stored in the right format already as ‘authDate’, so we can use that here; our region name can be found in this table, the service name here is ‘s3’, and further details of other namespaces can be found here
  • HashedCanonicalRequest – this was calculated above

With this information we can form our string like this:

// form our String-to-Sign
var stringToSign =  'AWS4-HMAC-SHA256\n' +
                    amzDate + '\n' +
                    authDate+'/'+region+'/'+myService+'/aws4_request\n'+
                    canonicalReqHash;
This generates a string like this:
AWS4-HMAC-SHA256
20170213T051343Z
20170213/eu-west-1/s3/aws4_request
c75b55ba2d959baf99f2c4976c7a50c7cd79067a726c21024f4a981ae2a90b50

Signing Key

We need a signing key now; this embeds our secret key, along with some other bits in a hash which is used to sign our ‘String to sign’, giving us our final hashed value which we use in the authentication header. Luckily here AWS provide some sample JS code (amongst other languages) for creating this hash:
// this function gets the Signature Key, see AWS documentation for more details
function getSignatureKey(Crypto, key, dateStamp, regionName, serviceName) {
    var kDate = Crypto.HmacSHA256(dateStamp, "AWS4" + key);
    var kRegion = Crypto.HmacSHA256(regionName, kDate);
    var kService = Crypto.HmacSHA256(serviceName, kRegion);
    var kSigning = Crypto.HmacSHA256("aws4_request", kService);
    return kSigning;
}

This can be found here. So into this function we pass our secret access key, the authDate variable we calculated earlier, our region, and the service namespace.

// get our Signing Key
var signingKey = getSignatureKey(crypto, secret_key, authDate, region, myService);

This will again return a long hash value:

9afc364e2eb6ba46f000721975d32bc2042058f80b5a8fd69efe422e7be5090d

Authentication Key

Nearly there now! So we need to take our String to Sign, and our Signing Key, and hash the string to sign with the signing key, to generate another hash which will be used in our request header. To do this we again use the CryptoJS library, with the order of the inputs being our string to hash (stringToSign), and then the key to hash it with (signingKey).
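In code this is a single line, the same one that appears in the full script at the end of this post:
// Sign our String-to-Sign with our Signing Key
var authKey = crypto.HmacSHA256(stringToSign, signingKey);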
This returns another hash:
31c6a42a9aec00390317f9c714f38efeba2498fa1996cecb9b4c714b39cbc05a90332f38ef

Creating our headers

Right, no more hashing needed now; we have everything we need. Next we construct our Authorization header value:
// Form our authorization header
var authString  = 'AWS4-HMAC-SHA256 ' +
                  'Credential='+
                  access_key+'/'+
                  authDate+'/'+
                  region+'/'+
                  myService+'/aws4_request,'+
                  'SignedHeaders=host;x-amz-content-sha256;x-amz-date,'+
                  'Signature='+authKey;
This is a single line, multi-part string consisting of the following parts:
  • Algorithm – for SHA256, which is what we always use with AWS, this should be set to ‘AWS4-HMAC-SHA256’
  • CredentialScope – as used in our String To Sign above
  • SignedHeaders – a semi-colon separated list of our signed headers
  • Signature – the authentication key we hand crafted above
When we place all these together, we end up with a string like this:
AWS4-HMAC-SHA256 Credential=A123EXAMPLEACCESSKEY/20170213/eu-west-1/s3/aws4_request,SignedHeaders=host;x-amz-content-sha256;x-amz-date,Signature=31c6a42a9aec00390317f9c714f38efeba2498fa1996cecb9b4c714b39cbc05a90332f38ef
Now we have everything we need to create our headers for our HTTP request:
// throw our headers together
headers = {
  'Authorization' : authString,
  'Host' : url,
  'x-amz-date' : amzDate,
  'x-amz-content-sha256' : hashedPayload
};
Here we build a plain object of headers for simplicity, with each header name as a key, ending up with the following:

Key                     Value
Authorization           AWS4-HMAC-SHA256 Credential=A123EXAMPLEACCESSKEY/20170213/eu-west-1/s3/aws4_request,SignedHeaders=host;x-amz-content-sha256;x-amz-date,Signature=31c6a42a9aec00390317f9c714f38efeba2498fa1996cecb9b4c714b39cbc05a90332f38ef
Host                    my-bucket-name.s3.amazonaws.com
x-amz-date              20170213T051343Z
x-amz-content-sha256    e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
These are now ready to use in our request.

Our request

Now we can send our request. I split this into a function to do the REST call:
// the REST API call using the Node.js 'https' module
function performRequest(endpoint, headers, data, success) {

  var dataString = data;

  var options = {
    host: endpoint,
    port: 443,
    path: '/',
    method: 'GET',
    headers: headers
  };

  var req = https.request(options, function(res) {
    res.setEncoding('utf-8');

    var responseString = '';

    res.on('data', function(data) {
      responseString += data;
    });

    res.on('end', function() {
      //console.log(responseString);
      success(responseString);
    });
  });

  req.write(dataString);
  req.end();
}
And the call to this function, which also processes the results, in the body:
// call our function
performRequest(url, headers, payload, function(response) {
  // parse the response from our function and write the results to the console
  xml.parseString(response, function (err, result) {
    console.log('\n=== \n'+'Bucket is named: ' + result['ListBucketResult']['Name']);
    console.log('=== \n'+'Contents: ');
    for (i=0;i<result['ListBucketResult']['Contents'].length;i++) {
      console.log(
        '=== \n'+
        'Name: '          + result['ListBucketResult']['Contents'][i]['Key'][0]           + '\n' +
        'Last modified: ' + result['ListBucketResult']['Contents'][i]['LastModified'][0]  + '\n' +
        'Size (bytes): '  + result['ListBucketResult']['Contents'][i]['Size'][0]          + '\n' +
        'Storage Class: ' + result['ListBucketResult']['Contents'][i]['StorageClass'][0]
      );
    };
    console.log('=== \n');
  });
});
This essentially passes our headers with the payload to the URL we specified in our variables, and processes the resulting XML into some useful output.
This is what this returns as output:
Tims-MacBook-Pro:GetS3BucketContent tim$ node get_bucket_content.js

===
Bucket is named: virtualbrakeman
===
Contents:
===
Name: file_a_foo.txt
Last modified: 2017-02-05T11:19:36.000Z
Size (bytes): 10
Storage Class: STANDARD
===
Name: file_b_foo.txt
Last modified: 2017-02-05T11:19:36.000Z
Size (bytes): 10
Storage Class: STANDARD
===
Name: file_c_foo.txt
Last modified: 2017-02-05T11:19:36.000Z
Size (bytes): 10
Storage Class: STANDARD
===
Name: file_d_foo.txt
Last modified: 2017-02-05T11:19:36.000Z
Size (bytes): 10
Storage Class: STANDARD
===
Name: foobar
Last modified: 2017-02-04T22:01:38.000Z
Size (bytes): 0
Storage Class: STANDARD
===

Tims-MacBook-Pro:GetS3BucketContent tim$

Conclusion

This is a fairly trivial example of data to return, but the real point behind this was building the authentication code, which proved to be very laborious. Given the wide (and growing) variety of SDKs available, it seems overly complex to try and construct these requests in this way every time. I have played with the Python and JavaScript SDKs, and both take this roughly 150-line script and achieve the same result in around 20 lines.
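For comparison, a rough sketch of the same bucket listing using the official JavaScript SDK (assuming the aws-sdk package is installed and credentials are configured in the environment so the SDK can sign requests itself; the bucket name is a placeholder) looks something like this:
// the same bucket listing via the official aws-sdk package, which handles request signing for us
var AWS = require('aws-sdk');
var s3 = new AWS.S3({ region: 'eu-west-1' });

s3.listObjects({ Bucket: 'my-bucket-name' }, function (err, data) {
  if (err) { return console.log(err); }
  data.Contents.forEach(function (obj) {
    console.log(obj.Key, obj.LastModified, obj.Size, obj.StorageClass);
  });
});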
Regardless, this was a good learning exercise for me, and it may come in useful for people trying to interact with the AWS API in ways which are not covered by the SDKs, or via other languages where an SDK is not available.
The final script is shown below, and is also available on my GitHub library here: https://github.com/railroadmanuk/awsrestauthentication
// declare our dependencies
var crypto = require('crypto-js');
var https = require('https');
var xml = require('xml2js');

main();

// split the code into a main function
function main() {
  // this serviceList is unused right now, but may be used in future
  const serviceList = [
    'dynamodb',
    'ec2',
    'sqs',
    'sns',
    's3'
  ];

  // our variables
  var access_key = 'ACCESS_KEY_VALUE';
  var secret_key = 'SECRET_KEY_VALUE';
  var region = 'eu-west-1';
  var url = 'my-bucket-name.s3.amazonaws.com';
  var myService = 's3';
  var myMethod = 'GET';
  var myPath = '/';

  // get the various date formats needed to form our request
  var amzDate = getAmzDate(new Date().toISOString());
  var authDate = amzDate.split("T")[0];

  // we have an empty payload here because it is a GET request
  var payload = '';
  // get the SHA256 hash value for our payload
  var hashedPayload = crypto.SHA256(payload).toString();

  // create our canonical request
  var canonicalReq =  myMethod + '\n' +
                      myPath + '\n' +
                      '\n' +
                      'host:' + url + '\n' +
                      'x-amz-content-sha256:' + hashedPayload + '\n' +
                      'x-amz-date:' + amzDate + '\n' +
                      '\n' +
                      'host;x-amz-content-sha256;x-amz-date' + '\n' +
                      hashedPayload;

  // hash the canonical request
  var canonicalReqHash = crypto.SHA256(canonicalReq).toString();

  // form our String-to-Sign
  var stringToSign =  'AWS4-HMAC-SHA256\n' +
                      amzDate + '\n' +
                      authDate+'/'+region+'/'+myService+'/aws4_request\n'+
                      canonicalReqHash;

  // get our Signing Key
  var signingKey = getSignatureKey(crypto, secret_key, authDate, region, myService);

  // Sign our String-to-Sign with our Signing Key
  var authKey = crypto.HmacSHA256(stringToSign, signingKey);

  // Form our authorization header
  var authString  = 'AWS4-HMAC-SHA256 ' +
                    'Credential='+
                    access_key+'/'+
                    authDate+'/'+
                    region+'/'+
                    myService+'/aws4_request,'+
                    'SignedHeaders=host;x-amz-content-sha256;x-amz-date,'+
                    'Signature='+authKey;

  // throw our headers together
  headers = {
    'Authorization' : authString,
    'Host' : url,
    'x-amz-date' : amzDate,
    'x-amz-content-sha256' : hashedPayload
  };

  // call our function
  performRequest(url, headers, payload, function(response) {
    // parse the response from our function and write the results to the console
    xml.parseString(response, function (err, result) {
      console.log('\n=== \n'+'Bucket is named: ' + result['ListBucketResult']['Name']);
      console.log('=== \n'+'Contents: ');
      for (i=0;i<result['ListBucketResult']['Contents'].length;i++) {
        console.log(
          '=== \n'+
          'Name: '          + result['ListBucketResult']['Contents'][i]['Key'][0]           + '\n' +
          'Last modified: ' + result['ListBucketResult']['Contents'][i]['LastModified'][0]  + '\n' +
          'Size (bytes): '  + result['ListBucketResult']['Contents'][i]['Size'][0]          + '\n' +
          'Storage Class: ' + result['ListBucketResult']['Contents'][i]['StorageClass'][0]
        );
      };
      console.log('=== \n');
    });
  });
};

// this function gets the Signature Key, see AWS documentation for more details, this was taken from the AWS samples site
function getSignatureKey(Crypto, key, dateStamp, regionName, serviceName) {
    var kDate = Crypto.HmacSHA256(dateStamp, "AWS4" + key);
    var kRegion = Crypto.HmacSHA256(regionName, kDate);
    var kService = Crypto.HmacSHA256(serviceName, kRegion);
    var kSigning = Crypto.HmacSHA256("aws4_request", kService);
    return kSigning;
}

// this function converts the generic JS ISO8601 date format to the specific format the AWS API wants
function getAmzDate(dateStr) {
  var chars = [":","-"];
  for (var i=0;i<chars.length;i++) {
    while (dateStr.indexOf(chars[i]) != -1) {
      dateStr = dateStr.replace(chars[i],"");
    }
  }
  dateStr = dateStr.split(".")[0] + "Z";
  return dateStr;
}

// the REST API call using the Node.js 'https' module
function performRequest(endpoint, headers, data, success) {

  var dataString = data;

  var options = {
    host: endpoint,
    port: 443,
    path: '/',
    method: 'GET',
    headers: headers
  };

  var req = https.request(options, function(res) {
    res.setEncoding('utf-8');

    var responseString = '';

    res.on('data', function(data) {
      responseString += data;
    });

    res.on('end', function() {
      //console.log(responseString);
      success(responseString);
    });
  });

  req.write(dataString);
  req.end();
}

Installing vSphere Integrated Containers

This document details installing and testing vSphere Integrated Containers (VIC), which recently went v1.0. This has been tested against vSphere 6.5 only.
Download VIC from my.vmware.com.
Release notes available here.
From Linux terminal:
root@LOBSANG:~# tar -xvf vic_0.8.0-7315-c8ac999.tar.gz
root@LOBSANG:~# cd vic
root@LOBSANG:~/vic# tree .
.
├── appliance.iso
├── bootstrap.iso
├── LICENSE
├── README
├── ui
│   ├── vCenterForWindows
│   │   ├── configs
│   │   ├── install.bat
│   │   ├── uninstall.bat
│   │   ├── upgrade.bat
│   │   └── utils
│   │   └── xml.exe
│   ├── VCSA
│   │   ├── configs
│   │   ├── install.sh
│   │   ├── uninstall.sh
│   │   └── upgrade.sh
│   └── vsphere-client-serenity
│   ├── com.vmware.vicui.Vicui-0.8.0
│   │   ├── plugin-package.xml
│   │   ├── plugins
│   │   │   ├── vic-ui-service.jar
│   │   │   ├── vic-ui-war.war
│   │   │   └── vim25.jar
│   │   └── vc_extension_flags
│   └── com.vmware.vicui.Vicui-0.8.0.zip
├── vic-machine-darwin
├── vic-machine-linux
├── vic-machine-windows.exe
├── vic-ui-darwin
├── vic-ui-linux
└── vic-ui-windows.exe

7 directories, 25 files
root@LOBSANG:~/vic#
Now that we have the files ready to go, we can run the install command as detailed in the GitHub repository for VIC (here). We are going to use Linux here:
root@VBRPHOTON01 [ ~/vic ]# ./vic-machine-linux
NAME:
  vic-machine-linux - Create and manage Virtual Container Hosts

USAGE:
  vic-machine-linux [global options] command [command options] [arguments...]

VERSION:
  v0.8.0-7315-c8ac999

COMMANDS:
  create Deploy VCH
  delete Delete VCH and associated resources
  ls List VCHs
  inspect Inspect VCH
  version Show VIC version information
  debug Debug VCH

GLOBAL OPTIONS:
  --help, -h show help
  --version, -v print the version

root@VBRPHOTON01 [ ~/vic ]#
On all hosts in the cluster you are using, create a bridge network (it has to be on a vDS; mine is called vDS_VCH_Bridge), and disable the ESXi firewall by doing this.
To install we use the command as follows:
root@VBRPHOTON01 [ ~/vic ]# ./vic-machine-linux create --target 192.168.1.222 --image-store VBR_MGTESX01_Local_SSD_01 --name VBR-VCH-01 --user administrator@vsphere.local --password VMware1! --compute-resource VBR_Mgmt_Cluster --bridge-network vDS_VCH_Bridge --public-network vSS_Mgmt_Network --client-network vSS_Mgmt_Network --management-network vSS_Mgmt_Network --force --no-tlsverify
This is in my lab; I’m deploying to a vCenter with a single host, and don’t care about security. The output should look something like this:
INFO[2017-01-06T21:52:14Z] ### Installing VCH ####
WARN[2017-01-06T21:52:14Z] Using administrative user for VCH operation - use --ops-user to improve security (see -x for advanced help)
INFO[2017-01-06T21:52:14Z] Loaded server certificate VBR-VCH-01/server-cert.pem
WARN[2017-01-06T21:52:14Z] Configuring without TLS verify - certificate-based authentication disabled
INFO[2017-01-06T21:52:15Z] Validating supplied configuration
INFO[2017-01-06T21:52:15Z] vDS configuration OK on "vDS_VCH_Bridge"
INFO[2017-01-06T21:52:15Z] Firewall status: DISABLED on "/VBR_Datacenter/host/VBR_Mgmt_Cluster/vbrmgtesx01.virtualbrakeman.local"
WARN[2017-01-06T21:52:15Z] Firewall configuration will be incorrect if firewall is reenabled on hosts:
WARN[2017-01-06T21:52:15Z] "/VBR_Datacenter/host/VBR_Mgmt_Cluster/vbrmgtesx01.virtualbrakeman.local"
INFO[2017-01-06T21:52:15Z] Firewall must permit dst 2377/tcp outbound to VCH management interface if firewall is reenabled
INFO[2017-01-06T21:52:15Z] License check OK on hosts:
INFO[2017-01-06T21:52:15Z] "/VBR_Datacenter/host/VBR_Mgmt_Cluster/vbrmgtesx01.virtualbrakeman.local"
INFO[2017-01-06T21:52:15Z] DRS check OK on:
INFO[2017-01-06T21:52:15Z] "/VBR_Datacenter/host/VBR_Mgmt_Cluster/Resources"
INFO[2017-01-06T21:52:15Z]
INFO[2017-01-06T21:52:15Z] Creating virtual app "VBR-VCH-01"
INFO[2017-01-06T21:52:15Z] Creating appliance on target
INFO[2017-01-06T21:52:15Z] Network role "client" is sharing NIC with "public"
INFO[2017-01-06T21:52:15Z] Network role "management" is sharing NIC with "public"
INFO[2017-01-06T21:52:16Z] Uploading images for container
INFO[2017-01-06T21:52:16Z] "bootstrap.iso"
INFO[2017-01-06T21:52:16Z] "appliance.iso"
INFO[2017-01-06T21:52:22Z] Waiting for IP information
INFO[2017-01-06T21:52:35Z] Waiting for major appliance components to launch
INFO[2017-01-06T21:52:35Z] Checking VCH connectivity with vSphere target
INFO[2017-01-06T21:52:36Z] vSphere API Test: https://192.168.1.222 vSphere API target responds as expected
INFO[2017-01-06T21:52:38Z] Initialization of appliance successful
INFO[2017-01-06T21:52:38Z]
INFO[2017-01-06T21:52:38Z] VCH Admin Portal:
INFO[2017-01-06T21:52:38Z] https://192.168.1.46:2378
INFO[2017-01-06T21:52:38Z]
INFO[2017-01-06T21:52:38Z] Published ports can be reached at:
INFO[2017-01-06T21:52:38Z] 192.168.1.46
INFO[2017-01-06T21:52:38Z]
INFO[2017-01-06T21:52:38Z] Docker environment variables:
INFO[2017-01-06T21:52:38Z] DOCKER_HOST=192.168.1.46:2376
INFO[2017-01-06T21:52:38Z]
INFO[2017-01-06T21:52:38Z] Environment saved in VBR-VCH-01/VBR-VCH-01.env
INFO[2017-01-06T21:52:38Z]
INFO[2017-01-06T21:52:38Z] Connect to docker:
INFO[2017-01-06T21:52:38Z] docker -H 192.168.1.46:2376 --tls info
INFO[2017-01-06T21:52:38Z] Installer completed successfully
Now we can check the state of our remote VIC host with:
root@VBRPHOTON01 [ ~ ]# docker -H tcp://192.168.1.46:2376 --tls info
Containers: 0
 Running: 0
 Paused: 0
 Stopped: 0
Images: 0
Server Version: v0.8.0-7315-c8ac999
Storage Driver: vSphere Integrated Containers v0.8.0-7315-c8ac999 Backend Engine
VolumeStores:
vSphere Integrated Containers v0.8.0-7315-c8ac999 Backend Engine: RUNNING
 VCH mhz limit: 2419 Mhz
 VCH memory limit: 27.88 GiB
 VMware Product: VMware vCenter Server
 VMware OS: linux-x64
 VMware OS version: 6.5.0
Execution Driver: vSphere Integrated Containers v0.8.0-7315-c8ac999 Backend Engine
Plugins:
 Volume:
 Network: bridge
Operating System: linux-x64
OSType: linux-x64
Architecture: x86_64
CPUs: 2419
Total Memory: 27.88 GiB
Name: VBR-VCH-01
ID: vSphere Integrated Containers
Docker Root Dir:
Debug mode (client): false
Debug mode (server): false
Registry: registry-1.docker.io
root@VBRPHOTON01 [ ~ ]#
This shows us it’s up and running. Now we can run our first container on here:
root@VBRPHOTON01 [ ~ ]# docker -H tcp://192.168.1.46:2376 --tls run hello-world
Unable to find image 'hello-world:latest' locally
Pulling from library/hello-world
c04b14da8d14: Pull complete
a3ed95caeb02: Pull complete
Digest: sha256:548e9719abe62684ac7f01eea38cb5b0cf467cfe67c58b83fe87ba96674a4cdd
Status: Downloaded newer image for library/hello-world:latest

Hello from Docker!
This message shows that your installation appears to be working correctly.

To generate this message, Docker took the following steps:
 1. The Docker client contacted the Docker daemon.
 2. The Docker daemon pulled the "hello-world" image from the Docker Hub.
 3. The Docker daemon created a new container from that image which runs the
  executable that produces the output you are currently reading.
 4. The Docker daemon streamed that output to the Docker client, which sent it
  to your terminal.

To try something more ambitious, you can run an Ubuntu container with:
 $ docker run -it ubuntu bash

Share images, automate workflows, and more with a free Docker Hub account:
 https://hub.docker.com

For more examples and ideas, visit:
 https://docs.docker.com/engine/userguide/

root@VBRPHOTON01 [ ~ ]#
We can see this under vSphere as follows:

[Screenshot: the VCH vApp in the vSphere inventory, with the container VM powered off]

So our container host itself is a VM under a vApp, and all containers are spun up as VMs under the vApp. As we can see here, the container ‘VM’ is powered off. This can be seen further by running ‘docker ps’ against our remote host:
root@VBRPHOTON01 [ ~ ]# docker -H tcp://192.168.1.46:2376 --tls ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
root@VBRPHOTON01 [ ~ ]# docker -H tcp://192.168.1.46:2376 --tls ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
24598201e216 hello-world "/hello" 56 seconds ago Exited (0) 47 seconds ago silly_davinci
root@VBRPHOTON01 [ ~ ]# docker -H tcp://192.168.1.46:2376 --tls rm 24598201e216
24598201e216
root@VBRPHOTON01 [ ~ ]# docker -H tcp://192.168.1.46:2376 --tls ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
root@VBRPHOTON01 [ ~ ]#
This container is now tidied up in vSphere:
[Screenshot: the vApp in vSphere after the container has been removed]
So now we have VIC installed and can spin up containers. In the next post we will install VMware Harbor and use that as our trusted registry.

AWS Certified Solutions Architect – Associate Exam Experience

I sat my AWS CSA Associate exam today, and I’m happy to say I passed with 78%. This post will talk about my previous experiences with AWS, and the resources I used to study it.

A bit of history about these certifications: AWS currently have a total of five generally available certifications, and three in beta:

Associate level (the easiest):

  • AWS Certified Solutions Architect – Associate – aimed I guess at Solutions Architects (pre-sales or post-sales consultants), and covering most of the core services, with a focus on designing highly available infrastructures
  • AWS Certified Developer – Associate – aimed at people developing software solutions to run on top of AWS, this looks to cover most of the same ground, but aimed more around using APIs, using the services to build applications, and some of the tools available to help developers consume services more easily
  • AWS Certified SysOps Administrator – Associate – aimed at people administering AWS environments, covering a lot of the same ground as the other two associate level exams, but focussing more on the tools available to keep things running, and troubleshooting

Specialist level (the new ones – currently beta):

  • AWS Certified Big Data – Specialty – focussing on the Big Data type services available in AWS
  • AWS Certified Advanced Networking – Specialty – focussing on the networking concepts and services in AWS
  • AWS Certified Security – Specialty – focussing on security in AWS

Professional level (the hard ones):

  • AWS Certified Solutions Architect – Professional – focussed on designing and implementing highly available systems and applications on AWS
  • AWS Certified DevOps Engineer – Professional – focussed on developing and managing distributed applications on AWS

The associate level exam blueprints talk about having hands on experience and time served with AWS; I haven’t used AWS at work, so had to start from scratch basically. Having a good understanding of how modern applications are architected, databases, Linux and Windows, and infrastructure in general will definitely help with getting your head around the concepts in play in AWS.

Free Tier Account:

The first thing you will need is a Free Tier account with AWS; this gives you 12 months of basically free services (but there are limits). You will need a credit card to sign up, but don’t worry about spending a tonne of money. You can put billing notifications in place which will tell you if you are spending any cash, and as long as you shut stuff down when you are not using it, it won’t cost you any money. Get your free tier account here. When you have it, spend as much time as possible playing with the services, checking out all the options, and reading about what they do; this is the easiest way to understand the services.

Some video training:

This is optional I guess, but if you’re new to AWS, as I was, then it should help focus you on the things you need to pass the exam. I used the following courses:

  • Pluralsight – Nigel Poulton (@nigelpoulton) – AWS VPC Operations – I watched this course probably a year ago or more, and while I didn’t use it directly when studying for my CSA Associate exam, it did give me a broad understanding of VPCs in AWS and how things tie together, definitely worth a watch
  • A Cloud Guru – this course is widely lauded by people on social media in my experience, and the guys have done an excellent job of the course, although I felt it didn’t cover some bits in as much detail as needed to pass the exam. This is available on their site (at a discount until 19/01/16), or (where I bought it from) on Udemy where I paid £19 (think it is £10 for January as well). I would definitely say this course is worth it anyway, but I would look at another one as well to complement the parts lacking in the ACG course.
  • Linux Academy – the instructor for this course was not quite as good as Ryan from ACG in my opinion, but the depth and breadth of the subject matter, hands on labs, and the practice exams make this worth looking at. To top it off, if you join the Microsoft Dev Essentials program (here) you can get 3 months access to both Linux Academy and Pluralsight for free!

Books:

There is an official study guide, but it’s near enough 30 quid, and the reviews on Amazon were generally negative, so I avoided it.

Whitepapers etc:

The Exam Blueprint (here) lists a bunch of whitepapers and these were definitely worth swotting up on. In addition, the documentation for each technology should be read and understood: there really is a lot to learn here, and you need to know everything in reasonable detail in my opinion.

Other:

These flash cards were pretty good, but I think they’re generally taken from the ACG practice exams anyway.

Note taking:

I have taken a lot of notes for each of the services; I used Evernote for this, which I found to be very useful in summarising the key fundamentals of each service. I will likely tidy these up now the exam is out of the way and publish them as a series of blog posts, to hopefully help other exam candidates.

The Exam itself:

I was seriously stressed going into this exam; the breadth and depth of information you need to take in is pretty daunting if you only give yourself a couple of weeks to learn it as I did, and while the exam was slightly easier than I thought it would be, I still found it tough. I would suggest giving yourself 4-6 weeks of preparation if you have no AWS experience.

One gripe I have is that although I am lucky enough to have a Kryterion exam centre only 5 miles away, they only seem to do the exams once a month, which doesn’t give much flexibility. Looks to be the same in Leeds as well, so hopefully this will improve in the future.

Glad I got it done, and putting myself under intense time pressure seems to have paid off. Onwards and upwards to the Certified Developer Associate exam now. I would recommend doing the exam to anyone; AWS certification is in high demand at the moment, and getting a foot on the ladder will definitely help anyone in the IT industry over the next few years. I hope to go on and do as many of these exams as I can, because I genuinely find what AWS are doing, and the way the services are implemented, to be interesting.