Cleaning up AWS OpsWorks Automate Nodes

I’ve been playing with Chef and AWS’ OpsWorks Automate product a lot over the last few weeks. One problem I hit was that as I kept bootstrapping EC2 instances using the excellent Knife EC2 tool, the old nodes were not being cleaned out of the Chef Automate portal. I imagine this will be a common issue for folks running ephemeral-type workloads with Chef Automate in any cloud.

AWS’ documentation lists some AWS CLI commands to run to remove old nodes, but these commands do not seem to be present in the latest version of the AWS CLI (there is no ‘aws opsworks-cm’ command group in the CLI now, so no way of managing OpsWorks Automate from it).

I found this page on Chef’s highly recommended Learn Chef Rally training site, which showed me the way to do this. The following can be run from an SSH connection into your Chef Automate server (or, in my case, as I had not assigned a key pair when creating my Automate server, through EC2 Systems Manager’s Run Command feature):

sudo automate-ctl delete-visibility-node <NODE_NAME>

If you have multiple nodes with the same name, you may receive the following response:

Multiple nodes were found matching your request. Please delete by ID using: automate-ctl delete-visibility-node-by-id NODE_UUID

Node UUID Node Name Org Name Chef Server
==================================== ========= ======== ===========
1c298e89-7c9f-4feb-b784-20b3858bfd6f webtest2 default chefautomate-1abcdefgo12abcde.eu-west-1.opsworks-cm.io
7f9b96df-7c02-4277-a5bb-879962b17136 webtest2 default chefautomate-1abcdefgo12abcde.eu-west-1.opsworks-cm.io
05f55344-2425-4764-8db6-9c0a0ef8d015 webtest2 default chefautomate-1abcdefgo12abcde.eu-west-1.opsworks-cm.io

You can delete these using the following command instead:

sudo automate-ctl delete-visibility-node-by-id <NODE_ID>

That wraps up the post; hopefully it comes in useful for people.


AWS Certified Developer and SysOps Administrator Associate Exam Experience

Since my post on the AWS Certified Solutions Architect Associate exam (here), I’ve now completed the Certified Developer Associate and Certified SysOps Administrator Associate exams.

I found these a lot easier to study for after doing the first exam, as I had an idea of the level the exams were pitched at, and the types of questions which would come up. I didn’t write a post immediately after the Developer exam because the study I did was largely the same as for the Solutions Architect exam, and the same was true for the SysOps Administrator exam.

Again I mainly used the A Cloud Guru courses (purchased cheaply from Udemy), and the Linux Academy courses, accessed for free through Microsoft’s Dev Essentials program. As with the Solutions Architect exam, these helped to give me the basics, and to focus on the areas I was lacking in.

Another good resource is the A Cloud Guru discussion forums (here), where kind people share their experiences of the exam they sat, and point out specific areas worth investigating.

I got 96% on the Developer exam, and 92% on the SysOps Administrator exam, so the methods I have been using are obviously working for me. Next I’m going to move on to the Professional certs; most likely the Solutions Architect exam first, then the DevOps Engineer exam, as that maps more to my day job.

Good luck to anyone looking to do the AWS exams, I have got through all the entry level ones in a little over 2 months from having no experience, so they are not too scary. It’s been an enjoyable ride, and I am pretty pumped to learn more and get the Pro exams passed.

AWS REST API Authentication Using Node.js

I’ve been learning as much as I can about Amazon Web Services over the last couple of months; the looming shadow it casts over traditional IT finally got too much, and I figured it was time to make the leap. Overall it’s been a great experience, and the biggest takeaway for me has been how application-centric the services, and the ways in which we consume them, are.
Every service is fully API first, with the AWS Management Console basically acting as a front end for the API calls made to the vast multitude of services. I’ve done a fair amount of work with REST APIs over the last 18 months, and it’s always good to fire up Postman (if you don’t know what this is, there is a post here I did about REST APIs, and the use of Postman), and throw a few API calls at a new technology to see how it works.
Now, while AWS services are all available via REST APIs, there are a tonne of tools available for both administrators and developers which abstract away the nitty gritty. We have:
  • AWS CLI – a CLI based tool for Windows/Linux/OSX (available here)
  • AWS Tools for Windows PowerShell – the PowerShell module for consuming AWS services (available here)
  • SDKs (Software Development Kits) for the following (all available here):
    • Android
    • Browsers (basically a JavaScript SDK you can build web services around)
    • iOS
    • Java
    • .NET
    • Node.js
    • PHP
    • Python
    • Ruby
    • GoLang
    • C++
    • AWS IoT
    • AWS Mobile
Combined, these provide a wide variety of ways to use pre-built solutions to speak to AWS-based resources, and there should be something any developer or admin can use to introduce some automation or programmability into their work. I would recommend using one of these if at all possible, to abstract away the heavy lifting of working with a very broad and deep API.
I wanted to get stuck in from a REST API side though, which basically means building things from the ground up. This turned out to take a fair amount of time, but I learned a heck of a lot about the authentication and authorisation process for AWS, and how this helps to prevent unauthorised access.
The full authentication process is described in the AWS Documentation available here. There are pages and pages describing the V4 authentication process (the current recommended version), and this gets pretty complicated. I’m going to try and break it down here, showing the bits of code used to create each element; this should hopefully make it a bit clearer.
One post I found really useful on this was by Lukasz Adamczak (@lukasz_adamczak), on how to do the authentication with Curl, which I used as the basis for some of what I did below. I couldn’t find anything where someone was doing this task via the REST API in JavaScript.
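
At a high level, the whole flow we’re about to build boils down to a handful of steps; here’s a pseudocode summary (using the variable names from the script that follows):

// the Signature V4 flow in miniature:
// hashedPayload  = SHA256(payload)
// canonicalReq   = method + path + query + headers + signed-header list + hashedPayload
// stringToSign   = algorithm + amzDate + credential scope + SHA256(canonicalReq)
// signingKey     = chained HMACs over the secret key, date, region and service
// authKey        = HMAC(stringToSign, signingKey)   // the signature
// Authorization  = algorithm + credential scope + signed headers + authKey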

Pre-requisites

The following variables need to be set in the script before we start:
// our variables
var access_key = 'ACCESS_KEY_VALUE';
var secret_key = 'SECRET_KEY_VALUE';
var region = 'eu-west-1';
var url = 'my-bucket-name.s3.amazonaws.com';
var myService = 's3';
var myMethod = 'GET';
var myPath = '/';
In addition to this, we have these package dependencies:
// declare our dependencies
var crypto = require('crypto-js');
var https = require('https');
var xml = require('xml2js');
The https module is built into the version of Node I was using (v6.9.5); crypto-js and xml2js are separate npm packages, though in my case I only needed to use NPM to install xml2js.

Amazon Date Format

I started by getting the date format AWS uses in authentication; this is based on the ISO 8601 format, but with the punctuation and milliseconds removed. I created the function below, and use it to create the two variables shown:
// get the various date formats needed to form our request
var amzDate = getAmzDate(new Date().toISOString());
var authDate = amzDate.split("T")[0];

// this function converts the generic JS ISO8601 date format to the specific format the AWS API wants
function getAmzDate(dateStr) {
  var chars = [":","-"];
  for (var i=0;i<chars.length;i++) {
    while (dateStr.indexOf(chars[i]) != -1) {
      dateStr = dateStr.replace(chars[i],"");
    }
  }
  dateStr = dateStr.split(".")[0] + "Z";
  return dateStr;
}
We’ll go into this later, but the reason there are two variables for the date (amzDate, authDate) is that in generating the headers for our REST call we will need both formats at different times. One is in the ‘YYYYMMDDTHHmmssZ’ format, and one is in the ‘YYYYMMDD’ format.
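
As a quick sanity check, here’s what the function gives for a sample timestamp (the value itself is arbitrary):

// worked example with a hypothetical timestamp
getAmzDate('2017-02-13T05:13:43.123Z'); // returns '20170213T051343Z'
// and splitting on 'T' gives the short form used in the credential scope:
'20170213T051343Z'.split('T')[0];       // returns '20170213'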

Our payload

The example used in this script is to use a blank payload, for which we calculate the SHA256 hash. This is obviously always the same when calculated against a blank string (e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855, if you were interested :p), but I included the hashing of this in the script so the logic is there later if we wanted to do different payloads.
// we have an empty payload here because it is a GET request
var payload = '';
// get the SHA256 hash value for our payload
var hashedPayload = crypto.SHA256(payload).toString();
This hashed payload is used a bunch in the final request, including in the ‘x-amz-content-sha256’ HTTP header to validate the expected payload.
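
We only ever send an empty GET body in this post, but the same two lines would cover a request with a body; a small sketch, assuming a hypothetical JSON payload:

// hypothetical non-empty payload: hash exactly the bytes you will send
var putPayload = JSON.stringify({ foo: 'bar' });
var hashedPutPayload = crypto.SHA256(putPayload).toString();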

Canonical Request

This is where things got a bit confusing for me; we need to build the canonical form of our request (in AWS’ special format), and then work out what the SHA256 hash of that is. First we need to know the formatting for the canonical request. Ultimately this is a multi-line string, consisting of the following attributes:
HTTPRequestMethod
CanonicalURI
CanonicalQueryString
CanonicalHeaders
SignedHeaders
HexEncode(Hash(RequestPayload))
These attributes are described as:
  • HTTPRequestMethod – the HTTP method being used, could be GET, POST, etc. In our example this will be GET
  • CanonicalURI – the relative URI for the resource we are accessing. In our example we access the root namespace of our bucket, so this is set to “/”
  • CanonicalQueryString – we can build a query for our request, more information on this is available here. In our example we don’t need a query so we will leave this as a blank line
  • CanonicalHeaders – a carriage return separated list of the headers we are using in our request
  • SignedHeaders – a semi-colon separated list of the header keys we are including in our request
  • HexEncode(Hash(RequestPayload)) – the hash value as calculated earlier. As we used the ‘toString()’ method on this, it should already be in hexadecimal
We construct this request with the following code:
// create our canonical request
var canonicalReq =  myMethod + '\n' +
                    myPath + '\n' +
                    '\n' +
                    'host:' + url + '\n' +
                    'x-amz-content-sha256:' + hashedPayload + '\n' +
                    'x-amz-date:' + amzDate + '\n' +
                    '\n' +
                    'host;x-amz-content-sha256;x-amz-date' + '\n' +
                    hashedPayload;
This leaves us with the following as an example:
GET
/

host:my-bucket-name.s3.amazonaws.com
x-amz-content-sha256:e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
x-amz-date:20170213T045707Z

host;x-amz-content-sha256;x-amz-date
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
Note the blank CanonicalQueryString line here, as we are not using that functionality, and the blank line after the CanonicalHeaders; these are required for the string to be accepted when we hash it.
So now we can hash this multi-line string:
// hash the canonical request
var canonicalReqHash = crypto.SHA256(canonicalReq).toString();
This becomes another long hashed value.
c75b55ba2d959baf99f2c4976c7a50c7cd79067a726c21024f4a981ae2a90b50

String to Sign

Now, similar to the Canonical Request above, we create a new multi-line string which is used to generate our authentication header. This time it is in the following format:
Algorithm
RequestDate
CredentialScope
HashedCanonicalRequest

These attributes are completed as:

  • Algorithm – for SHA256, which is what we always use with AWS, this should be set to ‘AWS4-HMAC-SHA256’
  • RequestDate – this is the date/time stamp in the ‘YYYYMMDDTHHmmssZ’ format, so we will use our stored ‘amzDate’ variable here
  • CredentialScope – this takes the format ‘date/region/service/aws4_request’. We have the date stored in the right format already as ‘authDate’, so we can use that here; our region name can be found in this table, and the service name here is ‘s3’ (further details of other namespaces can be found here)
  • HashedCanonicalRequest – this was calculated above

With this information we can form our string like this:

// form our String-to-Sign
var stringToSign =  'AWS4-HMAC-SHA256\n' +
                    amzDate + '\n' +
                    authDate+'/'+region+'/'+myService+'/aws4_request\n'+
                    canonicalReqHash;
This generates a string like this:
AWS4-HMAC-SHA256
20170213T051343Z
20170213/eu-west-1/s3/aws4_request
c75b55ba2d959baf99f2c4976c7a50c7cd79067a726c21024f4a981ae2a90b50

Signing Key

We need a signing key now; this embeds our secret key, along with some other bits, in a hash which is used to sign our ‘String to Sign’, giving us the final hashed value we use in the Authorization header. Luckily AWS provide some sample JS code (amongst other languages) for creating this hash:
// this function gets the Signature Key, see AWS documentation for more details
function getSignatureKey(Crypto, key, dateStamp, regionName, serviceName) {
    var kDate = Crypto.HmacSHA256(dateStamp, "AWS4" + key);
    var kRegion = Crypto.HmacSHA256(regionName, kDate);
    var kService = Crypto.HmacSHA256(serviceName, kRegion);
    var kSigning = Crypto.HmacSHA256("aws4_request", kService);
    return kSigning;
}

This can be found here. Into this function we pass our secret access key, the authDate variable we calculated earlier, our region, and the service namespace.

// get our Signing Key
var signingKey = getSignatureKey(crypto, secret_key, authDate, region, myService);

This will again return a long hash value:

9afc364e2eb6ba46f000721975d32bc2042058f80b5a8fd69efe422e7be5090d

Authentication Key

Nearly there now! We need to take our String to Sign and our Signing Key, and hash the former with the latter, generating another hash which will be used in our request header. To do this we again use the CryptoJS library, with the order of the inputs being the string to hash (stringToSign), then the key to hash it with (signingKey):
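// Sign our String-to-Sign with our Signing Key
var authKey = crypto.HmacSHA256(stringToSign, signingKey);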
This returns another hash:
31c6a42a9aec00390317f9c714f38efeba2498fa1996cecb9b4c714b39cbc05a90332f38ef

Creating our headers

Right, no more hashing needed now; we have everything we need. Next we construct our Authorization header value:
// Form our authorization header
var authString  = 'AWS4-HMAC-SHA256 ' +
                  'Credential='+
                  access_key+'/'+
                  authDate+'/'+
                  region+'/'+
                  myService+'/aws4_request,'+
                  'SignedHeaders=host;x-amz-content-sha256;x-amz-date,'+
                  'Signature='+authKey;
This is a single line, multi-part string consisting of the following parts:
  • Algorithm – for SHA256, which is what we always use with AWS, this should be set to ‘AWS4-HMAC-SHA256’
  • CredentialScope – as used in our String To Sign above
  • SignedHeaders – a semi-colon separated list of our signed headers
  • Signature – the authentication key we hand crafted above
When we place all these together, we end up with a string like this:
AWS4-HMAC-SHA256 Credential=A123EXAMPLEACCESSKEY/20170213/eu-west-1/s3/aws4_request,SignedHeaders=host;x-amz-content-sha256;x-amz-date,Signature=31c6a42a9aec00390317f9c714f38efeba2498fa1996cecb9b4c714b39cbc05a90332f38ef
Now we have everything we need to create our headers for our HTTP request:
// throw our headers together
var headers = {
  'Authorization' : authString,
  'Host' : url,
  'x-amz-date' : amzDate,
  'x-amz-content-sha256' : hashedPayload
};
Here we use a plain object as a hash of key/value pairs for simplicity, with our various headers added to it, ending up with the following:

Authorization: AWS4-HMAC-SHA256 Credential=A123EXAMPLEACCESSKEY/20170213/eu-west-1/s3/aws4_request,SignedHeaders=host;x-amz-content-sha256;x-amz-date,Signature=31c6a42a9aec00390317f9c714f38efeba2498fa1996cecb9b4c714b39cbc05a90332f38ef
Host: my-bucket-name.s3.amazonaws.com
x-amz-date: 20170213T051343Z
x-amz-content-sha256: e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
These are now ready to use in our request.

Our request

Now we can send our request. I split this into a function to do the REST call:
// the REST API call using the Node.js 'https' module
function performRequest(endpoint, headers, data, success) {

  var dataString = data;

  var options = {
    host: endpoint,
    port: 443,
    path: '/',
    method: 'GET',
    headers: headers
  };

  var req = https.request(options, function(res) {
    res.setEncoding('utf-8');

    var responseString = '';

    res.on('data', function(data) {
      responseString += data;
    });

    res.on('end', function() {
      //console.log(responseString);
      success(responseString);
    });
  });

  req.write(dataString);
  req.end();
}
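
As an aside, this sketch has no error handling; if you’d rather surface connection failures than have the script fail silently, a minimal addition (using the standard ‘error’ event on the Node.js request object, placed just before the req.write call) might look like this:

// optional: surface connection-level failures instead of ignoring them
req.on('error', function(err) {
  console.error('Request failed: ' + err.message);
});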
And the call to this function, which also processes the results, in the body:
// call our function
performRequest(url, headers, payload, function(response) {
  // parse the response from our function and write the results to the console
  xml.parseString(response, function (err, result) {
    console.log('\n=== \n'+'Bucket is named: ' + result['ListBucketResult']['Name']);
    console.log('=== \n'+'Contents: ');
    for (var i=0;i<result['ListBucketResult']['Contents'].length;i++) {
      console.log(
        '=== \n'+
        'Name: '          + result['ListBucketResult']['Contents'][i]['Key'][0]           + '\n' +
        'Last modified: ' + result['ListBucketResult']['Contents'][i]['LastModified'][0]  + '\n' +
        'Size (bytes): '  + result['ListBucketResult']['Contents'][i]['Size'][0]          + '\n' +
        'Storage Class: ' + result['ListBucketResult']['Contents'][i]['StorageClass'][0]
      );
    }
    console.log('=== \n');
  });
});
This essentially passes our headers with the payload to the URL we specified in our variables, and processes the resulting XML into some useful output.
This is what this returns as output:
Tims-MacBook-Pro:GetS3BucketContent tim$ node get_bucket_content.js

===
Bucket is named: virtualbrakeman
===
Contents:
===
Name: file_a_foo.txt
Last modified: 2017-02-05T11:19:36.000Z
Size (bytes): 10
Storage Class: STANDARD
===
Name: file_b_foo.txt
Last modified: 2017-02-05T11:19:36.000Z
Size (bytes): 10
Storage Class: STANDARD
===
Name: file_c_foo.txt
Last modified: 2017-02-05T11:19:36.000Z
Size (bytes): 10
Storage Class: STANDARD
===
Name: file_d_foo.txt
Last modified: 2017-02-05T11:19:36.000Z
Size (bytes): 10
Storage Class: STANDARD
===
Name: foobar
Last modified: 2017-02-04T22:01:38.000Z
Size (bytes): 0
Storage Class: STANDARD
===

Tims-MacBook-Pro:GetS3BucketContent tim$

Conclusion

This is a fairly trivial example of data to return, but the real point behind this was building the authentication code, which proved to be very laborious. Given the wide (and growing) variety of SDKs available, it seems overly complex to try and construct these requests in this way every time. I have played with the Python and JavaScript SDKs and both take this roughly 150 line script, and achieve the same result in around 20 lines.
Regardless, this was a good learning exercise for me, and it may come in useful for people trying to interact with the AWS API in ways which are not covered by the SDKs, or via other languages where an SDK is not available.
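For comparison, here’s a minimal sketch of the same bucket listing using the AWS JavaScript SDK (this assumes the aws-sdk npm package is installed, that the bucket name and region are placeholders for your own, and that credentials are picked up from the environment or ~/.aws/credentials):

// the same bucket listing, via the aws-sdk package instead of hand-rolled signing
var AWS = require('aws-sdk');
var s3 = new AWS.S3({ region: 'eu-west-1' });

s3.listObjects({ Bucket: 'my-bucket-name' }, function(err, data) {
  if (err) return console.error(err);
  data.Contents.forEach(function(obj) {
    console.log('Name: ' + obj.Key + ', Size (bytes): ' + obj.Size +
                ', Storage Class: ' + obj.StorageClass);
  });
});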
The final script is shown below, and is also available on my GitHub library here: https://github.com/railroadmanuk/awsrestauthentication
// declare our dependencies
var crypto = require('crypto-js');
var https = require('https');
var xml = require('xml2js');

main();

// split the code into a main function
function main() {
  // this serviceList is unused right now, but may be used in future
  const serviceList = [
    'dynamodb',
    'ec2',
    'sqs',
    'sns',
    's3'
  ];

  // our variables
  var access_key = 'ACCESS_KEY_VALUE';
  var secret_key = 'SECRET_KEY_VALUE';
  var region = 'eu-west-1';
  var url = 'my-bucket-name.s3.amazonaws.com';
  var myService = 's3';
  var myMethod = 'GET';
  var myPath = '/';

  // get the various date formats needed to form our request
  var amzDate = getAmzDate(new Date().toISOString());
  var authDate = amzDate.split("T")[0];

  // we have an empty payload here because it is a GET request
  var payload = '';
  // get the SHA256 hash value for our payload
  var hashedPayload = crypto.SHA256(payload).toString();

  // create our canonical request
  var canonicalReq =  myMethod + '\n' +
                      myPath + '\n' +
                      '\n' +
                      'host:' + url + '\n' +
                      'x-amz-content-sha256:' + hashedPayload + '\n' +
                      'x-amz-date:' + amzDate + '\n' +
                      '\n' +
                      'host;x-amz-content-sha256;x-amz-date' + '\n' +
                      hashedPayload;

  // hash the canonical request
  var canonicalReqHash = crypto.SHA256(canonicalReq).toString();

  // form our String-to-Sign
  var stringToSign =  'AWS4-HMAC-SHA256\n' +
                      amzDate + '\n' +
                      authDate+'/'+region+'/'+myService+'/aws4_request\n'+
                      canonicalReqHash;

  // get our Signing Key
  var signingKey = getSignatureKey(crypto, secret_key, authDate, region, myService);

  // Sign our String-to-Sign with our Signing Key
  var authKey = crypto.HmacSHA256(stringToSign, signingKey);

  // Form our authorization header
  var authString  = 'AWS4-HMAC-SHA256 ' +
                    'Credential='+
                    access_key+'/'+
                    authDate+'/'+
                    region+'/'+
                    myService+'/aws4_request,'+
                    'SignedHeaders=host;x-amz-content-sha256;x-amz-date,'+
                    'Signature='+authKey;

  // throw our headers together
  var headers = {
    'Authorization' : authString,
    'Host' : url,
    'x-amz-date' : amzDate,
    'x-amz-content-sha256' : hashedPayload
  };

  // call our function
  performRequest(url, headers, payload, function(response) {
    // parse the response from our function and write the results to the console
    xml.parseString(response, function (err, result) {
      console.log('\n=== \n'+'Bucket is named: ' + result['ListBucketResult']['Name']);
      console.log('=== \n'+'Contents: ');
      for (var i=0;i<result['ListBucketResult']['Contents'].length;i++) {
        console.log(
          '=== \n'+
          'Name: '          + result['ListBucketResult']['Contents'][i]['Key'][0]           + '\n' +
          'Last modified: ' + result['ListBucketResult']['Contents'][i]['LastModified'][0]  + '\n' +
          'Size (bytes): '  + result['ListBucketResult']['Contents'][i]['Size'][0]          + '\n' +
          'Storage Class: ' + result['ListBucketResult']['Contents'][i]['StorageClass'][0]
        );
      }
      console.log('=== \n');
    });
  });
}

// this function gets the Signature Key, see AWS documentation for more details, this was taken from the AWS samples site
function getSignatureKey(Crypto, key, dateStamp, regionName, serviceName) {
    var kDate = Crypto.HmacSHA256(dateStamp, "AWS4" + key);
    var kRegion = Crypto.HmacSHA256(regionName, kDate);
    var kService = Crypto.HmacSHA256(serviceName, kRegion);
    var kSigning = Crypto.HmacSHA256("aws4_request", kService);
    return kSigning;
}

// this function converts the generic JS ISO8601 date format to the specific format the AWS API wants
function getAmzDate(dateStr) {
  var chars = [":","-"];
  for (var i=0;i<chars.length;i++) {
    while (dateStr.indexOf(chars[i]) != -1) {
      dateStr = dateStr.replace(chars[i],"");
    }
  }
  dateStr = dateStr.split(".")[0] + "Z";
  return dateStr;
}

// the REST API call using the Node.js 'https' module
function performRequest(endpoint, headers, data, success) {

  var dataString = data;

  var options = {
    host: endpoint,
    port: 443,
    path: '/',
    method: 'GET',
    headers: headers
  };

  var req = https.request(options, function(res) {
    res.setEncoding('utf-8');

    var responseString = '';

    res.on('data', function(data) {
      responseString += data;
    });

    res.on('end', function() {
      //console.log(responseString);
      success(responseString);
    });
  });

  req.write(dataString);
  req.end();
}

AWS Certified Solutions Architect – Associate Exam Experience

I sat my AWS CSA Associate exam today, and I’m happy to say I passed with 78%. This post will talk about my previous experience with AWS, and the resources I used to study for the exam.

A bit of history about this certification; AWS currently have a total of five generally available certifications, and three in beta:

Associate level (the easiest):

  • AWS Certified Solutions Architect – Associate – aimed I guess at Solutions Architects (pre-sales or post-sales consultants), and covering most of the core services, with a focus on designing highly available infrastructures
  • AWS Certified Developer – Associate – aimed at people developing software solutions to run on top of AWS, this looks to cover most of the same ground, but aimed more around using APIs, using the services to build applications, and some of the tools available to help developers consume services more easily
  • AWS Certified SysOps Administrator – Associate – aimed at people administering AWS environments, covering a lot of the same ground as the other two associate level exams, but focussing more on the tools available to keep things running, and troubleshooting

Specialist level (the new ones – currently beta):

  • AWS Certified Big Data – Specialty – focussing on the Big Data type services available in AWS
  • AWS Certified Advanced Networking – Specialty – focussing on the networking concepts and services in AWS
  • AWS Certified Security – Specialty – focussing on security in AWS

Professional level (the hard ones):

  • AWS Certified Solutions Architect – Professional – focussed on designing and implementing highly available systems and applications on AWS
  • AWS Certified DevOps Engineer – Professional – focussed on developing and managing distributed applications on AWS

The associate level exam blueprints talk about having hands on experience and time served with AWS; I haven’t used AWS at work, so had to start from scratch basically. Having a good understanding of how modern applications are architected, databases, Linux and Windows, and infrastructure in general will definitely help with getting your head around the concepts in play in AWS.

Free Tier Account:

The first thing you will need is a Free Tier account with AWS; this gives you 12 months of basically free services (but there are limits). You will need a credit card to sign up, but don’t worry about spending a tonne of money: you can put billing notifications in place which will tell you if you are spending any cash, and as long as you shut stuff down when you are not using it, it won’t cost you any money. Get your free tier account here. Once you have it, spend as much time as possible playing with the services, checking out all the options, and reading about what they do; this is the easiest way to understand each service.

Some video training:

This is optional I guess, but if you’re new to AWS, as I was, then it should help focus you on the things you need to pass the exam. I used the following courses:

  • Pluralsight – Nigel Poulton (@nigelpoulton) – AWS VPC Operations – I watched this course probably a year ago or more, and while I didn’t use it directly when studying for my CSA Associate exam, it did give me a broad understanding of VPCs in AWS and how things tie together, definitely worth a watch
  • A Cloud Guru – this course is widely lauded by people on social media in my experience, and the guys have done an excellent job of it, although I felt it didn’t cover some areas in as much detail as needed to pass the exam. This is available on their site (at a discount until 19/01/16), or (where I bought it from) on Udemy, where I paid £19 (I think it is £10 for January as well). I would definitely say this course is worth it, but I would look at another one as well to complement the parts lacking in the ACG course.
  • Linux Academy – the instructor for this course was not quite as good as Ryan from ACG in my opinion, but the depth and breadth of the subject matter, hands on labs, and the practice exams make this worth looking at. To top it off, if you join the Microsoft Dev Essentials program (here) you can get 3 months access to both Linux Academy and Pluralsight for free!

Books:

There is an official study guide, but it’s near enough 30 quid, and the reviews on Amazon were generally negative, so I avoided it.

Whitepapers etc:

The Exam Blueprint (here) lists a bunch of whitepapers and these were definitely worth swotting up on. In addition, the documentation for each technology should be read and understood: there really is a lot to learn here, and you need to know everything in reasonable detail in my opinion.

Other:

These flash cards were pretty good, but I think they’re generally taken from the ACG practice exams anyway.

Note taking:

I have taken a lot of notes for each of the services; I used Evernote for this, which I found very useful for summarising the key fundamentals of each service. I will likely tidy these up now the exam is out of the way, and publish them as a series of blog posts to hopefully help other exam candidates.

The Exam itself:

I was seriously stressed going into this exam; the breadth and depth of information you need to take in is pretty daunting if you only give yourself a couple of weeks to learn it as I did, and while the exam was slightly easier than I thought it would be, I still found it tough. I would suggest giving yourself 4-6 weeks of preparation if you have no AWS experience.

One gripe I have is that although I am lucky enough to have a Kryterion exam centre only 5 miles away, they only seem to do the exams once a month, which doesn’t give much flexibility. Looks to be the same in Leeds as well, so hopefully this will improve in the future.

Glad I got it done anyway, and putting myself under intense time pressure seems to have paid off. Onwards and upwards to the Certified Developer Associate exam now. I would recommend doing the exam to anyone; AWS certification is in high demand at the moment, and getting a foot on the ladder will definitely help anyone in the IT industry over the next few years. I hope to go on and do as many of these exams as I can, because I genuinely find what AWS are doing, and the way the services are implemented, to be interesting.