PowerCLI Core on Docker with macOS

Back in November, shortly after Microsoft’s open-sourcing of PowerShell, and subsequent cross-platform PowerShell Core release, VMware released their cross-platform version of PowerCLI: PowerCLI Core. This was made available on GitHub for general consumption, and can be installed on top of PowerShell Core on a macOS/OSX or Linux machine.

I have loads of junk installed on my MacBook, including the very first public release of PowerShell Core, and keeping it all up to date, and knowing what I have installed, can be a pain. So my preference for running PowerShell Core, or PowerCLI Core, at the moment is through a Docker container on my laptop; this keeps the clutter down and makes upgrading easy.

In this post I’m going to show how to use the Docker container with an external editor, and be able to run all your scripts from within the container, removing the need to install PowerShell Core on your Mac.

Pre-requisites

We will need to install a couple of bits of software before we begin:

  • Docker CE (Docker for Mac) – used to pull and run the PowerCLI Core container image
  • An external editor of your choice – used to edit the scripts outside the container

Beyond that, we will get everything we need below.

Getting the Docker image

VMware have made the PowerCLI Core Docker image available on Docker Hub, which is the general go-to place for public container images today, and the easiest place to pull images from to your desktop. Once Docker CE is installed, the image can be downloaded with the command below:

Icarus:~ root$ docker pull vmware/powerclicore:latest
latest: Pulling from vmware/powerclicore
93b3dcee11d6: Already exists
d6641ceee635: Pull complete
62bbcce52faa: Pull complete
e86aa7a78685: Pull complete
db20fbdf24c0: Pull complete
37379feb8f29: Pull complete
8abb449d1e29: Pull complete
a9cd6d9452e7: Pull complete
50886ff01a73: Pull complete
74af7eaa49c1: Pull complete
878c611eaf2c: Pull complete
39b1b7978191: Pull complete
98e632013bea: Pull complete
4362432cb5ea: Pull complete
19f5f892ae79: Pull complete
29b0b093b159: Pull complete
913ad6409b89: Pull complete
ad5db0a55033: Pull complete
Digest: sha256:d33ac26c0c704a7aa48f5c7c66cb76ec3959beda2962ccd6a41a96351055b5d0
Status: Downloaded newer image for vmware/powerclicore:latest
Icarus:~ root$

This may take a couple of minutes, but the image should now be present on the local machine, and ready to fire up.
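
If you want to confirm the image is now available locally, a quick listing will show it (the prompt here just mirrors the ones above):

Icarus:~ root$ docker images vmware/powerclicore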

Getting the path for our scripts folder

Before we launch our container we need the path to our scripts folder. This could be a folder anywhere on your computer; in my case it is:

/Users/tim/Dropbox/Coding Projects/PowerShell/VMware

Launching our container

The idea here is to mount a folder which is accessible from both inside and outside our container, so we can edit the scripts with our full fat editor, and run them from inside the container.

To launch the container, we use the following command; the switches used are explained below:

docker run --name PowerCLI --detach -it --rm --volume '/Users/tim/Dropbox/Coding Projects/PowerShell/VMware':/usr/scripts vmware/powerclicore:latest
  • --name – this sets the container name, which will make it easier when we want to attach to the container
  • --detach – this starts the container without attaching us to it immediately, so we can do anything else we need to before connecting
  • -it – this creates an interactive TTY connection, giving us the ability to interact with the console of the container
  • --rm – this will delete the container when we exit it, which keeps the processes on our machine tidy
  • --volume … – this maps our scripts folder to /usr/scripts, so we can consume our scripts once in the container
  • vmware/powerclicore:latest – the name of the image to launch the container from

Now when we run this we will see the following output:

Icarus:~ root$ docker run --name PowerCLI --detach -it --rm --volume '/Users/tim/Dropbox/Coding Projects/PowerShell/VMware':/usr/scripts vmware/powerclicore:latest
c48ff51e3f824177da8e3b0fd0210e5864b01fea94ae5f5871b3654b4f5bcd35
Icarus:~ root$

This is the container ID; we won't need it, as we will attach using the friendly name for our container anyway. When you are ready to attach, use the following command:

Icarus:~ root$ docker attach PowerCLI

You may need to press return a couple of times, but you should now have a shell that looks like this:

PS /powershell>

Now we are in the container, and should be able to access our scripts by changing directory to /usr/scripts.

If we run ‘Get-Module -ListAvailable’ we can see the modules installed in this Docker image:

PS /powershell> Get-Module -ListAvailable                                                                               

    Directory: /root/.local/share/powershell/Modules

ModuleType Version    Name                                ExportedCommands
---------- -------    ----                                ----------------
Binary     1.21       PowerCLI.Vds
Binary     1.21       PowerCLI.ViCore                     HookGetViewAutoCompleter
Script     2.1.0      PowerNSX                            {Add-XmlElement, Format-Xml, Invoke-NsxRestMethod, Invoke-...
Script     2.0.0      PowervRA                            {Add-vRAPrincipalToTenantRole, Add-vRAReservationNetwork, ...

    Directory: /opt/microsoft/powershell/6.0.0-alpha.14/Modules

ModuleType Version    Name                                ExportedCommands
---------- -------    ----                                ----------------
Manifest   1.0.1.0    Microsoft.PowerShell.Archive        {Compress-Archive, Expand-Archive}
Manifest   3.0.0.0    Microsoft.PowerShell.Host           {Start-Transcript, Stop-Transcript}
Manifest   3.1.0.0    Microsoft.PowerShell.Management     {Add-Content, Clear-Content, Clear-ItemProperty, Join-Path...
Manifest   3.0.0.0    Microsoft.PowerShell.Security       {Get-Credential, Get-ExecutionPolicy, Set-ExecutionPolicy,...
Manifest   3.1.0.0    Microsoft.PowerShell.Utility        {Format-List, Format-Custom, Format-Table, Format-Wide...}
Script     1.1.2.0    PackageManagement                   {Find-Package, Get-Package, Get-PackageProvider, Get-Packa...
Script     3.3.9      Pester                              {Describe, Context, It, Should...}
Script     1.1.2.0    PowerShellGet                       {Install-Module, Find-Module, Save-Module, Update-Module...}
Script     0.0        PSDesiredStateConfiguration         {IsHiddenResource, StrongConnect, Write-MetaConfigFile, Ge...
Script     1.2        PSReadLine                          {Get-PSReadlineKeyHandler, Set-PSReadlineKeyHandler, Remov...

PS /powershell>

So we have the PowerCLI Core module, the Distributed vSwitch module, as well as PowervRA and PowerNSX. We should be able to run our scripts from the /usr/scripts folder, or just run stuff from scratch.

The great thing is we can now edit our scripts in the folder mapped to /usr/scripts using our editor, the changes are available live to test inside the container, and we can even write output to this folder from within the container.
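
As a minimal sketch of the workflow (the vCenter name and script name below are placeholders for your own), once attached to the container you can run something like this:

cd /usr/scripts
# PowerCLI Core will not trust self-signed certificates by default, so relax this for lab use
Set-PowerCLIConfiguration -InvalidCertificateAction Ignore -Confirm:$false
Connect-VIServer -Server vcenter.lab.local -Credential (Get-Credential)
# run a script straight out of the mapped folder
./MyReport.ps1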

If you want to detach from the container without killing it, use the Ctrl+P, Ctrl+Q key sequence; you can then reattach with 'docker attach PowerCLI'. When you are done with the container, type 'exit' and it will quit and be removed.

Conclusion

Though basic, this can really help with workflow when writing and testing scripts on macOS, while enabling you to simply keep up to date with the latest images, and not fill your Mac with more junk.

vRealize Orchestrator and Site Recovery Manager – The Missing Parts (or how to hack SOAP APIs to get what you want)

vRealize Orchestrator (vRO) forms the backbone of vRealize Automation (vRA), and provides the XaaS (Anything-as-a-Service) functionality for this product. vRO has plugins for a number of technologies; both those made by VMware, and those which are not. Having been using vRO to automate various products for the last 6 months or so, I have found that these plugins have varying degrees of quality, and some cover more functionality of the underlying product than others.

Over the last couple of weeks, I have been looking at the Site Recovery Manager (SRM) plugin (specifically version 6.1.1, in association with vRO 7.0.1, and SRM 6.1), and while this provides some of the basic functionality of SRM, it is missing some key features which I needed to expose in order to provide full-featured vRA catalog services. Specifically, the plugin documentation lists the following as being missing:

  • You cannot create, edit, or delete recovery plans.
  • You cannot add or remove test network mapping to a recovery plan.
  • You cannot rescan storage to discover newly added replicated devices.
  • You cannot delete folder, network, and resource pool mappings.
  • You cannot delete protection groups.
  • The unassociateVms and unprotectVms methods are not available in the plug-in. You can use them by using the Site Recovery Manager public API.

Some of these are annoying, but the last ones, around removing VMs from Protection Groups, are pretty crucial for the catalog services I was looking to put together. I had to find another way to do this task, outside of the hamstrung plugin.

I dug out the SRM API Developers Guide and had a read through it, but whilst it describes the API in terms of Java and C# access, it wasn't much use in helping me use vRO's JavaScript-based programming to do what I needed. So I needed another way to do this, one which utilised the native SOAP API presented by SRM.

Another issue I saw when using the vRO SRM plugin was that when trying to add a second SRM server (the Recovery site), the plugin fell apart. It seems that the general idea is you only automate your Protected site with this plugin, and not both sites through a single vRO instance.

I tried adding a SOAP host to vRO using the ‘Add a SOAP host’ workflow, but even after adding the WSDL available on the SRM API interface, this was still not particularly friendly, so this didn’t help too much.

Using PowerCLI, we can do some useful things with the SRM API; see this post, and this GitHub repo, for some help with doing this. Our general approach with vRO is to avoid using a PowerShell host, as this adds a bunch of complexity around adding a host, and generally we would rather do things using REST hosts with pure JavaScript code. So we need a way to figure out how to use this undocumented SOAP API to do what we need.

Now before we go on, I appreciate that the API is subject to change, and that the calls we build with the following method may need updating in a future version of SRM. As you will see, this is a fairly simple method of getting what you need, and it should be easy enough to refactor the payloads we are using if and when the API changes. In addition, this method should work for any kind of SOAP or REST based API which you can access through .NET type objects in PowerShell.

So the first thing we need to do is install Fiddler. This is the easiest tool I found to get what I wanted; there are probably other products about, but I found and liked this one. Fiddler is a web debugging tool which I would imagine a lot of web developers are familiar with. What I like about it is the simplicity it gives in setting up a man-in-the-middle (MitM) attack to pull the detail of what is going on. This is particularly useful when using it with PowerShell, because your client machine is the endpoint, so the proxy injection is straightforward without too much messing about.

NOTE: Because this is doing MitM attacks on the traffic, it is decrypting your HTTPS sessions, credentials and all, so only use it against lab or test systems that you are allowed to inspect.

I'm not going to go into installing Fiddler here; it's a standard Windows wizard. Once installed, launch the program and you will see the main Fiddler window.

If you click in the bottom right, next to 'All Processes', you will see it change to 'Capturing'.

We are now ready to start capturing some API calls, so open PowerShell. To limit the amount of junk traffic we capture, we can set Fiddler to keep only a certain number of sessions (in this case I set it to 1000), and target the process to capture from (by dragging the 'Any Process' button to our PowerShell window).

Run the following to connect to vCenter:

Import-Module VMware.VimAutomation.Core
Connect-VIServer -Server $vcenter -Credential (Get-Credential)

You should see some captures appearing in the Fiddler window; we can ignore these for now as they are just connections to the vCenter server.

You can inspect this traffic in any case, by selecting a session, and selecting the ‘Raw’ tab in the right hand pane:

Here we can see the URI (https://<redacted>/sdk), the SOAP method (POST), the headers (User-Agent, Content-Type, SOAPAction, Host, Cookie etc), and the body (<?xml version….); this shows us exactly what the PowerShell client is doing to talk to the API.

Now we can connect to our local and remote SRM sites using the following command:

$srm = connect-srmserver -RemoteCredential (Get-Credential -Message 'Remote Site Credential') -Credential (Get-Credential -Message 'Local Site Credential')

If you examine the sessions in your Fiddler window now, you should see a session which looks like this:

This shows the URI as our SRM server, on HTTPS port 9086, with the suffix '/vcdr/extapi/sdk'; this is the URI we use for all the SRM SOAP calls. It also shows the body we use (which contains usernames and passwords for both sites), and the response with a 'Set-Cookie' header containing a session ticket. This session ticket will be added as a header to each of our following calls to the SOAP API.

Let's try and do something with the API through PowerShell now, and see what the response looks like. Run the following in your PowerShell window:

$srmApi = $srm.ExtensionData
$protectionGroups= $srmApi.Protection.ListProtectionGroups()

This session will show us the following:

Here we can see the URI is the same as earlier, and that there is a header named 'Cookie' with a value of 'vmware_soap_session="d8ba0e7de00ae1831b253341685201b2f3b29a66"', which ties in with the cookie returned by the last call. The call has returned us some ManagedObjectReference (MoRef) names, 'srm-vm-protection-group-1172' and 'srm-vm-protection-group-1823', which represent our Protection Groups. This is great, but how do we tie these back to the Protection Group names we set in SRM? Well, run the following in our PowerShell window, and look at the output:

foreach ($pg in $protectionGroups) { Write-Output $($pg.MoRef.Value + " is equal to " + $pg.GetInfo().Name) }

The responses in Fiddler look like this:

This shows us a query being sent, with the Protection Group MoRef, and the returned Protection Group name.

We can repeat this process for any of the methods available through the SRM API exposed in PowerCLI, build up a list of the bodies used for querying and retrieving data, and use this to build a library of actions. As an example, we have the following methods already:

Query for Protection Groups:

<?xml version="1.0" encoding="utf-8"?><soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema"><soap:Body><ListProtectionGroups xmlns="urn:srm0"><_this type="SrmProtection">SrmProtection</_this></ListProtectionGroups></soap:Body></soap:Envelope>

Get the name of a Protection Group from its MoRef:

<?xml version="1.0" encoding="utf-8"?><soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema"><soap:Body><GetInfo xmlns="urn:srm0"><_this type="SrmProtectionGroup">MOREFNAME</_this></GetInfo></soap:Body></soap:Envelope>

So how do we take these and turn them into actions in vRO? Well, we first need to add a REST host to vRO using the 'Add a REST host' built-in workflow, pointing to 'https://<SRM_Server_IP>:9086', and then write actions to make calls against it; there is more detail on doing this around on the web, and this site has a good example. For the authentication method we can do:

// let's set up our variables first, these could be pushed in through parameters on the action, which would make more sense, but keep it simple for now

var localUsername = "administrator@vsphere.local"

var localPassword = "VMware1!"

var remoteUsername = "administrator@vsphere.local"

var remotePassword = "VMware1!"

 

// We need our XML body to send

var content = '<?xml version="1.0" encoding="utf-8"?><soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema"><soap:Body><SrmLoginSites xmlns="urn:srm0"><_this type="SrmServiceInstance">SrmServiceInstance</_this><username>'+localUsername+'</username><password>'+localPassword+'</password><remoteUsername>'+remoteUsername+'</remoteUsername><remotePassword>'+remotePassword+'</remotePassword></SrmLoginSites></soap:Body></soap:Envelope>';

 

// create the session request

var SessionRequest = RestHost.createRequest("POST", "/vcdr/extapi/sdk", content);

// set the headers we saw on the request through Fiddler

SessionRequest.setHeader("SOAPAction","urn:srm0/4.0");

SessionRequest.setHeader("Content-Type","text/xml; charset=utf-8");

var SessionResponse = SessionRequest.execute();

 

// show the content

System.log("Session Response: " + SessionResponse.contentAsString);

 

// take the response and turn it into a string

var XmlContent = SessionResponse.contentAsString;

 

// get the headers

var responseHeaders = SessionResponse.getAllHeaders();

 

// and just the one we want

var token = responseHeaders.get("Set-Cookie");

 

// log the token we got

System.log("Token: " + token);

 

// return our token

return token

This will return us the token we can use for doing calls against the API. Now how do we use that to return a list of Protection Groups:

// We need our XML body to send; this just queries for the Protection Group MoRefs
var content = '<?xml version="1.0" encoding="utf-8"?><soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema"><soap:Body><ListProtectionGroups xmlns="urn:srm0"><_this type="SrmProtection">SrmProtection</_this></ListProtectionGroups></soap:Body></soap:Envelope>';

// create the session request
var SessionRequest = RestHost.createRequest("POST", "/vcdr/extapi/sdk", content);

// set the headers we saw on the request through Fiddler, including the session cookie
SessionRequest.setHeader("SOAPAction","urn:srm0/4.0");
SessionRequest.setHeader("Content-Type","text/xml; charset=utf-8");
SessionRequest.setHeader("Cookie",token);
var SessionResponse = SessionRequest.execute();

// show the content
System.log("Session Response: " + SessionResponse.contentAsString);

// parse the response into an XML document
var XmlContent = XMLManager.fromString(SessionResponse.contentAsString);

// let's get the Protection Group MoRefs from the response
var PGMoRefs = XmlContent.getElementsByTagName("returnval");

// declare an array of Protection Group names to return
var returnedPGs = [];

// iterate through each Protection Group MoRef
for (var index = 0; index < PGMoRefs.getLength(); index++) {
    // extract the actual MoRef value
    var thisMoRef = PGMoRefs.item(index).textContent;
    // and insert it into the body of the new call
    content = '<?xml version="1.0" encoding="utf-8"?><soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema"><soap:Body><GetInfo xmlns="urn:srm0"><_this type="SrmProtectionGroup">'+thisMoRef+'</_this></GetInfo></soap:Body></soap:Envelope>';
    // do another call to the API to get the Protection Group name
    SessionRequest = RestHost.createRequest("POST", "/vcdr/extapi/sdk", content);
    SessionRequest.setHeader("SOAPAction","urn:srm0/4.0");
    SessionRequest.setHeader("Content-Type","text/xml; charset=utf-8");
    SessionRequest.setHeader("Cookie",token);
    SessionResponse = SessionRequest.execute();
    var PGInfo = XMLManager.fromString(SessionResponse.contentAsString);
    returnedPGs.push(PGInfo.getElementsByTagName("name").item(0).textContent);
}

// return our Protection Group names
return returnedPGs;

Through building actions like this, we can build up a library to call the API directly. This should be a good starting point for building your own libraries for vRO to interact with SRM via the API, rather than the plugin. As stated earlier, Fiddler, or something like it, should let you capture anything being done through PowerShell, and I have even had some success capturing browser clicks through this method, depending on how the web interface is configured. This method certainly made creating some integration with SRM through vRO less painful than trying to use the plugin.

PowerShell – Could not create SSL/TLS secure channel

I have spent a considerable amount of time in my life battling with the above error message when running PowerShell scripts. The long and short of it is that this can be caused by a few things, but most of the time I have experienced it, the reason is that the endpoint you are trying to connect to is using self-signed certificates, which causes the Invoke-WebRequest and Invoke-RestMethod commands to throw an error stating:

The underlying connection was closed: Could not establish trust relationship for the SSL/TLS secure channel.

If you hit this, you will know as your web request via standard REST methods will simply refuse to give you anything back.

I had a bunch of scripts written to automate the configuration of vRealize Orchestrator and vRealize Automation 7.0, and these had been heavily tested and confirmed as working. The way to avoid the above error is to use the following PowerShell function:

function Ignore-SelfSignedCerts
{
    try
    {
        Write-Host "Adding TrustAllCertsPolicy type." -ForegroundColor White
        Add-Type -TypeDefinition @"
using System.Net;
using System.Security.Cryptography.X509Certificates;
public class TrustAllCertsPolicy : ICertificatePolicy
{
    public bool CheckValidationResult(
        ServicePoint srvPoint, X509Certificate certificate,
        WebRequest request, int certificateProblem)
    {
        return true;
    }
}
"@
        Write-Host "TrustAllCertsPolicy type added." -ForegroundColor White
    }
    catch
    {
        Write-Host $_ -ForegroundColor "Yellow"
    }
    [System.Net.ServicePointManager]::CertificatePolicy = New-Object TrustAllCertsPolicy
}
Ignore-SelfSignedCerts;

So not a great start to my Sunday when I found that my scripts no longer worked after a fresh install of the recently released vRealize Orchestrator and vRealize Automation 7.0.1.

After much messing about, I worked out the cause: SSLv3 and TLSv1.0 were both disabled in the new releases. As a result, we need to either:

a) Enable SSLv3 or TLSv1.0 – probably not the best idea; these have been disabled due to the growing number of security risks in these protocols, and will (presumably) continue to be disabled in every new version of the products going forward

b) Change the way we issue requests, to use TLSv1.2 – this is the way to do it in my opinion, and the code to do this is a simple one-liner:

[System.Net.ServicePointManager]::SecurityProtocol = [System.Net.SecurityProtocolType]::Tls12;
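
As a quick sketch of how this fits into a script (the $uri and $header variables here are placeholders for your own), set the protocol once near the top, then issue your calls as normal:

[System.Net.ServicePointManager]::SecurityProtocol = [System.Net.SecurityProtocolType]::Tls12;
# calls made after this point in the session will negotiate TLS 1.2
$response = Invoke-RestMethod -Uri $uri -Method Get -Headers $header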

So if you hit this problem (and if you are a PowerShell scripter interacting with REST APIs from your scripts, then you probably will!), then this is how to fix the issue.

vSphere PowerCLI 6.3 Release 1 – in the wild…

Yesterday VMware released PowerCLI 6.3 Release 1, following the fairly exciting release of new products across the VMware portfolio, including:

  • vSphere 6.0 Update 2
  • vRealize Automation 7.0.1
  • vCloud Director 8.0.1

While I am not rushing to update to the new version of vCenter or ESXi in production environments, upgrading to this latest version of PowerCLI is far less risky, so I immediately upgraded and checked out the new features.

The latest release adds the following new support:

  • Support for Windows 10 and PowerShell 5.0 – as a Windows 10 user (for my personal laptop and home PC at least), this is a welcome addition. Windows Server 2016 is just around the corner as well, so this should ensure that PowerCLI 6.3 R1 works there too. I have not seen any problems running the previous version of PowerCLI on my Windows 10 machines, but at least this is officially tested and supported now
  • Support for vCloud Director 8.0 – VMware are driving vCD forward again, so if you are using the latest versions, and use PowerCLI to help make your life easier (and if you’re not, then why not?), this will be a welcome addition
  • Support for vRealize Operations Manager 6.2 – there are still only 12 cmdlets available in the VMware.VimAutomation.vROps module, but this bumps up support for the latest version anyway

And adds the following new features:

  • Added Content Library support – I haven't really got into the whole Content Library thing just yet, but this feature was introduced in vSphere 6.0, and was previously only automatable through the new vSphere REST API. This release of PowerCLI includes cmdlets to let you work with the Content Library; I will probably do a follow-up post on configuring it at a later date
  • Get-EsxCli functionality updated – for those that don't know, Get-EsxCli lets you run esxcli commands via PowerShell on a target host. This is useful for certain things which are not really possible through the standard PowerCLI host management cmdlets. This release brings in advanced functionality in this area (see the short example after this list)
  • Get-VM command – this command has been streamlined to more quickly return results, which should help in larger environments
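
As a quick illustration of the updated Get-EsxCli interface (a hedged sketch of the newer -V2 style, assuming your PowerCLI version supports it; the host name is a placeholder):

$esxcli = Get-EsxCli -VMHost (Get-VMHost "esx01.lab.local") -V2
# list the physical NICs on the host via esxcli
$esxcli.network.nic.list.Invoke()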

So all in all, some minor improvements, some new features, and some updates to support for newer VMware products. A solid release which will keep PowerCLI relevant as a tool in a vSphere admin’s arsenal. If you’re not already using PowerCLI, then get on the bandwagon, there are some great books and videos out there, and a fantastic community to help you along.

Getting started with vRealize APIs

I have been doing automation with VMware products for a while, mostly using PowerCLI, which I have blogged about in the past. In the last few weeks I have started configuring and working with vRealize Orchestrator and vRealize Automation, and the hands-off automation in these products is carried out using their respective REST APIs.

REST is something I've pretty much had zero experience of, so I have had to pick it up from nothing and figure out how to work in this fashion. My plan is to do a series of blog posts on how to get started with these interfaces, configuring and working with these products, with tips on where to start. I've had some great support from my fellow automation people at work, but I think I have a relatively good grip on how to do things now.

REST stands for 'Representational State Transfer' and uses the HTTP protocol as its basic interface. With this being a standard way to do things now, and with ports 80 and 443 (and the security around using them) generally being well understood, you should not have any issues with applications requiring complex numbers of ports to be open to your application servers. REST is becoming somewhat ubiquitous now, with seemingly every new piece of hardware and software being released coming with its own REST interface.

This is great for people wanting to automate, because we do not need any special software to make REST calls, and we can access the API from pretty much any OS and from anywhere with HTTP/HTTPS access to the endpoint. A REST call basically consists of the following components:

  • URI (Uniform Resource Identifier) – basically a URL which exposes a function of the API; if you send a correctly crafted message to this address then you will get a standard HTTP response which should indicate whether your call was successful.
  • Body – this is also known as the payload. This contains instructions on what we want to do, and will usually (in my experience) be in either JSON (JavaScript Object Notation) or XML (eXtensible Markup Language). These are both structured text formats which allow you to define the parameters for what you want to do. If you are sending file type data then the body may also be in multipart MIME format.
  • Headers – these carry information about the message you are sending, such as data type, authentication information, and desired response type, and are carried as a hash table of names and values.
  • Method – REST uses standard HTTP methods, and which method you use will depend on the API and function you are accessing (see the short sketch after this list). Primarily this will probably be one of the following:
    • GET – this makes no changes, and is used for retrieving information. When you access a web page, the HTTP request uses the GET method, and by using this method with an API, you can generally expect to be returned an XML/JSON response with information about how a component is configured.
    • POST – this is used for sending information to an API to be processed. Again, you should expect a return code for this, and even if this is an error, there should (depending on the call, and the API) be something meaningful telling you what went wrong
    • PUT – often interchangeable with POST, this is generally used for replacing configuration in its entirety. Again you can reasonably expect some feedback when using this method.
    • DELETE – removes configuration from a specific element or component.
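
As a quick sketch of what these look like from PowerShell (the $uri, $header, and $body variables here are placeholders), the method is just a parameter on the call:

Invoke-RestMethod -Uri $uri -Method Get -Headers $header
Invoke-RestMethod -Uri $uri -Method Post -Headers $header -Body $body -ContentType "application/json"
Invoke-RestMethod -Uri $uri -Method Put -Headers $header -Body $body -ContentType "application/json"
Invoke-RestMethod -Uri $uri -Method Delete -Headers $header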

The exact way in which the call is sent depends on the API being used, and this is where good API documentation is of crucial importance, because if the documentation is lacking, or just wrong (as is the case much more often than it should be), then getting the call to succeed can be challenging.

To get started you will need a tool which can craft calls for you, and show the response received. When starting out, it is easiest for this to be a graphical tool which gives instant and meaningful feedback on the response, although this is not great for doing things in an automated and programmable fashion, this does make things simpler, and makes the learning experience easier.

If you use Chrome or Firefox then you should be able to easily find some REST clients, and it may well be worth trying a few different ones until you find one which works best for you. Postman was recommended to me, and this has a nice graphical UI which will do code highlighting on the response, and will help you to build headers and the like.

Ultimately, if you are looking to automate, then you will be using a built in way of sending REST calls from your chosen automation language. My experience of doing this is either from BASH scripts (I know, I know) where I use curl, or through PowerShell, where you have the Invoke-RestMethod or Invoke-WebRequest cmdlets.

When using these kinds of tools (or indeed, doing any kind of programmatic REST call), you will need to form the header yourself. As I said above, this is basically a hash table of key-value pairs.
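
In PowerShell that header is just a hashtable; something like this (the Base64-encoded credential, $EncodedPassword, is built in the full example further down):

$header = @{
    "Authorization" = "Basic $EncodedPassword"
    "Accept"        = "application/json"
}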

So let’s do an example REST call. For the example below, I will be using the vRealize Orchestrator 7.0 API. This is a pretty new release, but it can be downloaded from the VMware website, and is deployed from OVA, so should be quick to download and spin up. The new version has two distinct APIs: one for the new Orchestrator Control Center, which is used to configure the appliance, and one for the vRO application itself, used for orchestration of a vSphere (and beyond!) environment. I will show this using both PowerShell code, and the Postman GUI.

vRealize Orchestrator has a fairly straightforward API. You can access the documentation by opening your browser and going to 'https://<vro_ip>:8281/vco/api/docs', where you will be presented with the API documentation screen.

From here we can explore any area exposed by the API. For the sake of this example we're going to do something simple: we are going to return a list of the available workflows in the vRO system. So select 'Workflow Service', and click 'GET /workflows', and we can see a bit of information about the REST call to list workflows.

This being a 'GET' call, we don't see a lot here, but we will run the call anyway and see what we get back; in later articles we will go through changing configuration. First we will make the call using PowerShell. The script is as follows, and is pretty simple:

# Ignore SSL certificates
# NOTE: This should not really be required, if you are using
# proper certificates, and have working DNS
Add-Type @"
using System.Net;
using System.Security.Cryptography.X509Certificates;
public class TrustAllCertsPolicy : ICertificatePolicy {
    public bool CheckValidationResult(
        ServicePoint srvPoint, X509Certificate certificate,
        WebRequest request, int certificateProblem) {
        return true;
    }
}
"@
[System.Net.ServicePointManager]::CertificatePolicy = New-Object TrustAllCertsPolicy
# Define our credentials
$user_name = "vcoadmin";
$password = "vcoadmin";
$vro_server_name = "192.168.1.201"
# Convert the credentials to a format we can use in the header.
$auth = $user_name + ":" + $password;
$Encoded = [System.Text.Encoding]::UTF8.GetBytes($auth);
$EncodedPassword = [System.Convert]::ToBase64String($Encoded)
$header = @{"Authorization" = "Basic $($EncodedPassword)"};
# Define the URI for our REST call
$vro_getplugin_uri = "https://" + $vro_server_name + ":8281/vco/api/plugins"
# Run the call and store the result in a variable
$get_plugins = Invoke-WebRequest -Uri $vro_getplugin_uri -Method Get -Headers $header -ContentType "application/json"

This returns the result of the call to a variable; unfortunately, although this is in JSON format, it will be horribly formatted if you try to output it.

If we run:

$get_plugins.content | ConvertFrom-Json | ConvertTo-Json

Then the built-in JSON formatting will reformat it to make it readable.

This gives us a readable list that we can refer to, filter on, and do all the cool stuff that PowerShell makes simple.
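
From here it is easy to explore and filter the converted objects; a trivial example (making no assumptions about the exact structure of your response) is to list the properties PowerShell has created:

$plugins = $get_plugins.Content | ConvertFrom-Json
$plugins | Get-Member -MemberType NoteProperty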

We will now look at doing the same thing with Postman. First you will need to install the Postman app from the Chrome Web Store, then open it.

We know our credentials and URI, so we’re just going to go ahead and step through making the request. First, enter the URI in the box:

Now we need to add our authentication method, so change ‘No Auth’ to ‘Basic Auth’, enter the credentials for the ‘vcoadmin’ user, and click ‘Update request’:

You will notice that 'Headers (0)' changed to 'Headers (1)'; if you click on that you can see the header for the request, with the encoded credential.

Now click 'Send' and our REST call will be submitted; the same JSON we got earlier will be nicely formatted and displayed in the lower pane.

And that's a REST call completed, with the result returned. We now know how to do it using PowerShell, for programmatic calls, and with Postman, for exploring and understanding APIs.

Automating UCS System Builds

A project I have been working on recently is the full automation of the Cisco UCS system build. Automation comes from a desire to not sit clicking buttons; that’s a great way to learn how a system fits together, but once you have done it once or twice, it is no longer exciting. A look at the FlexPod Cisco Validated Design (CVD) shows around 50 pages of configuration steps. This is largely the same, regardless of the client, and will take at least 3 hours of clicking buttons. 

This makes it a prime candidate for automation. There are a few options available for automating this task:

  • Cisco PowerTool for UCS
  • Python API for Cisco UCS
  • Altering of XML backups and restoration onto the UCS system

Altering an exported XML configuration backup to fit the customer's solution works, but it is not particularly neat or nice, and means replacing all of the system-specific information, either through copy/paste or through a tidier solution that parses the XML and replaces elements. This is not something I really want to do, and it does not leave us with a particularly customisable solution.

The Python API is extensive, and has a tonne of documentation around it. I have run through the Codecademy course for Python, and understand the basics, but I come from a Windows administrative background, and I am not comfortable enough that I want to do this from scratch for UCS. This is something to put down for the future, as my Python knowledge grows. The great advantage of using Python is that it is platform agnostic, so I could run this from a Mac, Linux, or Windows environment (as long as I have the Python packages installed). Sounds great, but the documentation from Cisco around Managed Objects melts my brain, so this is something I discounted for now.

Luckily, Cisco have done a fantastic job with their PowerShell library for Cisco UCS. This is simple to get started with, and the great thing about scripting with PowerShell is that anyone can read it and figure out what the script is doing. As an infrastructure engineer with a background in PowerShell and VMware's equally excellent PowerCLI module, this was the natural fit for me.

So where did I, and should you (in my opinion), start with automating your Cisco UCS builds?

The first element of this I have already mentioned: get Cisco PowerTool. This is available for download from Cisco's site, and once you have the MSI installed, you can launch your PowerShell console from Windows and run 'Import-Module CiscoUcsPS' to import the module. You now have the power at your fingertips.

The next good place to start, whether you know a lot about UCS or not, is to get the UCS Platform Emulator, and get this deployed. This comes as an OVA appliance, which you can deploy to VMware Player, VMware Workstation, ESXi, or any other compatible Type 1 or Type 2 hypervisor. Once this has booted, go to the console and give it an IP (if it didn’t already get one from your DHCP server).

It should be noted that there are currently 3 active versions of the UCSPE available: one for UCS standard, one for UCS M-series servers, and one for UCS Mini. Make sure you get the right one for what you are wanting to automate. In this example we are looking at standard UCS, so grab that one, and spin it up.

Now we have the PE up and IP’d, so open your web browser and go to the IP. Here you can amend the hardware configuration to match your environment if that is useful. Most of the automation you will be doing is creating policies, configuring the Fabric Interconnects, creating service profiles, so the exact hardware configuration is not hugely important, and customising the hardware in here can be quite time consuming and frustrating as the interface is clunky and unintuitive.

So now we have our Platform Emulator stood up, we can connect to it using PowerTool. Open a PowerTool window and enter 'Connect-Ucs <IP>'; you will be prompted for credentials. The default on the Platform Emulator is admin/admin, so enter this and you are good to go.

There are hundreds of cmdlets in the UCS PowerTool module, I am not going to go through them here, but I will show a couple of tactics for, firstly finding what you need in the set of cmdlets, and secondly, for letting PowerTool do the heavy lifting, and produce the script for the build for you.

So let's start with looking for commands in the PowerTool module. We can run 'Get-Command -Module CiscoUcsPS', which will give us a list of all the commands in the module. This is a great place to start, and in general the cmdlets are named fairly sensibly, although due to the way PowerTool was converted from the Python API model, some of these are pretty long.

If we want to search for something more specific, we can use ‘Get-Command -Module CiscoUcsPS *blade*’, for example, to search for all commands in this module with the word ‘blade’ in them. This narrows the search to something you might actually want to do at that time.

Once you have located the cmdlet you want, in this case 'Get-UcsBlade', which lists all the blade servers installed in your UCS environment, you can get more information about it by running 'Get-Help Get-UcsBlade' followed by one of three optional parameters: '-full', '-detailed' or '-examples'. If you run this with no parameters, you will get the basic description and the syntax of the command. If you go for '-examples' you will get examples of the command usage. Entering '-detailed' will go into details on each parameter: what data type they are, whether they are mandatory or optional, basically a tonne of information you probably won't need (although it's useful to know this is there when you need it), and '-full' will show you the combination of all three of these.

One thing to be aware of is that different vendors have different qualities of PowerShell modules. I am a fairly heavy user of the Cisco, VMware, NetApp, and core Microsoft cmdlets. The VMware ones tend to be very well documented, well presented, and have a tonne of options, while not always providing the specific commands you may need (although using 'Get-View' we can tap into anywhere in the API, a story for another day I think). The NetApp commands have excellent examples, but the general documentation and community support is thinner, and there are quite a few absent commands which mean you can't always do what you need to do. The Cisco PowerTool pack has a huge breadth of available commands, which means you can do pretty much anything in there, but they don't always have examples in the help for the cmdlets, and some of the detailed help is lacking description, so success with some commands is left to your trial and error.

So now you can find the commands you need, and find out how to use them. Because PowerShell is an interactive shell, we can run single commands straight from the prompt, so if you run 'Get-UcsBlade' while connected to your PE, you will see the list of blade servers attached to the system. This list will likely be a few pages, so you can trim it down to a table by entering 'Get-UcsBlade | Format-Table', or 'Get-UcsBlade | ft' for short, which will present the same output with only a few select fields in a nice small table. There are a load of ways of playing with this output to customise it for what you need. I'm not going to go into that here, but suffice to say this is a good way to get information out of your UCS system.
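
As a small illustration of trimming the output down (the property names can vary slightly between PowerTool versions, so check 'Get-UcsBlade | Get-Member' on your own system first):

Get-UcsBlade | Select-Object Dn, Model, Serial, OperState, TotalMemory | Format-Table -AutoSize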

By using the commands, you can build your UCS system piece by piece. This is going to take you a while as you get used to the nuances of PowerTool, and if you have the patience for this then great, you will be far stronger at the end of it, but when I started I used a different method, which I will now describe.

One of the great tools we have in PowerTool is a cmdlet called 'ConvertTo-UcsCmdlet'. This is a life saver, as it lets you automate without really needing to know how to write PowerShell scripts. It works like this.

Once you are connected to your PE through PowerTool (using Connect-Ucs <IP>), you enter the ‘ConvertTo-UcsCmdlet’ command, and the command prompt will disappear. If you need to get out of this mode, just press Ctrl+C, but just leave it for now.

Open up your UCS Manager through your PE's web front end, and log in. Now go and create something simple, say a Network Control Policy, click OK to save your new policy, and go back to your PowerShell window.

You should see what our magical cmdlet has done: it should have dumped the PowerTool commands for what you just did into the window. Now you can copy and paste this into your favourite text editor, and voila, you don't need to click buttons to do that again; you can just use this piece of script.

Through doing this, you can build up the full configuration in a text file, name the file with a .ps1 extension, and when you are ready to test it you can factory reset the PE through the web interface and run it again. In a few hours you can quickly create the full build from start to finish using PowerTool.
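
To give an idea of the shape this takes, here is a hedged sketch of a skeleton build script (the cmdlet calls, names, and VLAN values below are purely illustrative; in practice you paste in the exact commands ConvertTo-UcsCmdlet generated for you):

param(
    [string]$UcsIp = "192.168.1.10",
    [string]$VlanName = "Production",
    [int]$VlanId = 100
)
# connect to the Fabric Interconnects (or the Platform Emulator)
Connect-Ucs $UcsIp -Credential (Get-Credential)
# example configuration step, captured from ConvertTo-UcsCmdlet and then parameterised
Get-UcsLanCloud | Add-UcsVlan -Name $VlanName -Id $VlanId
# ...the rest of your captured build steps go here...
Disconnect-Ucs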

There are some things which ConvertTo-UcsCmdlet will not convert, creation of Service Profiles being one, but if you look around online there are plenty of good people sharing scripts which can be modified for your purposes.

Hope this helps people; it certainly changed things for me, taking a 3 hour build down to around 30 seconds. Once you have this up and running, you can take what you have, replace the specific elements, say the sub-organisation name, with variables, and the script can then be reused again and again. This is a quick way to open your eyes to the power of scripting.

PowerCLI – where to start

I began using PowerShell around 18 months ago while working for a small UK based Managed Service provider. Prior to this, my coding/scripting experience consisted of an A-Level in Computing, which introduced me to Visual Basic 6.0 and databases, a void of around 7 years, and then some sysadmin VBScript and batch file type goodness for a few years.

Until I started at said company, I had only been exposed to systems running Windows Server 2003, and with a look to security über alles, no access to PowerShell, or any other exciting languages was available, so VBScript became our automation tool of choice.

I have posted before about good resources to use to learn PowerShell, this is more a rundown of how I learned, and the joy and knowledge it gave me to do this.

My first taste of PowerShell was working with Exchange 2010 servers, doing stuff like this to report on mailbox items over a certain age.

Get-Mailbox "username" | New-MailboxSearch -Name search123 -SearchQuery "Received:<01/01/2014" -estimateonly

Were it not for the necessity to use PowerShell to do anything remotely useful in Exchange 2010, I would have been happy to continue using batch files and VBScript to automate things; I was confident in using these tools, and could achieve time savings, albeit fairly slowly. But PowerShell I must, so PowerShell I did.

Around this time, I became more keen on working with infrastructure than applications, and got transferred to a role solely looking after our fairly sizeable Cisco UCS and VMware estate. I had plenty of years of experience of VMware, and none of Cisco UCS, but was excited by the new challenge.

I was quickly steered by the senior engineers, towards Cisco PowerTool, and VMware’s PowerCLI, to help to automate some of the administrative, and reporting type tasks I would soon be inundated with, so I picked them up and learned as I went.

I started small, and Google was my friend. Scripting small tasks to save incrementally larger amounts of time. Stuff like this:


$podcsv = Import-Csv .\UCS_Pods.csv
$credcsv = Import-Csv .\UCS_Credentials.csv
$ucsuser = $credcsv.username
$ucspasswd = $credcsv.password
$secpasswd = ConvertTo-SecureString $ucspasswd -AsPlainText -Force
$ucscreds = New-Object System.Management.Automation.PSCredential ($ucsuser, $secpasswd)
$datetime = Get-Date -UFormat "%C%y%m%d-%H%M"
foreach ($pod in $podcsv)
{
    $podname = $pod.name
    $podip = $pod.ip
    Connect-Ucs -Credential $ucscreds $podip
    Get-UcsFault | Select-Object Ucs, Id, LastTransition, Descr, Ack, Severity | Export-Csv -Path .\$datetime-$podname-errors.csv
    Disconnect-Ucs
}

To dump out the alerts we had in multiple UCS systems, to CSV files. This would save 20-30 minutes a day, nothing major, but clicking buttons is boring, and I can always find better things to do with my time.

On the VMware side of things, I started really small, with stuff like this which would tell you the version of VMTools on all of your virtual machines:


# Ask for connection details, then connect using these
$vcenter = Read-Host "Enter vCenter Name or IP"
$username = Read-Host "Enter your username"
$password = Read-Host "Enter your password"
# Set up our constants for logging
$datetime = Get-Date -UFormat "%C%y%m%d-%H%M"
$outfilepsp = $(".\" + $datetime + "_" + $vcenter + "_PSPList_Log.txt")
$outfilerdm = $(".\" + $datetime + "_" + $vcenter + "_RDMList_Log.txt")
$OutputFile = ".\" + $datetime + "_" + $vcenter + "_VMTools_Report.txt"
# Connect to vCenter
$Connection = Connect-VIServer $vcenter #-User $username -Password $password
foreach ($Cluster in Get-Cluster) {
    foreach ($esxhost in ($Cluster | Get-VMHost | Where { ($_.ConnectionState -eq "Connected") -or ($_.ConnectionState -eq "Maintenance") } | Sort Name)) {
        Get-Cluster | Get-VMHost $esxhost | Get-VM | % { Get-View $_.Id } | Select Name, @{ Name = "ToolsVersion"; Expression = { $_.Config.Tools.ToolsVersion } }, @{ Name = "ToolStatus"; Expression = { $_.Guest.ToolsVersionStatus } }, @{ Name = "Host"; Expression = { $esxhost } }, @{ Name = "Cluster"; Expression = { $Cluster.Name } } | Format-Table | Out-File -FilePath $OutputFile -Append
    }
}
Disconnect-VIServer * -Confirm:$false

This is a real time saver, and great for getting quick figures out of your environment. As I wrote these scripts, I learned more and more about what I could do, picking up ways of doing different things here and there: for/next loops, do/while loops, arrays. As I picked up these concepts again, concepts I had learned years earlier and not used to great effect, my scripts became more complex, and delivered more value in the output they gave and the time they saved. Scripts like the one below, which reports on any datastores over 90% utilisation, soon became a part of our daily reporting regime:


$datetime = Get-Date -UFormat "%C%y%m%d-%H%M"
$vcentercsv = Import-Csv .\VCenter_Servers.csv
# Configure connection settings using Read Only account
$credcsv = Import-Csv .\VMware_Credentials.csv
$vmuser = $credcsv.username
$vmpasswd = $credcsv.password
$secpasswd = ConvertTo-SecureString $vmpasswd -AsPlainText -Force
$vmcreds = New-Object System.Management.Automation.PSCredential ($vmuser, $secpasswd)
$report = @()
foreach ($vcenter in $vcentercsv)
{
    $vcentername = $vcenter.name
    Connect-VIServer $vcenter.ip -Credential $vmcreds
    foreach ($datastore in (Get-Datastore | Where { $_.Name -notlike "*local*" -and [math]::Round(100 - ($_.FreeSpaceGB / $_.CapacityGB) * 100) -gt 90 }))
    {
        $row = '' | Select Name, FreeSpaceGB, CapacityGB, vCenter, PercentUsed
        $row.Name = $datastore.Name
        $row.FreeSpaceGB = $datastore.FreeSpaceGB
        $row.CapacityGB = $datastore.CapacityGB
        $row.vCenter = $vcenter.name
        $row.PercentUsed = [math]::Round(100 - ($datastore.FreeSpaceGB / $datastore.CapacityGB) * 100)
        $report += $row
    }
    Disconnect-VIServer * -Confirm:$false
}
$report | Sort PercentUsed | Export-Csv -Path .\$datetime-datastore-overuse.csv

My knowledge of how to do things, and my confidence in what I was doing, grew rapidly, and the old thing of 'the more I know, the more I realise I don't know' came to pass. I am still learning at a rapid rate how better to put these things together: new cmdlets, new modules, new ways to do things. It's a fun journey though, one which leaves you with extremely useful and admired skills, and one which will continue to develop you as an IT technician throughout your career.

I am now doing the biggest PowerShell datacenter automation project I have ever done; it is around 5000 lines now, and growing every day. I feel like anything can be achieved with PowerShell and the various modules released by vendors, and finding ways of solving the constant puzzles which hit me in the face is exciting and rewarding in equal measure.

Everywhere you look in IT now, it is automation and DevOps. It has been said many times that IT engineers who do not learn some form of automation are going to be automated out of a job, and to some extent I agree with this. The advent of software defined storage, networking, everything, shows that automation, and policy driven configuration, is really changing the world of IT infrastructure. If you’re in IT then you probably got in because you love technology, well get out there and learn new skills, whatever those may be, you will enjoy it more than you think.