PowerCLI Core on Docker with macOS

Back in November, shortly after Microsoft’s open-sourcing of PowerShell and the subsequent cross-platform PowerShell Core release, VMware released their cross-platform version of PowerCLI: PowerCLI Core. This was made available on GitHub for general consumption, and can be installed on top of PowerShell Core on a macOS/OS X or Linux machine.

I have loads of junk installed on my MacBook, including the very first public release of PowerShell Core, but keeping it all up to date and knowing what I have installed can be a pain. So my preference for running PowerShell Core or PowerCLI Core at the moment is through a Docker container on my laptop; this helps keep the clutter down and makes upgrading easy.

In this post I’m going to show how to use the Docker container with an external editor, running all your scripts from within the container and removing the need to install PowerShell Core on your Mac.

Pre-requisites

We will need to install a couple of bits of software before we begin:

  • Docker for Mac (Docker CE), to run the container
  • A text editor of your choice, to edit the scripts

Beyond that we will get everything below.

Getting the Docker image

VMware have made the PowerCLI Core Docker image available on Docker Hub (here). This is the easiest place to pull container images to your desktop, and the general go-to place for public container images as of today. Once Docker CE is installed, the image can be downloaded with the command below:

Icarus:~ root$ docker pull vmware/powerclicore:latest
latest: Pulling from vmware/powerclicore
93b3dcee11d6: Already exists
d6641ceee635: Pull complete
62bbcce52faa: Pull complete
e86aa7a78685: Pull complete
db20fbdf24c0: Pull complete
37379feb8f29: Pull complete
8abb449d1e29: Pull complete
a9cd6d9452e7: Pull complete
50886ff01a73: Pull complete
74af7eaa49c1: Pull complete
878c611eaf2c: Pull complete
39b1b7978191: Pull complete
98e632013bea: Pull complete
4362432cb5ea: Pull complete
19f5f892ae79: Pull complete
29b0b093b159: Pull complete
913ad6409b89: Pull complete
ad5db0a55033: Pull complete
Digest: sha256:d33ac26c0c704a7aa48f5c7c66cb76ec3959beda2962ccd6a41a96351055b5d0
Status: Downloaded newer image for vmware/powerclicore:latest
Icarus:~ root$

This may take a couple of minutes, but the image should now be present on the local machine, and ready to fire up.
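If you want to confirm the image is present before going any further, you can list it by repository name:

Icarus:~ root$ docker images vmware/powerclicore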

Getting the path for our scripts folder

Before we launch our container we need the path to our scripts; this could be a folder anywhere on your computer. In my case it is:

/Users/tim/Dropbox/Coding Projects/PowerShell/VMware

Launching our container

The idea here is to mount a folder which is accessible from both inside and outside our container, so we can edit the scripts with our full-fat editor and run them from inside the container.

To launch the container we use the following command; I explain the switches used below:

docker run --name PowerCLI --detach -it --rm --volume '/Users/tim/Dropbox/Coding Projects/PowerShell/VMware':/usr/scripts vmware/powerclicore:latest
  • --name – this sets the container name, which will make it easier when we want to attach to the container
  • --detach – this starts the container without attaching us to it immediately, so if there is anything else we need to do before connecting, we can do it
  • -it – this allocates an interactive TTY, giving us the ability to interact with the console of the container
  • --rm – this deletes the container when we exit it, keeping the processes on our machine tidy
  • --volume … – this maps our scripts folder to /usr/scripts, so we can consume our scripts once in the container
  • vmware/powerclicore:latest – the name of the image to launch the container from

Now when we run this we will see the following output:

Icarus:~ root$ docker run --name PowerCLI --detach -it --rm --volume '/Users/tim/Dropbox/Coding Projects/PowerShell/VMware':/usr/scripts vmware/powerclicore:latest
c48ff51e3f824177da8e3b0fd0210e5864b01fea94ae5f5871b3654b4f5bcd35
Icarus:~ root$

This is the ID of our container; we won’t need it, as we will attach using the friendly name anyway. When you are ready to attach, use the following command:

Icarus:~ root$ docker attach PowerCLI

You may need to press return a couple of times, but you should now have a shell that looks like this:

PS /powershell>

Now we are in the container, and should be able to access our scripts by changing directory to /usr/scripts.
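As a rough sketch of a session from this point, assuming a vCenter at vcsa.lab.local and a script called Get-LabInventory.ps1 (both invented for illustration), it might look like this:

PS /powershell> cd /usr/scripts
PS /usr/scripts> Connect-VIServer -Server vcsa.lab.local -User administrator@vsphere.local
PS /usr/scripts> ./Get-LabInventory.ps1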

If we run ‘Get-Module -ListAvailable’ we can see the modules installed in this Docker image:

PS /powershell> Get-Module -ListAvailable                                                                               

    Directory: /root/.local/share/powershell/Modules

ModuleType Version    Name                                ExportedCommands
---------- -------    ----                                ----------------
Binary     1.21       PowerCLI.Vds
Binary     1.21       PowerCLI.ViCore                     HookGetViewAutoCompleter
Script     2.1.0      PowerNSX                            {Add-XmlElement, Format-Xml, Invoke-NsxRestMethod, Invoke-...
Script     2.0.0      PowervRA                            {Add-vRAPrincipalToTenantRole, Add-vRAReservationNetwork, ...

    Directory: /opt/microsoft/powershell/6.0.0-alpha.14/Modules

ModuleType Version    Name                                ExportedCommands
---------- -------    ----                                ----------------
Manifest   1.0.1.0    Microsoft.PowerShell.Archive        {Compress-Archive, Expand-Archive}
Manifest   3.0.0.0    Microsoft.PowerShell.Host           {Start-Transcript, Stop-Transcript}
Manifest   3.1.0.0    Microsoft.PowerShell.Management     {Add-Content, Clear-Content, Clear-ItemProperty, Join-Path...
Manifest   3.0.0.0    Microsoft.PowerShell.Security       {Get-Credential, Get-ExecutionPolicy, Set-ExecutionPolicy,...
Manifest   3.1.0.0    Microsoft.PowerShell.Utility        {Format-List, Format-Custom, Format-Table, Format-Wide...}
Script     1.1.2.0    PackageManagement                   {Find-Package, Get-Package, Get-PackageProvider, Get-Packa...
Script     3.3.9      Pester                              {Describe, Context, It, Should...}
Script     1.1.2.0    PowerShellGet                       {Install-Module, Find-Module, Save-Module, Update-Module...}
Script     0.0        PSDesiredStateConfiguration         {IsHiddenResource, StrongConnect, Write-MetaConfigFile, Ge...
Script     1.2        PSReadLine                          {Get-PSReadlineKeyHandler, Set-PSReadlineKeyHandler, Remov...

PS /powershell>

So we have the PowerCLI Core module, the Distributed vSwitch module, as well as PowervRA and PowerNSX. We should be able to run our scripts from the /usr/scripts folder, or just run stuff from scratch.

The great thing is we can now edit our scripts in the folder mapped to /usr/scripts using our editor, and the changes are available live inside the container for testing. We can even write output to this folder from within the container.
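For example, once connected to a vCenter with Connect-VIServer, a one-liner like the below (the file name is just an example) writes a CSV into the mapped folder, where it appears straight away on the Mac side:

PS /usr/scripts> Get-VM | Select-Object Name, PowerState | Export-Csv ./vm_report.csv -NoTypeInformation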

If you want to detach from the container without killing it, use the Ctrl+P, Ctrl+Q key sequence; you can then reattach with ‘docker attach PowerCLI’. When you are done with the container, type ‘exit’ and it will quit and be removed.
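After detaching, you can check the container is still running by filtering ‘docker ps’ on the name we set earlier:

Icarus:~ root$ docker ps --filter name=PowerCLI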

Conclusion

Though basic, this can really help with workflow when writing and testing scripts on macOS, while letting you keep up to date with the latest images easily and not fill your Mac with more junk.


Dockerising a GoLang Application

I was recently messing about with GoLang to get my head around it, and came up with a small program that outputs a timestamp and a random name, based on the Docker container random name generator (here). The plan was to use this to populate a database with random junk for when I’m testing software against database servers.

I got the code working anyway; you can see it on my GitHub site here. In this article I’m going to focus on the difference between building a container image on top of the official golang image and building one from scratch around the compiled binary, and the benefits of the scratch approach. To my mind this is a very powerful way of delivering Go applications: they end up both tiny and self-contained, which sets Go apart from other popular languages, which drag their dependency baggage around with them.

Below we will go through building a container from a Go file, first using the GoLang image from Docker Hub and then from scratch, comparing the sizes of the resulting images.

The steps below were all carried out on macOS 10.12, Go 1.8, and Docker 17.03.

Image based on golang official Docker image

For this stage we should have our Go file in the current folder, and we need to create the following Dockerfile:

FROM golang:latest
RUN mkdir /app
ADD <path_to_go_file>.go /app/
WORKDIR /app
RUN go build -o <output_file> .
CMD ["/app/<output_file>"]

We can then build this into an image:

docker build -t <your_name>/<image_name> -f Dockerfile .

We can then use ‘docker images’ to show the image we created:

REPOSITORY               TAG                 IMAGE ID            CREATED             SIZE
timhynes/name_gen_full   latest              ffc0ef4bac73        3 seconds ago       698 MB

Image built from scratch with Go binary

To build the image from scratch, we first need to build our Go application into a binary. We do this with the below command:

CGO_ENABLED=0 GOOS=linux go build -a -installsuffix cgo -o <output_file> <path_to_go_file> 

The ‘GOOS’ flag defines the target OS for the binary; in our case we choose Linux so that it will run in our Docker container. The ‘CGO_ENABLED=0’ flag prevents linking to external C libraries (more info here), meaning the binary is fully self-contained.
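For example, assuming the Go source file is main.go and we want a Linux binary called name_gen (names chosen just for illustration), the command would be:

CGO_ENABLED=0 GOOS=linux go build -a -installsuffix cgo -o name_gen main.go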

Once this is run, the binary will be created in the location specified. This could then be ported around Linux systems and run as a compiled application, but we are going to build this into a Docker image instead.

To build the Docker image, as shown earlier, we need a Dockerfile; the one I used for this stage is shown below:

FROM scratch
ADD <output_file> /
CMD ["<output_file>"]

This should be saved to the same folder as the binary as ‘Dockerfile’. We can now run the ‘docker build’ command to create our image:

docker build -t <your_name>/<image_name> -f Dockerfile .

At this point we should have an image in our local repository; for my application it comes in at under 2 MB:

REPOSITORY            TAG                 IMAGE ID            CREATED             SIZE
timhynes/name_gen     latest              436a832943c8        12 seconds ago      1.77 MB

Conclusion

This shows that even though the end goal in both cases is to containerise our application, compiling the GoLang binary ourselves and using it as the sole contents of the image saves a huge amount of space. In the example above, the resulting image was over 300 times smaller when built around the binary alone.

I guess the takeaway is that not all containers are made equal, and thinking about the way we package our Docker application can make a big difference to how easily we can deliver and run it.

Installing vSphere Integrated Containers

This document details installing and testing vSphere Integrated Containers (VIC), which recently went to version 1.0. It has been tested against vSphere 6.5 only.
Download VIC from my.vmware.com.
Release notes available here.
From a Linux terminal:
root@LOBSANG:~# tar -xvf vic_0.8.0-7315-c8ac999.tar.gz
root@LOBSANG:~# cd vic
root@LOBSANG:~/vic# tree .
.
├── appliance.iso
├── bootstrap.iso
├── LICENSE
├── README
├── ui
│   ├── vCenterForWindows
│   │   ├── configs
│   │   ├── install.bat
│   │   ├── uninstall.bat
│   │   ├── upgrade.bat
│   │   └── utils
│   │       └── xml.exe
│   ├── VCSA
│   │   ├── configs
│   │   ├── install.sh
│   │   ├── uninstall.sh
│   │   └── upgrade.sh
│   └── vsphere-client-serenity
│       ├── com.vmware.vicui.Vicui-0.8.0
│       │   ├── plugin-package.xml
│       │   ├── plugins
│       │   │   ├── vic-ui-service.jar
│       │   │   ├── vic-ui-war.war
│       │   │   └── vim25.jar
│       │   └── vc_extension_flags
│       └── com.vmware.vicui.Vicui-0.8.0.zip
├── vic-machine-darwin
├── vic-machine-linux
├── vic-machine-windows.exe
├── vic-ui-darwin
├── vic-ui-linux
└── vic-ui-windows.exe

7 directories, 25 files
root@LOBSANG:~/vic#
Now we have the files ready to go, we can run the install command as detailed in the GitHub repository for VIC (here). We are going to use the Linux binary here:
root@VBRPHOTON01 [ ~/vic ]# ./vic-machine-linux
NAME:
  vic-machine-linux - Create and manage Virtual Container Hosts

USAGE:
  vic-machine-linux [global options] command [command options] [arguments...]

VERSION:
  v0.8.0-7315-c8ac999

COMMANDS:
  create   Deploy VCH
  delete   Delete VCH and associated resources
  ls       List VCHs
  inspect  Inspect VCH
  version  Show VIC version information
  debug    Debug VCH

GLOBAL OPTIONS:
  --help, -h     show help
  --version, -v  print the version

root@VBRPHOTON01 [ ~/vic ]#
On all hosts in the cluster you are using, create a bridge network (it has to be on a vDS); mine is called vDS_VCH_Bridge. You will also need to disable the ESXi firewall, as shown below.
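For the firewall, the quickest approach in a lab (do not do this in production) is to switch it off entirely from an SSH session on each host, using the standard esxcli command (hostname in the prompt is just a placeholder):

[root@esxhost:~] esxcli network firewall set --enabled false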
To install, we use the following command:
root@VBRPHOTON01 [ ~/vic ]# ./vic-machine-linux create --target 192.168.1.222 --image-store VBR_MGTESX01_Local_SSD_01 --name VBR-VCH-01 --user administrator@vsphere.local --password VMware1! --compute-resource VBR_Mgmt_Cluster --bridge-network vDS_VCH_Bridge --public-network vSS_Mgmt_Network --client-network vSS_Mgmt_Network --management-network vSS_Mgmt_Network --force --no-tlsverify
This is in my lab; I’m deploying to a vCenter with a single host, and don’t care about security. The output should look something like this:
INFO[2017-01-06T21:52:14Z] ### Installing VCH ####
WARN[2017-01-06T21:52:14Z] Using administrative user for VCH operation - use --ops-user to improve security (see -x for advanced help)
INFO[2017-01-06T21:52:14Z] Loaded server certificate VBR-VCH-01/server-cert.pem
WARN[2017-01-06T21:52:14Z] Configuring without TLS verify - certificate-based authentication disabled
INFO[2017-01-06T21:52:15Z] Validating supplied configuration
INFO[2017-01-06T21:52:15Z] vDS configuration OK on "vDS_VCH_Bridge"
INFO[2017-01-06T21:52:15Z] Firewall status: DISABLED on "/VBR_Datacenter/host/VBR_Mgmt_Cluster/vbrmgtesx01.virtualbrakeman.local"
WARN[2017-01-06T21:52:15Z] Firewall configuration will be incorrect if firewall is reenabled on hosts:
WARN[2017-01-06T21:52:15Z] "/VBR_Datacenter/host/VBR_Mgmt_Cluster/vbrmgtesx01.virtualbrakeman.local"
INFO[2017-01-06T21:52:15Z] Firewall must permit dst 2377/tcp outbound to VCH management interface if firewall is reenabled
INFO[2017-01-06T21:52:15Z] License check OK on hosts:
INFO[2017-01-06T21:52:15Z] "/VBR_Datacenter/host/VBR_Mgmt_Cluster/vbrmgtesx01.virtualbrakeman.local"
INFO[2017-01-06T21:52:15Z] DRS check OK on:
INFO[2017-01-06T21:52:15Z] "/VBR_Datacenter/host/VBR_Mgmt_Cluster/Resources"
INFO[2017-01-06T21:52:15Z]
INFO[2017-01-06T21:52:15Z] Creating virtual app "VBR-VCH-01"
INFO[2017-01-06T21:52:15Z] Creating appliance on target
INFO[2017-01-06T21:52:15Z] Network role "client" is sharing NIC with "public"
INFO[2017-01-06T21:52:15Z] Network role "management" is sharing NIC with "public"
INFO[2017-01-06T21:52:16Z] Uploading images for container
INFO[2017-01-06T21:52:16Z] "bootstrap.iso"
INFO[2017-01-06T21:52:16Z] "appliance.iso"
INFO[2017-01-06T21:52:22Z] Waiting for IP information
INFO[2017-01-06T21:52:35Z] Waiting for major appliance components to launch
INFO[2017-01-06T21:52:35Z] Checking VCH connectivity with vSphere target
INFO[2017-01-06T21:52:36Z] vSphere API Test: https://192.168.1.222 vSphere API target responds as expected
INFO[2017-01-06T21:52:38Z] Initialization of appliance successful
INFO[2017-01-06T21:52:38Z]
INFO[2017-01-06T21:52:38Z] VCH Admin Portal:
INFO[2017-01-06T21:52:38Z] https://192.168.1.46:2378
INFO[2017-01-06T21:52:38Z]
INFO[2017-01-06T21:52:38Z] Published ports can be reached at:
INFO[2017-01-06T21:52:38Z] 192.168.1.46
INFO[2017-01-06T21:52:38Z]
INFO[2017-01-06T21:52:38Z] Docker environment variables:
INFO[2017-01-06T21:52:38Z] DOCKER_HOST=192.168.1.46:2376
INFO[2017-01-06T21:52:38Z]
INFO[2017-01-06T21:52:38Z] Environment saved in VBR-VCH-01/VBR-VCH-01.env
INFO[2017-01-06T21:52:38Z]
INFO[2017-01-06T21:52:38Z] Connect to docker:
INFO[2017-01-06T21:52:38Z] docker -H 192.168.1.46:2376 --tls info
INFO[2017-01-06T21:52:38Z] Installer completed successfully
Now we can check the state of our remote VIC host with:
root@VBRPHOTON01 [ ~ ]# docker -H tcp://192.168.1.46:2376 --tls info
Containers: 0
 Running: 0
 Paused: 0
 Stopped: 0
Images: 0
Server Version: v0.8.0-7315-c8ac999
Storage Driver: vSphere Integrated Containers v0.8.0-7315-c8ac999 Backend Engine
VolumeStores:
vSphere Integrated Containers v0.8.0-7315-c8ac999 Backend Engine: RUNNING
 VCH mhz limit: 2419 Mhz
 VCH memory limit: 27.88 GiB
 VMware Product: VMware vCenter Server
 VMware OS: linux-x64
 VMware OS version: 6.5.0
Execution Driver: vSphere Integrated Containers v0.8.0-7315-c8ac999 Backend Engine
Plugins:
 Volume:
 Network: bridge
Operating System: linux-x64
OSType: linux-x64
Architecture: x86_64
CPUs: 2419
Total Memory: 27.88 GiB
Name: VBR-VCH-01
ID: vSphere Integrated Containers
Docker Root Dir:
Debug mode (client): false
Debug mode (server): false
Registry: registry-1.docker.io
root@VBRPHOTON01 [ ~ ]#
This shows us it’s up and running. Now we can run our first container on here:
root@VBRPHOTON01 [ ~ ]# docker -H tcp://192.168.1.46:2376 --tls run hello-world
Unable to find image 'hello-world:latest' locally
Pulling from library/hello-world
c04b14da8d14: Pull complete
a3ed95caeb02: Pull complete
Digest: sha256:548e9719abe62684ac7f01eea38cb5b0cf467cfe67c58b83fe87ba96674a4cdd
Status: Downloaded newer image for library/hello-world:latest

Hello from Docker!
This message shows that your installation appears to be working correctly.

To generate this message, Docker took the following steps:
 1. The Docker client contacted the Docker daemon.
 2. The Docker daemon pulled the "hello-world" image from the Docker Hub.
 3. The Docker daemon created a new container from that image which runs the
  executable that produces the output you are currently reading.
 4. The Docker daemon streamed that output to the Docker client, which sent it
  to your terminal.

To try something more ambitious, you can run an Ubuntu container with:
 $ docker run -it ubuntu bash

Share images, automate workflows, and more with a free Docker Hub account:
 https://hub.docker.com

For more examples and ideas, visit:
 https://docs.docker.com/engine/userguide/

root@VBRPHOTON01 [ ~ ]#
We can see this under vSphere as follows:

[Image: vch_vapp]

So our container host itself is a VM under a vApp, and all containers are spun up as VMs under the vApp. As we can see here, the container ‘VM’ is powered off. This can be seen further by running ‘docker ps’ against our remote host:
root@VBRPHOTON01 [ ~ ]# docker -H tcp://192.168.1.46:2376 --tls ps
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES
root@VBRPHOTON01 [ ~ ]# docker -H tcp://192.168.1.46:2376 --tls ps -a
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS                       PORTS               NAMES
24598201e216        hello-world         "/hello"            56 seconds ago      Exited (0) 47 seconds ago                        silly_davinci
root@VBRPHOTON01 [ ~ ]# docker -H tcp://192.168.1.46:2376 --tls rm 24598201e216
24598201e216
root@VBRPHOTON01 [ ~ ]# docker -H tcp://192.168.1.46:2376 --tls ps -a
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES
root@VBRPHOTON01 [ ~ ]#
This container is now tidied up in vSphere:
[Image: vch_vapp_2]
So now we have VIC installed and can spin up containers. In the next post we will install VMware Harbor and use that as our trusted registry.