Slipstreaming VMXNET3 drivers into Windows builds

A question came up on Twitter this week about pre-loading drivers into Windows builds so that future devices are ready to use. I have been doing this for a while, but I guess I take it for granted. When I responded, I was prompted to blog it, which totally makes sense, so here goes.

In my case we are slipstreaming the VMXNET3 driver, which you need if you want to provision Windows Server VMs on vCenter with the VMXNET3 network adapter; without it, your Windows VMs are limited to the 1Gbps available to the E1000 adapter type.

Here we are doing it using the autoUnattend.xml file, which I keep on a virtual floppy disk to automate Windows builds, but the same approach applies however you build your servers and want to install drivers.

So the following command is in my autoUnattend.xml file, near the end, in the section where we define commands to run at first logon:

[Screenshot: the pnputil command in autoUnattend.xml]
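
For reference, here is a minimal sketch of what that entry might look like in the FirstLogonCommands section of autoUnattend.xml. The folder and .inf file name (a:\VMXNET3\vmxnet3ndis6.inf) are assumptions for illustration; point it at wherever your driver files actually sit:

<FirstLogonCommands>
  <SynchronousCommand wcm:action="add">
    <Order>1</Order>
    <Description>Install VMXNET3 driver</Description>
    <!-- Path and .inf name are illustrative; match them to your driver folder -->
    <CommandLine>pnputil.exe -i -a a:\VMXNET3\vmxnet3ndis6.inf</CommandLine>
  </SynchronousCommand>
</FirstLogonCommands>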

This uses pnputil.exe, a built-in Windows utility for installing drivers, and points it at the .inf file sitting on the virtual floppy drive, in a folder containing the files below:

[Screenshot: the VMXNET3 driver files in the folder on the virtual floppy]

As a heads up, I got these files from a VM which already had VMware Tools installed, from the VMXNET3-specific version folder under ‘C:\Windows\System32\DriverStore\FileRepository’.

Hopefully one day this driver will ship with Windows, though I'm not sure whether that is something Microsoft want to do; until then, this is how I do it. I don't think I figured this out entirely on my own, but I have been doing it this way for a while, so apologies for not referencing whichever blog I poached it from. If you are using SCCM or similar, you could use the same method with the command line tool above (and with any driver, not just VMXNET3).

In the zone…basic zoning on Cisco Nexus switches

In this post I look to go over some basic FCoE zoning concepts for Cisco Nexus switches. Although FCoE has not really captured the imagination of the industry, it is used in a large number of Cisco infrastructure deployments, particularly around Nexus and UCS technologies. My experience is mostly based on FlexPods, where we have this kind of design (showing FCoE connectivity only, to keep things simple):

[Diagram: FlexPod FCoE connectivity, UCS fabric A/B to NexusA/NexusB to NetApp storage]

Zoning in a FlexPod is simple enough: we may have a largish number of hosts, but we are only zoning on two switches, and only have 4 or 8 FCoE targets, depending on our configuration. In fact, the zoning configuration can be fairly simply automated using PowerShell, by tapping into the NetApp, Cisco UCS, and NX-OS APIs. The purpose of this post, though, is to describe the configuration steps required to complete the zoning.

The purpose of zoning is to restrict access to a LUN (Logical Unit Number), essentially a Fibre Channel block device on our storage, to one or more hosts. This is useful in the case of boot disks, where we only ever want a single host accessing that device, and in the case of shared data devices, like cluster shared disks in Microsoft clustering or VMFS datastores in the VMware world, where we only want a subset of hosts to be able to access the device.

I found configuring zoning on a Cisco switch took a bit of getting my head around, so hopefully the explanation below will help to make this simpler for someone else.

From the Cisco UCS (or whatever server infrastructure you are running), you will need to gather a list of the WWPNs for the initiators wanting to connect to the storage. These are in the format 50:02:77:a4:10:0c:4e:21, an 8-byte number written as 16 hexadecimal digits. Likewise, you will need to gather the WWPNs from your storage (in the case of FlexPod, your NetApp storage system).
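
As an aside, once the hosts have logged into the fabric you can also read the initiator WWPNs straight from the switch rather than from UCS; a quick check, assuming a VSAN ID of 101 (the ID we will use for fabric A later on):

show flogi database vsan 101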

Once we have these, we are ready to do our zoning on the switch. There are three main elements of the zoning configuration we need to understand:

  1. Aliases – these map a WWPN to a friendly name, and sit in the device alias database on the switch. You can get away without aliases and just use the raw WWPNs later, but that will make things far more difficult should something go wrong.
  2. Zones – these logically group initiators and targets together, meaning that only the device aliases listed in a zone are able to talk to one another. This provides security and ease of management; a device can exist in more than one zone.
  3. Zonesets – these group zones together, allowing the administrator to bring all of the zones online or offline together. Only one zoneset can be active at a time in a given VSAN.

On top of this, there is one more thing to understand when creating zoning on our Nexus switch, and that is the concept of a VSAN. A VSAN, or Virtual Storage Area Network, is the Fibre Channel equivalent of a VLAN. It is a logical collection of ports which together form a single discrete fabric.
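
For completeness, creating a VSAN and putting an interface into it looks something like the below. This is just a sketch; VSAN 101 and the virtual Fibre Channel interface vfc101 are example values, and in a FlexPod these will normally already be defined as part of the base switch configuration:

vsan database
! VSAN ID, name, and interface below are examples; substitute your own
vsan 101 name Fabric_A
vsan 101 interface vfc101
exit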

So let’s create a fictional scenario, and build the zoning configuration for it. We have a FlexPod with two Nexus 5k switches and separate fabrics, as shown in the diagram above, meaning that the 0a ports on our servers only go to the Nexus on fabric A, and the 0b ports only go to the Nexus on fabric B. Both e0c ports on our NetApp storage go to NexusA, and both e0d ports go to NexusB:

NAFAS01 – e0c, e0d

NAFAS02 – e0c, e0d

And three Cisco UCS service profiles, each with two vHBAs, wanting to access storage on these targets. These are created as follows:

UCSServ01 – 0a, 0b

UCSServ02 – 0a, 0b

UCSServ03 – 0a, 0b

So on NexusA, we need the following aliases in our database:

Device Port WWPN Alias Name
NAFAS01 e0c 35:20:01:0c:11:22:33:44 NAFAS01_e0c
NAFAS02 e0c 35:20:02:0c:11:22:33:44 NAFAS02_e0c
UCSServ01 0a 50:02:77:a4:10:0c:0a:01 UCSServ01_0a
UCSServ02 0a 50:02:77:a4:10:0c:0a:02 UCSServ02_0a
UCSServ03 0a 50:02:77:a4:10:0c:0a:03 UCSServ03_0a

And on NexusB, we need the following:

Device Port WWPN Alias Name
NAFAS01 e0d 35:20:01:0d:11:22:33:44 NAFAS01_e0d
NAFAS02 e0d 35:20:02:0d:11:22:33:44 NAFAS02_e0d
UCSServ01 0b 50:02:77:a4:10:0c:0b:01 UCSServ01_0b
UCSServ02 0b 50:02:77:a4:10:0c:0b:02 UCSServ02_0b
UCSServ03 0b 50:02:77:a4:10:0c:0b:03 UCSServ03_0b

And the zones we need on each switch are as follows, firstly for NexusA:

Zone Name   Members
UCSServ01_a NAFAS01_e0c, NAFAS02_e0c, UCSServ01_0a
UCSServ02_a NAFAS01_e0c, NAFAS02_e0c, UCSServ02_0a
UCSServ03_a NAFAS01_e0c, NAFAS02_e0c, UCSServ03_0a

And for NexusB:

Zone Name   Members
UCSServ01_b NAFAS01_e0d, NAFAS02_e0d, UCSServ01_0b
UCSServ02_b NAFAS01_e0d, NAFAS02_e0d, UCSServ02_0b
UCSServ03_b NAFAS01_e0d, NAFAS02_e0d, UCSServ03_0b

This gives us a zone for each server to boot from, allowing each vHBA on the server to boot from either of the NetApp interfaces it can see on its fabric. The boot order itself is controlled from within UCS; by creating zoning that lets the server boot over either fabric, we create resilience. All of this is just to demonstrate how we construct the zoning configuration, so things will no doubt be different in your environment.

So now we know what should be in our populated alias database and our zone configuration, we just need to create our zonesets. We will have one zoneset per fabric, so one for NexusA:

Zoneset Name Members
UCSZonesetA  UCSServ01_a, UCSServ02_a, UCSServ03_a

And the zoneset for NexusB:

Zoneset Name Members
UCSZonesetB  UCSServ01_b, UCSServ02_b, UCSServ03_b

Now we are ready to put this into some NX-OS CLI and enter it on our switches. The general commands for creating new aliases are:
device-alias database
device-alias name <alias_name> pwwn <device_wwpn>
exit
device-alias commit

So for our NexusA, we do the following:
device-alias database
device-alias name NAFAS01_e0c pwwn 35:20:01:0c:11:22:33:44
device-alias name NAFAS02_e0c pwwn 35:20:02:0c:11:22:33:44
device-alias name UCSServ01_0a pwwn 50:02:77:a4:10:0c:0a:01
device-alias name UCSServ02_0a pwwn 50:02:77:a4:10:0c:0a:02
device-alias name UCSServ03_0a pwwn 50:02:77:a4:10:0c:0a:03
exit
device-alias commit

And for NexusB, we do:
device-alias database
device-alias name NAFAS01_e0d pwwn 35:20:01:0d:11:22:33:44
device-alias name NAFAS02_e0d pwwn 35:20:02:0d:11:22:33:44
device-alias name UCSServ01_0b pwwn 50:02:77:a4:10:0c:0b:01
device-alias name UCSServ02_0b pwwn 50:02:77:a4:10:0c:0b:02
device-alias name UCSServ03_0b pwwn 50:02:77:a4:10:0c:0b:03
exit
device-alias commit
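
Before moving on, it is worth checking the aliases committed cleanly; on either switch you can list the device alias database with:

show device-alias database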

So that’s our alias database taken care of; now we can create our zones. The command set for creating a zone is:
zone name <zone_name> vsan <vsan_id>
member device-alias <device_1_alias>
member device-alias <device_2_alias>
member device-alias <device_3_alias>
exit

I will use VSAN ID 101 for fabric A, and 102 for fabric B. So here we create our zones for NexusA:
zone name UCSServ01_a vsan 101
member device-alias NAFAS01_e0c
member device-alias NAFAS02_e0c
member device-alias UCSServ01_0a
exit
zone name UCSServ02_a vsan 101
member device-alias NAFAS01_e0c
member device-alias NAFAS02_e0c
member device-alias UCSServ02_0a
exit
zone name UCSServ03_a vsan 101
member device-alias NAFAS01_e0c
member device-alias NAFAS02_e0c
member device-alias UCSServ03_0a
exit

And for NexusB:
zone name UCSServ01_b vsan 102
member device-alias NAFAS01_e0d
member device-alias NAFAS02_e0d
member device-alias UCSServ01_0b
exit
zone name UCSServ02_b vsan 102
member device-alias NAFAS01_e0d
member device-alias NAFAS02_e0d
member device-alias UCSServ02_0b
exit
zone name UCSServ03_b vsan 102
member device-alias NAFAS01_e0d
member device-alias NAFAS02_e0d
member device-alias UCSServ03_0b
exit
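
At this point you can sanity-check the zones and their members before activating anything; for example on NexusA, using our VSAN 101:

show zone vsan 101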

That’s all of our zones created; now we just need to create and activate our zonesets to complete the zoning configuration. The commands to create and activate a zoneset are:
zoneset name <zoneset_name> vsan <vsan_id>
member <zone_1_name>
member <zone_2_name>
exit
zoneset activate name <zoneset_name> vsan <vsan_id>
exit

So now we have our NexusA configuration:
zoneset name UCSZonesetA vsan 101
member UCSServ01_a
member UCSServ02_a
member UCSServ03_a
exit
zoneset activate name UCSZonesetA vsan 101
exit

And our NexusB configuration:
zoneset name UCSZonesetB vsan 102
member UCSServ01_b
member UCSServ02_b
member UCSServ03_b
exit
zoneset activate name UCSZonesetB vsan 102
exit
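
Finally, it is worth confirming the activation took effect; on each switch you can display the active zoneset, for example on NexusA:

show zoneset active vsan 101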

So that’s how we compose our zoning configuration and apply it to our Nexus switches. Hopefully this will be a useful reference on how to do this.