Hi, it's Alastair again. This is the second post uncovering the automation in the AutoLab; if you don't know what the AutoLab is, head over to my AutoLab announcement post. If you missed the first article on AutoLab automation, you can read that now before returning here.
This article is all about how the Windows servers in the AutoLab are built; there were a lot of yaks to shave before the VMware goodness could be installed.
Windows unattended install
Microsoft have been very good at providing unattended install methods for Windows; with Windows Server 2008 the "answer" file is in XML format. The official tool for generating the answer file is Windows System Image Manager (WSIM), part of the Windows Automated Installation Kit (WAIK), which is a large download as it includes Windows PE. The overview and lots of additional information are here on the TechNet web site. Of course there are plenty of sample unattend.xml files on the Internet, along with lots of suggested snippets that do useful things. I like this sample, but there are a hundred more.
One of the tricks we use in the AutoLab is to place the answer file on a floppy and boot from a normal Windows install ISO; I haven't seen this method described anywhere else, but it works really well for us. The XML file is named autounattend.xml, rather than the default unattend.xml, as this lets the installer know not to seek confirmation of things like repartitioning the boot disk. The answer file covers the TCP/IP setup as well as local accounts, autologon for the console, disabling the firewall, installing optional services and removing most of the out of box experience.
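As a flavour of the format, here is a heavily trimmed, illustrative autounattend.xml fragment; the component and setting names come from the standard unattend schema, but the values (and the a:\Build.cmd command) are examples, not the AutoLab's actual file:

```xml
<?xml version="1.0" encoding="utf-8"?>
<unattend xmlns="urn:schemas-microsoft-com:unattend"
          xmlns:wcm="http://schemas.microsoft.com/WMIConfig/2002/State">
  <!-- specialize pass: runs after the image is applied -->
  <settings pass="specialize">
    <!-- disable the Windows Firewall for the lab network -->
    <component name="Networking-MPSSVC-Svc" processorArchitecture="amd64"
               publicKeyToken="31bf3856ad364e35" language="neutral" versionScope="nonSxS">
      <DomainProfile_EnableFirewall>false</DomainProfile_EnableFirewall>
    </component>
  </settings>
  <!-- oobeSystem pass: autologon so the first-boot script can run -->
  <settings pass="oobeSystem">
    <component name="Microsoft-Windows-Shell-Setup" processorArchitecture="amd64"
               publicKeyToken="31bf3856ad364e35" language="neutral" versionScope="nonSxS">
      <AutoLogon>
        <Username>Administrator</Username>
        <Enabled>true</Enabled>
      </AutoLogon>
      <FirstLogonCommands>
        <SynchronousCommand wcm:action="add">
          <CommandLine>a:\Build.cmd</CommandLine>
          <Order>1</Order>
        </SynchronousCommand>
      </FirstLogonCommands>
    </component>
  </settings>
</unattend>
```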
The answer file includes a directive to autologon once Windows is installed and a script to run on first startup of the installed copy of Windows. At the moment this script is pulled from the boot floppy; in the next release it will come direct from the build share. The initial script is kept short, transferring control to the script on the build share as early as is practical. Each Windows server has a folder on the build share under the Automate folder; this holds everything specific to that server which can be distributed with the AutoLab kit. Only small and freely redistributable software is included with the kit, and we try to minimise the size of the kit.
Domain Controller build
The DC build starts by presenting the Windows Server 2008 R2 ISO and the boot floppy to the VM; the BIOS boot order is set to CD-ROM first, then hard disk. The floppy image is not bootable, so it must come after the other two in the boot order.
The DC build includes DHCP server installation in the answer file, and the first build script is called Build.cmd. The script sets up the Phase2.cmd script to run after reboot and then launches DCPromo to create the domain. Settings for the creation of the domain are in the file dcpromo.txt; the file is first copied to the DC system disk and then used in the command dcpromo /answer:c:\dcpromo.txt. The settings include installing DNS on the DC and having the zones set up automatically. At the end of the DCPromo process the VM will reboot. After that reboot the Phase2.cmd script launches; it first connects to the build share and then starts configuring the server.
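The dcpromo answer file is a plain INI-style list of settings; a sketch of the pattern, with illustrative values rather than the kit's actual file:

```ini
[DCINSTALL]
; create a new forest with DNS installed, then reboot into Phase2.cmd
ReplicaOrNewDomain=Domain
NewDomain=Forest
NewDomainDNSName=lab.local
DomainNetBiosName=LAB
InstallDNS=Yes
SafeModeAdminPassword=ChangeMe1!
RebootOnCompletion=Yes
```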
The Tftpd32 TFTP server by Jounin doesn't have a silent setup option, so its files are copied into place and the SC command is used to configure and start the "Tftpd32_svc" service. The folder C:\TFTP-Root is then populated from TFTP-Root under the DC folder on the build share; this source folder contains files that aren't version specific. Next the script checks for each of the ESX server version installers and copies the required files into the TFTP-Root folder. For ESXi 5 the script uses a hash check to identify the version of ESXi 5.0 that is in the ESXi50 folder; this is quite a brittle solution, as any additional file or difference in file name case will cause the hash to fail. In future, update releases of ESXi won't be supported and the folder name will identify the build; vSphere 5.1 will be supported but not vSphere 5.0 Update 2.
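Registering a copied-in binary as a service with SC looks roughly like this; the source and install paths are assumptions, and note that SC's parser requires a space after each `option=`:

```bat
rem copy the Tftpd32 files into place, then register and start the service
xcopy /e /y B:\Automate\DC\TFTP C:\TFTP\
sc create Tftpd32_svc binPath= "C:\TFTP\tftpd32_svc.exe" start= auto DisplayName= "Tftpd32 TFTP server"
sc start Tftpd32_svc
```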
The DHCP server has a scope added and additional options set on that scope using netsh commands. These are to enable PXE for the ESXi server builds as well as allow nested VMs to operate.
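The netsh commands for the scope look something like the following; the subnet, router address and PXE options (66, the boot server, and 67, the boot file name) are illustrative values, not necessarily the AutoLab's:

```bat
rem create the lab scope, add a lease range, then set router and PXE options
netsh dhcp server add scope 192.168.199.0 255.255.255.0 "Lab"
netsh dhcp server scope 192.168.199.0 add iprange 192.168.199.100 192.168.199.199
netsh dhcp server scope 192.168.199.0 set optionvalue 003 IPADDRESS 192.168.199.4
netsh dhcp server scope 192.168.199.0 set optionvalue 066 STRING 192.168.199.4
netsh dhcp server scope 192.168.199.0 set optionvalue 067 STRING pxelinux.0
```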
DNS is set up using the DNSCMD command. A reverse lookup zone is created for the lab subnet, along with both forward and reverse DNS entries for everything that isn't going to be a Windows domain member; the Windows servers will use dynamic DNS.
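DNSCMD makes this a couple of one-liners per host; for example (host names and addresses here are illustrative):

```bat
rem reverse zone for the lab subnet, then an A and a PTR record for one host
dnscmd . /zoneadd 199.168.192.in-addr.arpa /dsprimary
dnscmd . /recordadd lab.local host1 A 192.168.199.35
dnscmd . /recordadd 199.168.192.in-addr.arpa 35 PTR host1.lab.local
```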
The DC also provides the SQL server for the lab. SQL Server Express 2008 R2 is installed from the vCenter 5.0 installer folder, which avoids a separate download of the SQL Server installer. The SQL installer requires quite a lot of options to achieve the required configuration; happily, Microsoft provides an excellent reference about half way down this page on the MSDN web site.
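A silent SQL Server 2008 R2 Express install driven from the command line looks broadly like this; the instance name and sysadmin account shown are assumptions, and the reference page mentioned above documents every switch:

```bat
rem unattended SQL Server 2008 R2 Express install, database engine only
SETUP.EXE /Q /ACTION=Install /FEATURES=SQLEngine /INSTANCENAME=SQLEXPRESS ^
  /SQLSYSADMINACCOUNTS="LAB\Administrator" /TCPENABLED=1 ^
  /IACCEPTSQLSERVERLICENSETERMS /HIDECONSOLE
```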
If the SQL Management Studio installer is found on the Build share (renamed to sqlmsssetup.exe) then it is also installed. This is mainly for my convenience as I set up the AutoLab automation and need to check why it hasn't worked. SQL Management Studio isn't required to use the AutoLab, so it isn't mentioned in the Deployment Guide; not installing it saves quite a bit of time in the DC build.
Once the database engine is installed, SQLCMD.exe is used to create the databases and database users. This is kept as simple as possible and uses a text file, MakeDB.txt, to drive SQLCMD. Again there is good documentation on SQLCMD on this page and on the Transact-SQL commands on this page.
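MakeDB.txt is just Transact-SQL; a minimal sketch of the pattern, where the database and login names are illustrative rather than what the kit actually creates:

```sql
-- MakeDB.txt: create a database and a SQL login that owns it
CREATE DATABASE VCDB;
GO
CREATE LOGIN vpx WITH PASSWORD = 'ChangeMe1!', CHECK_POLICY = OFF;
GO
USE VCDB;
CREATE USER vpx FOR LOGIN vpx;
EXEC sp_addrolemember 'db_owner', 'vpx';
GO
```

The file is then fed to the tool in one shot, along the lines of `sqlcmd -S .\SQLEXPRESS -i MakeDB.txt`.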
After the SQL install and setup a series of smaller tasks are completed, copying PowerShell scripts locally, elevating privileges and setting the time zone.
The final step is a silent install of the VMware Tools, taken from the VMTools folder; after this install the DC does a final reboot.
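The VMware Tools silent install is a single command, something like this, assuming the setup binary has been copied out of the VMTools folder:

```bat
rem silent VMware Tools install, suppressing the automatic reboot
setup.exe /S /v"/qn REBOOT=R"
```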
Virtual Center Build
The VC build begins with the AutoUnattend.xml on floppy and the Windows Server 2008 DVD, just like the DC build. The VC build includes joining the lab.local domain, so it cannot begin until the DC build is complete. The script on the floppy hands over to the script from the Build share almost immediately. This script looks at Automate.ini on the build share to decide whether to prompt for the automation level.
The minimum amount of automated install is to install the VMware Tools, which is the last step of the build script and causes the VM to restart. This step is common to the end of all the automated sequences.
For any level of automation there are a few useful basics. The Microsoft SQL Native Client is a prerequisite for using SQL Server with vCenter. The SQL Native Client is included in the SQLExpress installer, which is part of the vCenter DVD contents; however, there doesn't seem to be any way to have the SQL Express installer install just the Native Client. We use the /extract command line switch with the SQLExpress installer and then pull the SQL Native Client installer MSI package from a folder in the extracted directory tree. Once again the path differs between versions of the SQLExpress installer, so there are some conditional copy statements in the build script. Once the SQL Native Client is installed, VMware Tools are installed and the server reboots.
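The extract-and-install dance for the Native Client looks roughly like this; the extract destination and the msi's path inside the extracted tree are assumptions (as noted, the path varies between SQLExpress versions):

```bat
rem unpack the SQLExpress installer, then install just the Native Client msi
SQLEXPR_x64_ENU.exe /extract:C:\SQLtmp /q
msiexec /i C:\SQLtmp\x64\setup\x64\sqlncli.msi /qb
```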
One of the tricks is creating ODBC DSNs for both vCenter and VUM. These are simply registry entries, but in quite different locations, as vCenter wants a 64-bit DSN and VUM wants a 32-bit one. Then vCenter and VUM are installed; VMware publish a useful guide to automating the installation of vCenter and the VUM plugin, and these steps are executed for the chosen version of vCenter. In addition the vSphere client and VUM plugin are installed, again with silent install switches. As much as possible the silent installs show status as they run; this helps pass the time while you wait for them to finish.
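A DSN is just a handful of values under ODBC.INI, and on a 64-bit OS the 32-bit view lives under Wow6432Node. A sketch of the registry entries, where the DSN, server and database names are illustrative:

```bat
rem 64-bit DSN for vCenter
reg add "HKLM\SOFTWARE\ODBC\ODBC.INI\vCenter" /v Driver /t REG_SZ /d "%SystemRoot%\system32\sqlncli10.dll" /f
reg add "HKLM\SOFTWARE\ODBC\ODBC.INI\vCenter" /v Server /t REG_SZ /d dc.lab.local /f
reg add "HKLM\SOFTWARE\ODBC\ODBC.INI\vCenter" /v Database /t REG_SZ /d VCDB /f
reg add "HKLM\SOFTWARE\ODBC\ODBC.INI\ODBC Data Sources" /v vCenter /t REG_SZ /d "SQL Server Native Client 10.0" /f
rem the 32-bit DSN for VUM is the same shape, under the Wow6432Node key instead
reg add "HKLM\SOFTWARE\Wow6432Node\ODBC\ODBC.INI\VUM" /v Server /t REG_SZ /d dc.lab.local /f
```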
Since the AutoLab uses an old OS for the nested guests, we need to provide the sysprep files. First 7-Zip is used to extract the deploy.cab file from the Windows install ISO on the build share, and then the sysprep files are extracted to the right folders on the VC.
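With 7-Zip on the path this is two extractions; the ISO path and the vCenter sysprep folder shown here are assumptions:

```bat
rem pull DEPLOY.CAB out of the 2003 ISO, then unpack the sysprep files
7z e B:\win2003.iso -oC:\Temp SUPPORT\TOOLS\DEPLOY.CAB -r
7z x C:\Temp\DEPLOY.CAB -o"C:\ProgramData\VMware\VMware VirtualCenter\sysprep\svr2003"
```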
The vSphere CLI and PowerCLI are installed using simple silent install switches, then there are a few user experience tunings to be done. As the VC VM is the primary place where work gets done with the AutoLab we have spent some time on cleaning up the desktop, providing convenient shortcuts and avoiding annoyances like the vSphere client Getting Started Tabs.
Nested Windows Install
The nested Windows install is Windows Server 2003, chosen for its small RAM requirements and ease of working; in the near future this may change to Windows Server 2008, as 2003 Server is not so easy to come by. We will need to stick to 32-bit editions, as some labs (like mine) don't support nested 64-bit VMs. The automation file is usually named unattend.txt; however, as with 2008, there is an alternate name, winnt.sif, that we use to have everything happen automatically. The unattend file goes on the floppy and the standard ISO is used to boot. The floppy image is built as part of the AddHosts script on the VC; the unattend file is updated with the Windows product key from the Automate.ini file and the floppy image is assembled onto the Build share.
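winnt.sif uses the old INI-style unattend format; a trimmed, illustrative fragment (the computer name and time zone are examples, and the real file takes its product key from Automate.ini):

```ini
[Data]
AutoPartition=1
MsDosInitiated=0
UnattendedInstall=Yes

[Unattended]
UnattendMode=FullUnattended
TargetPath=\WINDOWS

[UserData]
ProductKey=XXXXX-XXXXX-XXXXX-XXXXX-XXXXX
ComputerName=WS1

[GuiUnattended]
AutoLogon=Yes
TimeZone=85
```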
The only script is the one on the floppy image, unlike the 2008 servers that use scripts from the Build share. Alongside the script is a useful piece of software called LoadStorm. LoadStorm allows an arbitrary CPU and RAM load to be generated in the VM and is useful for causing performance issues and resource contention. It was written by Andrew Mitchell, a VCDX based in Australia; he doesn't seem to have a blog, but his LoadStorm is linked from this page on Yellow-Bricks. In order to install LoadStorm we first install Microsoft .NET 3.5 from under the vCenter 5.0 folder; afterwards we install the VMware Tools and finally reboot.
Bringing it home
Producing the automation of these builds tends to be an iterative process; each step has to be tested separately, and the further into the build a change is, the longer it takes to test. The good news is that you only have to get things right once, and the scripts usually keep doing the right thing.
The next post in the series will look at the automation of the ESXi server builds.