A. Mikkelsen

VMware ESX scripts, commands, tools and other nice-to-know things that will make your virtualization days easier!


If you haven’t read, or at least heard of, the must-have PowerCLI book “VMware vSphere PowerCLI Reference: Automating vSphere Administration“ by Luc Dekens, Alan Renouf, Glenn Sizemore, Arnim van Lieshout and Jonathan Medd, then you need to check it out.

The book will show you how to automate your VMware infrastructure, from vCenter to VMs.

  • Automate installations
  • Create and configure VMs
  • Secure your environment
  • Create reports

and much more.

Read a few chapters from the book or buy it (like I did :-)) at:

Download the PowerCLI examples from each chapter:

When you upgrade your vSphere environment, you normally also upgrade the VMs’ virtual hardware to version 7 to take advantage of the new features. This is standard procedure for all VMware admins.

But in some very rare cases you might need to move a VM that was upgraded to hardware version 7 to a host that doesn’t support VMs running hardware version 7 – for example, from a host running ESX 4.x to a host running ESX 3.x.

So what to do?
There are two ways you can accomplish this task.

The first way is to use the free VMware Converter tool.
Others have already written great guides for this, so I won’t repeat them here.
The only drawback is that converting the VM can take some time, but it is a proven and stable method.

The other way is to do it manually. This way is a lot faster, but there is a risk that it will corrupt the VM, so make sure you have a working backup.
Use this guide at your own risk.

  • Power off the VM
  • Make sure the VM doesn’t have any snapshots before proceeding
  • From the ESX console or from a PuTTY session, edit the VM’s VMX file, using your favorite editor
    vi /vmfs/volumes/DS1/WIN2008-001/WIN2008-001.vmx
  • Change the virtual hardware version from:
    virtualHW.version = "7"

    to:

    virtualHW.version = "4"
  • You don’t need to change config.version = "8", since ESX 3.x already uses this version
  • Change the virtual SCSI controller, because virtual hardware version 4 doesn’t understand the version 7 controller, from:
    scsi0.virtualDev = "lsisas1068"

    to:

    scsi0.virtualDev = "lsilogic"
  • From the ESX console or from a PuTTY session, edit the VM’s VMDK descriptor file/files (one per virtual disk), using your favorite editor
    vi /vmfs/volumes/DS1/WIN2008-001/WIN2008-001.vmdk
  • Change the virtual hardware version from:
    ddb.virtualHWVersion = "7"

    to:

    ddb.virtualHWVersion = "4"
  • You should now be able to power on the VM as virtual hardware version 4.
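The edits above can also be scripted. Below is a minimal sketch that runs the two VMX substitutions with sed on a scratch copy; the datastore path, VM name, and file contents are hypothetical, so adjust them to your environment, and always keep a backup of the real files.

```shell
# Demo on a scratch file; on a real host the file lives under
# /vmfs/volumes/<datastore>/<vm>/<vm>.vmx (names here are made up)
cat > /tmp/WIN2008-001.vmx <<'EOF'
config.version = "8"
virtualHW.version = "7"
scsi0.virtualDev = "lsisas1068"
EOF

# Keep a backup before touching the file
cp /tmp/WIN2008-001.vmx /tmp/WIN2008-001.vmx.bak

# Downgrade the hardware version and swap the SAS controller for
# one that virtual hardware version 4 understands
sed -i -e 's/^virtualHW.version = "7"/virtualHW.version = "4"/' \
       -e 's/^scsi0.virtualDev = "lsisas1068"/scsi0.virtualDev = "lsilogic"/' \
       /tmp/WIN2008-001.vmx

cat /tmp/WIN2008-001.vmx
```

The same pair of substitutions, with the key changed to ddb.virtualHWVersion, works for the VMDK descriptor file.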

Using IBM Blades and Cisco switches to run your ESX environment?

If yes, have you tested what happens if you unplug the network cables going into one switch?

If, like me, you have bundled two or more cables going from one switch to one backbone switch, and done the same for the other switch, then the VMs using that switch will lose network connectivity (from outside the host).
This is not the way I wanted the setup to work.

After a bit of googling I found a blog post from Scott Lowe (http://blog.scottlowe.org/2007/06/22/link-state-tracking-in-blade-deployments/) about the problem, and also a solution.
The solution is called Link State Tracking. Many users have tried it and gotten it to work, so I had to test it myself.

I added the following lines to each of the blade switches (Port-Channel, group and interface numbers may differ on your system).

----------UPLINK to CORE switch------------
interface Port-Channel1
link state group 1 upstream

----------LINK to Blade server------------
interface range GigabitEthernet0/1 - 14
link state group 1 downstream

----------Global command------------
link state track 1

The same commands entered as one sequence in configuration mode:

conf t
interface Port-Channel1
link state group 1 upstream
interface range GigabitEthernet0/1 - 14
link state group 1 downstream
link state track 1

Remember to write the changes to memory using:

copy running-config startup-config
After this was done on both blade switches, I just had to test it.
I started a ping to a VM that I knew was using Switch1 to communicate with the external network.
Then I unplugged the two network cables going into Switch1 and waited to see if the ping would lose communication with the VM.
It didn’t lose connection, so the VM must have switched to Switch2.

So configuring the blade switches for Link State Tracking is the proper way to set them up.
A big thanks goes to Scott Lowe for the blog on Link State Tracking.

Yesterday I made a mistake and extended a disk on a VM that had snapshots, using vmkfstools.
Kind of like this thread (http://communities.vmware.com/thread/238035).

This resulted in the following PowerOn error:

Failed to power on Servername on Host in Cluster:

Cannot open the disk ‘/vmfs/volumes/LUN/Folder/VM.vmdk’
Reason: The parent virtual machine disk has been modified since the child was created

The server was a database server, so I had no choice but to fix it.

I tried the following with no luck:

  • Reverting to snapshot didn’t help – (Don’t try this if you don’t have a good backup)
  • Shrinking the vmdk again using vmkfstools – (This has not been possible since ESX 3.0)

Then I tried to use VMware Converter to do a V2V. On the first try (all defaults) that didn’t help either; the VM started with a BSOD. I tried again with the Converter, but this time I changed one default parameter: in the “View/Edit Options” tab, remove the check mark from “Reconfigure destination virtual machine” and click Yes to the warning. Removing the reconfigure option saved my day.
The VM started and I was able to restore the latest files from backup.

Duncan Epping has released a great post on how to partition your ESX 4.0 (vSphere) host using the scripted and the graphical installer.


You cannot define the sizes of the /boot, vmkcore, and /vmfs partitions when you use the graphical or text installation modes. You can define these partition sizes when you do a scripted installation.

The ESX boot disk requires 1.25GB of free space and includes the /boot and vmkcore partitions. The /boot partition alone requires 1100MB.

The vmkcore partition is created automatically by the installer.

/     - 5120MB
Swap  - 1600MB
Extended Partition:
/var  - 4096MB
/home - 2048MB
/opt  - 2048MB
/tmp  - 2048MB
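In a scripted installation, the sizes above map to lines in the kickstart file. The fragment below is only a sketch of the ESX 4.0 kickstart partitioning syntax from memory (disk layout and the datastore name Storage1 are placeholders); verify it against the ESX installation guide and Duncan’s post before using it.

```
# /boot and vmkcore live on the physical boot disk
part /boot --fstype=ext3 --size=1100 --onfirstdisk
part none --fstype=vmkcore --size=110 --onfirstdisk
part Storage1 --fstype=vmfs3 --size=20000 --grow --onfirstdisk

# The service console runs from a virtual disk on the VMFS datastore
virtualdisk esxconsole --size=17000 --onvmfs='Storage1'
part swap  --fstype=swap --size=1600 --onvirtualdisk='esxconsole'
part /var  --fstype=ext3 --size=4096 --onvirtualdisk='esxconsole'
part /home --fstype=ext3 --size=2048 --onvirtualdisk='esxconsole'
part /opt  --fstype=ext3 --size=2048 --onvirtualdisk='esxconsole'
part /tmp  --fstype=ext3 --size=2048 --onvirtualdisk='esxconsole'
part /     --fstype=ext3 --size=5120 --grow --onvirtualdisk='esxconsole'
```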

Today, when I needed to upgrade my VC database from MSSQL 2000 to MSSQL 2005, I came across these quick guides.



Over the past few months we have seen a few Windows servers with a black screen.

  • You can’t see the logon prompt
  • You get a black screen when you connect with RDP

We found that the problem was caused by a change in the Windows color scheme.

The solution is to copy the color scheme from a similar Windows server’s registry and add it to the VM/server that has the problem, using the Registry Editor to connect to the remote server.

  1. On a similar Windows server, locate “[HKEY_USERS\.DEFAULT\Control Panel\Colors]” and export it to a file.
  2. Using the same Registry Editor, connect to the remote server.
  3. Import the registry file just created, or change the color scheme manually.
  4. Reboot the affected server to apply the color scheme.
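The export/import can also be done from a command prompt with reg.exe. A sketch, assuming you can still reach a command prompt on the affected server (for example via psexec), since reg import does not work against a remote machine:

```
rem On a server with a working color scheme: export the default scheme
reg export "HKU\.DEFAULT\Control Panel\Colors" colors.reg

rem Copy colors.reg to the affected server, then run there:
reg import colors.reg
```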

Default color scheme for a Windows 2003 server.
Default color scheme for Windows XP.
Default color scheme for a Windows 2000 server.
Default color scheme for a Windows 2008 server.

But ESX as a VM with running VMs is new.

It’s now possible to run ESX as a VM on an ESX server or in a Workstation.

See howto

Right after lunch today I had a host (3.5 U3) crash with a kernel panic.

It shouldn’t be a problem for clusters running HA, but on this cluster I had disabled HA because of an error and hadn’t had the time to debug it :-(.
So my VMs couldn’t be started on a new host, and I wasn’t going to manually register 116 VMs on another host – so I was forced to find a solution.

I did a bit of googling and found VMware KB10196.
I followed the steps and everything is now working – AND HA IS ENABLED 🙂

These are the steps I followed:

  1. Reboot the host into “VMware ESX Server (Debug mode)”
  2. Log in with a user with root permissions and run the following commands.
    esxcfg-boot -p (reloads the PCI data)
    esxcfg-boot -b (sets up boot information)
    esxcfg-boot -r (refreshes initrd)
  3. Now just reboot the host.

After the reboot the host should work as normal. From the VC Client you can verify that the host is connected to VC.

Since we started allowing Windows 2008 Servers in our VMware VI environment, we have been having problems sysprepping them.
So here is a quick guide to sysprep a Windows 2008 server in a VMware VI environment.
(Use this workaround until VMware VI allows you to run sysprep against a WIN2008 Server.)

  1. Change the source VM’s or template’s Guest Operating System setting to “Vista (32 bit)” or “Vista (64 bit)”, depending on whether the Windows 2008 installation is 32- or 64-bit.
  2. Clone the VM or template; you are now able to customize your Win2008 server with sysprep.
  3. After the cloning is done, power on the new VM and let the customization complete.
  4. Shut down the VM and change the Guest Operating System setting back to “Windows Server 2008 (32/64 bit)”.

The reason the above workaround works is that Vista and Server 2008 both have sysprep built into the OS, and the sysprep in both OSes is based on the same technology.

Read more here



