Friday, June 29, 2012

Disabling VAAI in VMware vSphere

vSphere APIs for Array Integration (VAAI) is a set of features introduced in vSphere 4.1 which allow for offloading certain storage-related tasks (e.g. VM cloning, disk zeroing etc.) from VMware hosts to the storage systems. VAAI is included in vSphere Enterprise and Enterprise Plus licensing and enabled by default on ESXi 4.1 and later hosts, but in order to work properly, VAAI also needs to be supported by the underlying storage system (usually achieved through a storage firmware update).


There are some setups in which it is recommended to completely disable VAAI - e.g. when using an EMC RecoverPoint fabric splitter or an EMC CX4 array with vSphere 5. This blog post describes how to disable the three base VAAI features introduced in vSphere 4.1, as well as the "Space Reclamation" (SCSI UNMAP) feature introduced in vSphere 5. Disabling VAAI is done on a per-host basis and doesn't require a host restart.


vSphere 4.1 VAAI features include:

  • Atomic Test & Set (ATS) - advanced VMFS file locking intended to replace traditional SCSI locks; host parameter is called HardwareAcceleratedLocking
  • Clone Blocks/Full Copy/XCOPY - for offloading copying/cloning/storage vMotion operations to the array; host parameter is called HardwareAcceleratedMove
  • Zero Blocks/Write Same - for offloading disk zeroing (when creating eager zeroed thick disks) to the storage array; host parameter is called HardwareAcceleratedInit


Disabling VAAI using vSphere Client



In order to disable the three base VAAI features, select your host in the vCenter inventory, choose the Configuration tab and select Advanced Settings. Then change the following settings to 0:


  • DataMover.HardwareAcceleratedMove
  • DataMover.HardwareAcceleratedInit
  • VMFS3.HardwareAcceleratedLocking



Disabling VAAI using esxcli


Note: help on accessing the host through the CLI can be found in a previous blog post - VMware ESXi 5 CLI Commands Part 1.

In order to disable VAAI features using esxcli commands through ESXi shell or SSH in vSphere 5, type away:

# esxcli system settings advanced set --int-value 0 --option /DataMover/HardwareAcceleratedMove

# esxcli system settings advanced set --int-value 0 --option /DataMover/HardwareAcceleratedInit

# esxcli system settings advanced set --int-value 0 --option /VMFS3/HardwareAcceleratedLocking
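To verify the change (or check the current state before changing anything), you can list each option's value - the Int Value field should read 0 once the feature is disabled:

# esxcli system settings advanced list --option /DataMover/HardwareAcceleratedMove

On ESX/ESXi 4.1 hosts, where this esxcli namespace is not available yet, the same options can be set with the legacy esxcfg-advcfg command, for example:

# esxcfg-advcfg -s 0 /DataMover/HardwareAcceleratedMove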


Disabling SCSI UNMAP


This is a new VAAI feature introduced in vSphere 5, which allows for reclaiming space on the storage system after a file is deleted from a VMFS datastore. Shortly after vSphere 5 was released, it was determined that this feature can cause problems with certain storage systems and storage vMotion / snapshot creation operations, so VMware recommended disabling it completely (see VMware KB 2007427 - Disabling VAAI Thin Provisioning Block Space Reclamation (UNMAP) in ESXi 5.0). 

Since ESXi 5.0 Patch 2 (ESXi build number 515841, released on December 15, 2011) this feature is disabled by default (ESXi 5.0U1 keeps it disabled, but introduces an option to run Space Reclamation manually from the CLI - see VMware KB 2014849), so if you're using ESXi 5.0 with a lower build number, you can either patch your hosts to Patch 2 level, or use the following workaround from the CLI.
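For reference, the manual Space Reclamation available from ESXi 5.0 U1 is run with vmkfstools from the root of the datastore in question; the number is the percentage of free space to reclaim, and 60 here is just an example (see KB 2014849 for details):

# cd /vmfs/volumes/<datastore name>
# vmkfstools -y 60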

To check whether this feature is enabled on your host:

# esxcli system settings advanced list --option /VMFS3/EnableBlockDelete

To disable it, type:

# esxcli system settings advanced set --int-value 0 --option /VMFS3/EnableBlockDelete

Friday, June 8, 2012

Configuring EVC on a vSphere cluster

Enhanced vMotion Compatibility (EVC) is a feature of vSphere clusters which allows vMotion between hosts with CPUs of different generations (e.g. a host with an Intel Xeon E5520 CPU and a host with an Intel Xeon E5-2630 CPU). If you have hosts with different generation CPUs, properly setting EVC is a prerequisite for adding these hosts to the same cluster. Setting EVC on a cluster basically forces CPUs of all hosts in the cluster to use a common baseline - a set of instructions which is compatible with all CPUs in the cluster. 


EVC is a vSphere feature that can be enabled regardless of the vSphere license edition you're using. It can be enabled on a cluster by right-clicking a cluster in your inventory -> Edit Settings -> VMware EVC -> Change EVC Mode.


Enabling VMware EVC on a vSphere cluster


After choosing Change EVC mode, you will be presented with options for Enabling EVC for AMD hosts or Enabling EVC for Intel hosts. Select your CPU of choice and head to this VMware KB article - Enhanced vMotion Compatibility (EVC) processor support.


This KB article is the best possible EVC reference, in which you can find which EVC cluster baselines are supported by different CPU models. All you need to do is find a baseline which is supported by all of your host CPUs and configure EVC to work in this mode. Although you can choose any baseline supported by all CPUs, it is recommended to choose the highest such baseline, for reasons described at the end of the post. For example, if you need to create a cluster with the previously mentioned E5520 hosts and E5-2630 hosts, you would set "Intel Nehalem Gen." as your EVC mode. The EVC menu can also be of help, because when you choose an EVC mode it will tell you whether the hosts which are already a part of the cluster support that EVC mode.


There are a few things which you need to have in mind when configuring EVC in an existing cluster: 

  • when EVC is disabled, this is equivalent to every host CPU working with its highest supported EVC baseline
  • if you have a cluster with older hosts (e.g. with E5520 CPUs) with EVC disabled, and you need to add new host(s) to the cluster (e.g. with E5-2630 CPUs), you need to first enable EVC in the proper mode and then add the new hosts; this can be done without any disruption - setting a CPU to work with its highest supported baseline (in this example "Intel Nehalem") can be done with VMs running on the hosts
  • if you need to add an older host (e.g. a server left over after virtualizing a service it used to run) to a cluster with EVC disabled and newer hosts which are running VMs, you will be configuring hosts to work in a lower EVC baseline than they currently use; in order to do this, all hosts in the cluster have to be in maintenance mode, which means that all VMs running on them need to be powered off (or migrated to a different cluster, if possible)


Friday, June 1, 2012

Changing VMware ESXi host logging level

Collecting logs from an ESXi host is a recommended best practice, but if you ever tried to collect ESXi host logs over syslog, you may have noticed that ESXi hosts can be very chatty and spam your syslog server relentlessly. 


There are 8 logging levels for the ESXi host agent (hostd); sorted in increasing order by the amount of information logged, these are: "none", "quiet", "panic", "error", "warning", "info", "verbose" and "trivia". The vCenter agent (vpxa) has 6 logging levels: "none", "error", "warning", "info", "verbose" and "trivia". The default logging level for both the host and vCenter agent in ESXi 5.x is "verbose", and this is the reason why your syslog server is filled with a lot of seemingly useless information.


If you want to change the logging level, fire up your vSphere Client, connect to vCenter or the host, select a host in the inventory, then the Configuration tab -> Advanced Settings -> Config -> HostAgent -> Log. Here you'll be able to review all logging levels for the hostd and vpxa agents and change them by typing in the name of the desired level (hostd) or selecting it from a drop-down menu (vpxa). This setting takes effect immediately after clicking OK (no host or hostd/vpxa restart is required).
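If you prefer the CLI, the hostd logging level can also be changed from the ESXi shell - a hedged example, assuming the Config.HostAgent.log.level option is exposed as an advanced string option on your ESXi build:

# esxcli system settings advanced set --string-value info --option /Config/HostAgent/log/level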


ESXi host hostd and vpxa logging levels

Monday, May 28, 2012

Disabling VM device hot add/remove

Since virtual machine hardware version 7 (introduced by vSphere 4.0), you may have noticed that the VM's network adapter and hard drives appear among the devices available for removal when you click the Safely Remove Hardware icon in the Windows taskbar. This can cause problems, especially in VDI environments, where users can accidentally remove the VM's network adapter instead of e.g. their USB device, resulting in the VM's immediate loss of connectivity and tech support calls.

You can avoid situations like this if you completely disable virtual machine device hot add/remove. This way, the network adapter and hard drives won't appear in the Safely Remove Hardware menu, but keep in mind that setting this option will also prevent you from hot adding/removing ANY device from the VM (network adapter, hard drive, USB controller etc.), which means that you'll have to shut down the VM in order to add/remove any part of its virtual hardware. CPU / memory hot add/remove is not affected by this setting.

In order to disable VM device hot add/remove, shut down the VM, right-click it and select Edit Settings, then choose the Options tab -> General -> Configuration Parameters. Click Add Row and add a row with the name devices.hotplug and value false, confirm the setting with a couple of OKs and power on the virtual machine again. The following picture illustrates how your Configuration Parameters dialog should look after adding this setting.
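If you later check the VM's .vmx file (or prefer to add the setting by editing the file directly while the VM is powered off), the corresponding entry should look like this:

devices.hotplug = "false"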


Disable virtual machine device hot add/remove with devices.hotplug advanced parameter.

Thursday, May 24, 2012

Join ESXi host to Active Directory Domain

ESXi hosts can be joined to Active Directory, or more precisely can use Active Directory for authenticating users, which allows for assigning permissions to domain users on the host level. This post describes the procedure for joining an ESXi host to the domain through vSphere Client and vSphere CLI, and you can alternatively use Host Profiles or PowerCLI for performing the same task. Unlike for Windows machines, joining ESXi to a domain doesn't require a reboot.

First you need to make sure that your host can reach your domain controllers and resolve the FQDN of your domain, which is commonly accomplished by setting your domain controllers / DNS servers for your domain as the host's DNS servers (select a host in vSphere Client -> choose Configuration tab -> DNS and routing -> Properties -> set Preferred and Alternate DNS server to the appropriate addresses).



Joining host to a domain through vSphere Client


In the host's Configuration tab, select Authentication Services option and then Properties in the upper right corner. From the drop down menu, select "Active Directory" as the Directory Service type, type the FQDN of your domain and select Join Domain as shown in the following picture. 

ESXi Directory Service Configuration menu

After this, you will be prompted to enter the credentials of a domain user with enough privileges to join a computer to the domain (you can enter the username in <domain FQDN>\<user>, <user>@<domain FQDN> or just <user> format). Alternatively, you can use vSphere Authentication Proxy, which is a new feature introduced in vSphere 5 and represents a server which securely stores credentials for joining AD (commonly used in environments with Auto Deploy hosts so that these credentials don't have to be stored as a part of the Host Profile).



Joining host to a domain through vSphere CLI


You can also join the host to a domain through vSphere CLI. Power up the vSphere CLI on your client machine and type away:


vicfg-authconfig --server=<IP address/DNS name of your host> --username=<username of the administrative user on the host> --password=<password of the administrative user on the host> --authscheme AD --joindomain <FQDN of your domain> --adusername=<username of AD user with privileges to join computer to a domain> --adpassword=<password of AD user>
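For illustration, a filled-in command with hypothetical host and domain names (adjust all values to your environment) might look like this:

vicfg-authconfig --server=esxi01.example.com --username=root --password=<root password> --authscheme AD --joindomain example.com --adusername=Administrator --adpassword=<AD user password>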

After you've joined a host to the domain, you may notice a new computer object for the host created in the default Computers container in AD. You can move this object to the appropriate OU according to your AD structure, but since ESXi is not a Windows machine, obviously don't expect your Group Policies to apply to it :)


Assigning host permissions to domain users


When a host uses Active Directory for authentication, you can assign host privileges to domain users, which is useful in cases when you e.g. don't have a vCenter Server, but only a standalone host (when you have a vCenter Server joined to a domain, you can assign vCenter roles at the host level to domain users even if your host is not a part of the domain). Connect to your host using vSphere Client, right-click your host and select Add Permission. When you select Add in the Users and Groups part of the screen, you'll notice that you can choose between local users (marked as (server) in the Domain drop-down) and AD users.


Also, when a host is a part of the domain, you can assign Administrator role on a host level to domain members in a very simple way. What you need to do is to create a domain security group called "ESX Admins" (note that it's ESX not ESXi in the name), and all domain users which are members of this group are automatically assigned the Administrator role on the ESXi servers in the domain. These users can also log on to host locally through vSphere Client, SSH or ESXi Shell.


Leaving the domain


If you decide to remove the host from the domain and switch back to local user authentication, you can do this through vSphere Client by selecting the host in the inventory, Configuration tab -> Authentication Services, and choosing Leave Domain. The host will then continue to authenticate only locally created users (e.g. root), and you can delete the computer object representing your host from the domain.

Sunday, May 20, 2012

Configuring multiple vMotion VMkernel port groups in vSphere 5

vMotion has been completely rewritten in vSphere 5, and several improvements to the mechanism have been added, including the possibility to saturate 10Gbps links when performing a live migration, as well as improved convergence in cases when virtual machine memory changes faster than the vMotion transfer rate. Complete list of improvements and best practices for vMotion in vSphere 5 can be found in this VMware document - VMware vSphere vMotion Architecture, Performance and Best Practices in VMware vSphere 5.

One of the improvements is the ability to configure multiple port groups (using different uplinks) for vMotion, and the mechanism can now use them simultaneously to migrate VMs, even in cases when only one VM is migrated. In this way, if you have a 1Gbps vMotion network between hosts, you can utilize multiple host NICs for vMotion migration and therefore benefit from improved throughput, resulting in faster vMotion migrations (which can be very useful especially in cases when you need to e.g. migrate all VMs from a host in order to perform maintenance tasks).


Configuring vMotion to use multiple host NICs is very simple - you need to create two VMkernel port groups on a virtual switch with different IP addresses and in the appropriate vMotion VLAN, mark them to be used for vMotion (Edit port group settings -> General tab -> mark the Enabled field next to vMotion) and edit their NIC Teaming settings so that each uses a different vSwitch uplink as the active uplink.


For example, if vmnic0 and vmnic1 are the uplinks of your virtual switch where these port groups are located, and you created port groups vmotion1 and vmotion2, you would configure vmotion1 to use vmnic0 as the active and vmnic1 as the standby adapter, and configure vmotion2 to use vmnic1 as the active and vmnic0 as the standby adapter.


This is how the NIC Teaming configuration should look on the port groups:


vSphere 5 Multiple NIC vMotion port group configuration

vSphere 5 Multiple NIC vMotion port group configuration




These are the two vMotion port groups on the vSwitch:


vSphere 5 Multiple NIC vMotion vSwitch configuration




If you are using distributed switches, you need to create two port groups with the same uplink configuration as described above, create two Virtual Adapters for vMotion on every host and bind them to different port groups previously created on the distributed switch.


The procedures described cover the most common case when you have two uplinks on the management / vMotion vSwitch, but vSphere 5 supports using up to 16 1Gbps or 4 10Gbps uplinks for Multi-NIC vMotion in this way. If you have more than two uplinks available and configure more than two vMotion port groups, all you need to do is configure one uplink as active per port group and all the other uplinks as standby.
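For example (an illustrative layout with hypothetical port group names, following the same logic as above), with three uplinks the teaming settings would be:

vmotion1 - active: vmnic0, standby: vmnic1, vmnic2
vmotion2 - active: vmnic1, standby: vmnic0, vmnic2
vmotion3 - active: vmnic2, standby: vmnic0, vmnic1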


Detailed configurations steps and videos for setting up Multiple NIC vMotion can be found in this VMware KB - Multiple-NIC vMotion in vSphere 5.


One word of advice for the end - be sure to isolate vMotion in a separate VLAN/subnet from your Management / IP storage / VM networks if you haven't already done so per VMware best practices. I've seen cases where performing a vMotion migration in vSphere 5 caused loss of connectivity to the hosts, or between a host and iSCSI/NAS storage, when the same VLAN/subnet (and even the same port group) was used for Management, IP storage and vMotion traffic.

Thursday, May 17, 2012

Removing the "ESXi Shell / SSH for the host has been enabled" warning in VMware ESXi 5

Since vSphere 5, you can remove the annoying warning that appears on ESXi hosts when you turn the ESXi Shell or SSH on. This warning is shown on the host's Summary tab and therefore can't be removed by simply going to the Alarms tab and acknowledging / clearing it.


ESXi Shell / SSH for the host has been enabled warning in VMware ESXi

Both of these warnings can now be removed by going to the host's advanced parameters (Configuration tab -> Advanced Settings), selecting UserVars on the left side and setting the parameter UserVars.SuppressShellWarning to 1.
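The same can be done from the ESXi shell or SSH with esxcli (note that the option path mirrors the Advanced Settings tree):

# esxcli system settings advanced set --int-value 1 --option /UserVars/SuppressShellWarning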

Monday, May 14, 2012

Updating HP ESXi image using vSphere Update Manager

When using HP servers, the recommended practice is to use the customized HP ESXi image for installation, which contains the base VMware ESXi plus HP-specific utilities, tools and drivers (the same goes for other server vendors, which usually have their own customized ESXi installation images). The following post describes how to proceed with patching hosts running the HP ESXi 5.x image, while the complete official HP document on deploying and patching HP ESXi can be found here - Deploying and updating VMware vSphere 5.0 on HP ProLiant Servers.

A host with the HP ESXi image installed can be updated using vSphere Update Manager and the default download repositories, but it is necessary to ensure that the HP-specific VIBs that are part of the customized image are also regularly updated. This can be accomplished by adding the HP VIB depot as one of the Download Sources in vSphere Update Manager, creating a baseline that contains HP updates and attaching this baseline to the hosts running the HP ESXi image.

First, connect with vSphere Client to your vCenter, go to Home -> Solutions and Applications -> Update Manager (if you don't see this option, you probably need to install vSphere Update Manager plugin in your vSphere Client - check Plug-ins -> Manage Plug-ins on top of vSphere Client screen).

In order to add HP's patch repository, go to the Configuration tab -> Download Settings -> Add Download Source and paste the following URL: http://vibsdepot.hp.com/index.xml. You can validate this address, and once the connection to the HP depot checks out, select Apply below the list of Download Sources. Then choose Download Now to check the new patch repository for patches.
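If you want to take a peek at what the depot offers outside of Update Manager, the VIBs it contains can also be listed directly from an ESXi shell, assuming the host can reach the URL (a hedged example):

# esxcli software sources vib list --depot=http://vibsdepot.hp.com/index.xml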

Next, we need to create a baseline with HP updates. Click on the Baselines and Groups tab, then select Create above the list of existing baselines. Name the new baseline, select Host Patch as the baseline type, click Next, choose Dynamic and then Next again.

Now it's necessary to define rules which patches need to match in order to be a part of the baseline. In the Patch Vendor column, select both "hp" and "Hewlett-Packard Company" and then you can click Next to the finish of the wizard.

After creating the baseline, we need to attach it to hosts. Go to the Home -> Hosts and Clusters view and select a vCenter, datacenter, cluster or a single host to which you want to attach the baseline. Then go to the Update Manager tab, choose Attach... in the upper right corner, select the newly created baseline and click Attach.

After attaching the baseline, each new Scan operation will result in hosts also being checked for compliance with the updates released for HP specific VIBs.

Sunday, May 13, 2012

Installing VMware Tools on Linux Guest OS

This post describes the general procedure for installing VMware Tools on Linux Guest OSs from the command line. It presumes that you are logged in as root to the VM.

First, right click the virtual machine from vSphere Client and choose VM -> Guest -> Install/Upgrade VMware Tools (equivalent on VMware Workstation / Player would be Virtual Machine -> Install VMware Tools..)

This results in an .iso file with the VMware Tools package being mounted in the CD/DVD drive of the virtual machine.

Create a directory as a mount point for the CD/DVD drive:

mkdir /mnt/vmt

Mount the CD/DVD drive:

mount /dev/cdrom /mnt/vmt

If you do ls /mnt/vmt, you'll see that the installation .iso contains one .tar.gz file with the packed installation. Since this is a read-only mount point, we need to extract this .tar.gz to a different directory that is writable.

First create the new directory:

mkdir /vmtools

Then unpack the contents of the installation .tar.gz file to the new directory:

tar xzf /mnt/vmt/VMwareTools-<xxxx>.tar.gz  -C /vmtools

(the name of the .tar.gz file varies depending on the installed ESXi/Workstation/Player version).

Run the extracted installation file:

cd /vmtools/vmware-tools-distrib
./vmware-install.pl

Proceed to answer installation prompts.
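If you just want to accept all the defaults without answering each prompt, the installer also supports a default-answers switch (check the installer's help output if your VMware Tools version differs):

./vmware-install.pl -d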

Unlike on Windows, installation of VMware Tools on Linux Guest OSs does not require a reboot. After the installation is finished, you can check the status of the VMware Tools service by typing:

service vmware-tools status

or, if you're using RHEL or CentOS >=6.0 as the guest OS:

status vmware-tools

since these OSs use Upstart instead of legacy init for starting the VMware Tools service.

After the installation is finished, perform cleanup:

umount /mnt/vmt (if not already done by the installer)
rmdir /mnt/vmt
rm -rf /vmtools

Saturday, May 12, 2012

VMware ESXi 5 CLI Commands Part 1


Since vSphere 5, the only hypervisor available in the VMware virtualization suite is ESXi. One of the main differences in ESXi compared to ESX is the lack of the Service Console, which was basically a stripped-down RHEL used for communicating with the VMkernel. As a result, CLI command syntax in ESXi differs significantly.


This post covers a few of the most important CLI commands in ESXi, while a future post will cover esxcli, a new CLI framework that will eventually completely replace the legacy esxcfg-*/vicfg-* commands.


You can directly access command line in ESXi 5.x either through the console (so called ESXi Shell) or SSH, but you need to first enable it through vSphere Client (Configuration -> Security Profile -> Services -> ESXi shell / SSH -> Options -> Start) or console menu (F2 -> login as user with root privileges -> Troubleshooting Options -> Enable ESXi Shell / Enable SSH).
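Once you have console access to the shell, SSH can also be enabled and started with vim-cmd (handy for scripting); on ESXi 5.x the following should be equivalent to the vSphere Client procedure:

vim-cmd hostsvc/enable_ssh
vim-cmd hostsvc/start_ssh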



Show services on the host and their default state upon boot (on/off)


cat /etc/chkconfig.db




Restart all services on the host


/sbin/services.sh restart




Show config file of vCenter vpxa agent


cat /etc/vmware/vpxa/vpxa.cfg




Enter maintenance mode


vim-cmd hostsvc/maintenance_mode_enter
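
Exit maintenance mode (the counterpart of the previous command)


vim-cmd hostsvc/maintenance_mode_exit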




Virtual machine operations


vim-cmd solo/registervm <path to VM's .vmx file> - register a VM on a host
vim-cmd vmsvc/getallvms - list all VMs registered on the host


* in the following commands, replace <vmid> with the virtual machine ID obtained by running the previous command


vim-cmd vmsvc/power.getstate <vmid> - show power state of a VM
vim-cmd vmsvc/power.shutdown <vmid> - shut down a VM (shutdown guest)
vim-cmd vmsvc/power.reset <vmid> - reset a VM
vim-cmd vmsvc/power.off <vmid> - power off a VM
vim-cmd vmsvc/power.on <vmid> - power on a VM
vim-cmd vmsvc/power.reboot <vmid> - reboot a VM
vim-cmd vmsvc/get.summary <vmid> - get summary information for a VM
vim-cmd vmsvc/unregister <vmid> - unregister a VM from a host