Friday, June 29, 2012

Disabling VAAI in VMware vSphere

vSphere APIs for Array Integration (VAAI) is a set of features introduced in vSphere 4.1 which allows for offloading certain storage-related tasks (e.g. VM cloning, disk zeroing, etc.) from VMware hosts to the storage systems. VAAI is included in vSphere Enterprise and Enterprise Plus licensing and is enabled by default on ESXi 4.1 and later hosts, but in order to work properly, VAAI also needs to be supported by the underlying storage system (usually achieved through a storage firmware update).
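
If you're not sure whether a particular device actually supports these primitives, ESXi 5.x can report per-device VAAI status from the ESXi Shell (to the best of my knowledge this command namespace exists only in 5.x, not in 4.1):

# esxcli storage core device vaai status get

Adding -d followed by the device's NAA identifier limits the output to a single device.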


There are some setups in which it is recommended to completely disable VAAI - e.g. when using an EMC RecoverPoint fabric splitter or an EMC CX4 array with vSphere 5. This blog post describes how to disable the three base VAAI features available since vSphere 4.1, as well as the "Space Reclamation" (SCSI UNMAP) feature introduced in vSphere 5. Disabling VAAI is done on a per-host basis and doesn't require a host restart.


vSphere 4.1 VAAI features include:

  • Atomic Test & Set (ATS) - advanced VMFS file locking intended to replace traditional SCSI locks; the host parameter is called HardwareAcceleratedLocking
  • Clone Blocks/Full Copy/XCOPY - for offloading copying/cloning/storage vMotion operations to the array; the host parameter is called HardwareAcceleratedMove
  • Zero Blocks/Write Same - for offloading disk zeroing (when creating eager zeroed thick disks) to the storage array; the host parameter is called HardwareAcceleratedInit


Disabling VAAI using vSphere Client



In order to disable the three base VAAI features, select your host in the vCenter inventory, open the Configuration tab and choose Advanced Settings. Then change the following settings to 0:


  • DataMover.HardwareAcceleratedMove
  • DataMover.HardwareAcceleratedInit
  • VMFS3.HardwareAcceleratedLocking



Disabling VAAI using esxcli


Note: help on accessing the host through the CLI can be found in a previous blog post - VMware ESXi 5 CLI Commands Part 1.

In order to disable the VAAI features using esxcli through the ESXi Shell or SSH in vSphere 5, type away:

# esxcli system settings advanced set --int-value 0 --option /DataMover/HardwareAcceleratedMove

# esxcli system settings advanced set --int-value 0 --option /DataMover/HardwareAcceleratedInit

# esxcli system settings advanced set --int-value 0 --option /VMFS3/HardwareAcceleratedLocking
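
You can verify that the changes took effect by listing each option and checking that Int Value is now 0 (shown here for one of the three options; the other two work the same way):

# esxcli system settings advanced list --option /DataMover/HardwareAcceleratedMove

To re-enable a feature later, run the corresponding set command with --int-value 1.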


Disabling SCSI UNMAP


This is a new VAAI feature introduced in vSphere 5 which allows for reclaiming space on the storage system after a file is deleted from a VMFS datastore. Shortly after vSphere 5 was released, it was determined that this feature can cause problems with certain storage systems during storage vMotion and snapshot creation operations, so VMware recommended disabling it completely (see VMware KB 2007427 - Disabling VAAI Thin Provisioning Block Space Reclamation (UNMAP) in ESXi 5.0).

This feature has been disabled by default since ESXi 5.0 Patch 2 (build number 515841, released on December 15, 2011). ESXi 5.0 U1 keeps it disabled, but introduces an option to run Space Reclamation manually from the CLI - see VMware KB 2014849 and the sketch at the end of this section. If you're using ESXi 5.0 with a lower build number, you can either patch your hosts to the Patch 2 level or use the following workaround from the CLI.

To check whether this feature is enabled on your host:

# esxcli system settings advanced list --option /VMFS3/EnableBlockDelete

To disable it, type:

# esxcli system settings advanced set --int-value 0 --option /VMFS3/EnableBlockDelete
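
As a side note, if you later move to ESXi 5.0 U1 and want to run Space Reclamation manually (the KB 2014849 option mentioned above), the procedure is, as far as I remember, roughly the following - treat it as a sketch and check the KB for the exact steps, since the datastore name and the reclaim percentage below are only illustrative:

# cd /vmfs/volumes/<datastore name>
# vmkfstools -y 60

The command temporarily fills (in this example) 60% of the datastore's free space with a balloon file and issues UNMAP for those blocks, so it's best run outside of peak hours.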

Friday, June 8, 2012

Configuring EVC on a vSphere cluster

Enhanced vMotion Compatibility (EVC) is a feature of vSphere clusters which allows vMotion between hosts with CPUs of different generations (e.g. a host with an Intel Xeon E5520 CPU and a host with an Intel Xeon E5-2630 CPU). If you have hosts with different generation CPUs, properly setting EVC is a prerequisite for adding these hosts to the same cluster. Setting EVC on a cluster basically forces CPUs of all hosts in the cluster to use a common baseline - a set of instructions which is compatible with all CPUs in the cluster. 


EVC is a vSphere feature that can be enabled regardless of the vSphere license edition you're using. It can be enabled by right-clicking the cluster in your inventory -> Edit Settings -> VMware EVC -> Change EVC Mode.


Enabling VMware EVC on a vSphere cluster


After choosing Change EVC Mode, you will be presented with options for enabling EVC for AMD hosts or enabling EVC for Intel hosts. Select your CPU vendor and head to this VMware KB article - Enhanced vMotion Compatibility (EVC) processor support.


This KB article is the best possible EVC reference, as it lists which EVC cluster baselines are supported by different CPU models. All you need to do is find a baseline supported by all of your host CPUs and configure EVC to work in this mode. Although you can choose any baseline supported by all CPUs, it is recommended to choose the highest one, for reasons described at the end of the post. For example, if you need to create a cluster with the previously mentioned E5520 hosts and E5-2630 hosts, you would set "Intel Nehalem Gen." as your EVC mode. The EVC menu itself can also be of help: when you choose an EVC mode, it will tell you whether the hosts which are already part of the cluster support that mode.


There are a few things you need to keep in mind when configuring EVC in an existing cluster:

  • when EVC is disabled, each host's CPU effectively runs with its highest supported EVC baseline (no CPU features are masked)
  • if you have a cluster of older hosts (e.g. with E5520 CPUs) with EVC disabled and you need to add new host(s) to the cluster (e.g. with E5-2630 CPUs), you should first enable EVC in the proper mode and then add the new hosts; this can be done without any disruption - setting a CPU to work with its highest supported baseline (in this example "Intel Nehalem Gen.") can be done with VMs running on the hosts
  • if you need to add an older host (e.g. a server left over after virtualizing a service it used to run) to a cluster with EVC disabled and newer hosts which are running VMs, you will be configuring the hosts to work in a lower EVC baseline than the one they are currently using; in order to do this, all hosts in the cluster have to be in maintenance mode, which means that all VMs running on them need to be powered off (or migrated to a different cluster, if possible)


Friday, June 1, 2012

Changing VMware ESXi host logging level

Collecting logs from an ESXi host is a recommended best practice, but if you have ever tried to collect ESXi host logs over syslog, you may have noticed that ESXi hosts can be very chatty and spam your syslog server relentlessly.


The ESXi host agent (hostd) has 8 logging levels, listed here in increasing order of the amount of information logged: "none", "quiet", "panic", "error", "warning", "info", "verbose" and "trivia". The vCenter agent (vpxa) has 6 logging levels: "none", "error", "warning", "info", "verbose" and "trivia". The default logging level for both agents in ESXi 5.x is "verbose", and this is the reason why your syslog server gets filled with a lot of seemingly useless information.


If you want to change the logging level, fire up your vSphere Client, connect to vCenter or the host, select the host in the inventory, then go to the Configuration tab -> Advanced Settings -> Config -> HostAgent -> Log. Here you'll be able to review all logging levels for the hostd and vpxa agents and change them by typing in the name of the desired level (hostd) or selecting it from a drop-down menu (vpxa). This setting takes effect immediately after clicking OK (no host or hostd/vpxa restart is required).


ESXi host hostd and vpxa logging levels
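
If you prefer doing this from the ESXi Shell, the same levels can - as far as I know - also be changed by editing the agent configuration files directly. Note that, unlike the vSphere Client method above, this requires restarting the agents, and the paths below are the ones I'd expect on ESXi 5.x, so double-check them on your build:

# vi /etc/vmware/hostd/config.xml
# vi /etc/vmware/vpxa/vpxa.cfg
# /etc/init.d/hostd restart
# /etc/init.d/vpxa restart

In both files the logging level lives inside a <log> <level> element; change it to e.g. "info", save the file and restart the corresponding agent as shown.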