Configure default settings on a VMware virtual distributed switch

If the portgroups on your VMware virtual distributed switch (vDS) need specific settings in order to work, you may want to set those settings as the switch defaults so you don’t have to modify each new portgroup by hand, which can lead to inconsistencies in the environment.  For example, my environment uses etherchannel, so the portgroups must be configured according to http://kb.vmware.com/kb/1001938.

Unfortunately, when vCloud creates portgroups, it will honor all of the settings on the ‘Teaming and Failover’ page except for the uplinks.  vCloud portgroups are always created with one active uplink and one standby uplink.  I’ll describe how that can be resolved in an upcoming post.

Here is the PowerCLI script that will set the ‘Teaming and Failover’ defaults on the vDS to work with etherchannel and two active uplinks.  

Connect-VIServer vCenter

$vDSName = ""   # name of the vDS whose defaults you want to change
$vds = Get-VDSwitch $vDSName
$spec = New-Object VMware.Vim.DVSConfigSpec
$spec.configVersion = $vds.ExtensionData.Config.ConfigVersion

$spec.defaultPortConfig = New-Object VMware.Vim.VMwareDVSPortSetting
$uplinkTeamingPolicy =  New-Object VMware.Vim.VmwareUplinkPortTeamingPolicy

# Set load balancing policy to IP hash
$uplinkTeamingPolicy.policy = New-Object VMware.Vim.StringPolicy
$uplinkTeamingPolicy.policy.inherited = $false
$uplinkTeamingPolicy.policy.value = "loadbalance_ip"

# Configure uplinks.  If an uplink is not specified, it is placed into the 'Unused Uplinks' section.
$uplinkTeamingPolicy.uplinkPortOrder = New-Object VMware.Vim.VMwareUplinkPortOrderPolicy
$uplinkTeamingPolicy.uplinkPortOrder.inherited = $false
$uplinkTeamingPolicy.uplinkPortOrder.activeUplinkPort = New-Object System.String[] (2)  # the array size is the number of active uplinks you will specify
$uplinkTeamingPolicy.uplinkPortOrder.activeUplinkPort[0] = "dvUplink1"
$uplinkTeamingPolicy.uplinkPortOrder.activeUplinkPort[1] = "dvUplink2"

# Set notify switches to true
$uplinkTeamingPolicy.notifySwitches = New-Object VMware.Vim.BoolPolicy
$uplinkTeamingPolicy.notifySwitches.inherited = $false
$uplinkTeamingPolicy.notifySwitches.value = $true

# Configure failback (rollingOrder)
$uplinkTeamingPolicy.rollingOrder = New-Object VMware.Vim.BoolPolicy
$uplinkTeamingPolicy.rollingOrder.inherited = $false
$uplinkTeamingPolicy.rollingOrder.value = $true

# Set network failover detection to "link status only"
$uplinkTeamingPolicy.failureCriteria = New-Object VMware.Vim.DVSFailureCriteria
$uplinkTeamingPolicy.failureCriteria.inherited = $false
$uplinkTeamingPolicy.failureCriteria.checkBeacon = New-Object VMware.Vim.BoolPolicy
$uplinkTeamingPolicy.failureCriteria.checkBeacon.inherited = $false
$uplinkTeamingPolicy.failureCriteria.checkBeacon.value = $false

$spec.DefaultPortConfig.UplinkTeamingPolicy = $uplinkTeamingPolicy
$vds.ExtensionData.ReconfigureDvs_Task($spec)
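
To sanity-check the change, you can read the default teaming policy back from the vDS with a few more PowerCLI lines. This is just a quick verification sketch; it assumes the script above has already run against the same $vDSName.

# Read the default teaming policy back from the vDS and confirm the new values
$vds = Get-VDSwitch $vDSName
$teaming = $vds.ExtensionData.Config.DefaultPortConfig.UplinkTeamingPolicy
$teaming.Policy.Value                      # should return loadbalance_ip
$teaming.UplinkPortOrder.ActiveUplinkPort  # should list dvUplink1 and dvUplink2
$teaming.NotifySwitches.Value              # should be True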


Prevent the incrementing of eth devices on Linux systems after guest customization of a cloned VM

After the guest customization process runs on cloned VMs in some VMware products, you may notice that the eth device number gets incremented on your Linux systems.  For example, when the system is first built, the first eth device will be eth0.  If the system is cloned and customized, the eth device becomes eth1.  This may not be a problem on some systems, but people often need or prefer the first eth device to be eth0, or at least not to change after the system is customized.

The issue arises because of old entries in the udev network file, /etc/udev/rules.d/70-persistent-net.rules.  After an initial install of a Linux system that has a NIC with a MAC of 00:50:56:02:00:7c, /etc/udev/rules.d/70-persistent-net.rules will look something like this:

# This file was automatically generated by the /lib/udev/write_net_rules
# program, run by the persistent-net-generator.rules rules file.
#
# You can modify it, as long as you keep each rule on a single
# line, and change only the value of the NAME= key.

# PCI device 0x15ad:0x07b0 (vmxnet3)
SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="00:50:56:02:00:7c", ATTR{type}=="1", KERNEL=="eth*", NAME="eth0"

When you perform a clone and customization (as in creating a new vApp from a template in vCloud), the source VM is cloned and the clone’s NIC gets a new MAC address.  When the cloned VM boots, udev notices the new NIC, adds it to /etc/udev/rules.d/70-persistent-net.rules, and names it eth1:

# This file was automatically generated by the /lib/udev/write_net_rules
# program, run by the persistent-net-generator.rules rules file.
#
# You can modify it, as long as you keep each rule on a single
# line, and change only the value of the NAME= key.

# PCI device 0x15ad:0x07b0 (vmxnet3)
SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="00:50:56:02:00:7c", ATTR{type}=="1", KERNEL=="eth*", NAME="eth0"

# PCI device 0x15ad:0x07b0 (vmxnet3)
SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="00:50:56:02:01:9e", ATTR{type}=="1", KERNEL=="eth*", NAME="eth1"

A new file named /etc/sysconfig/network-scripts/ifcfg-eth1 will be created that points to the eth1 device:

DEVICE=eth1
NETMASK=255.255.255.0
IPADDR=192.168.5.101
BOOTPROTO=static
ONBOOT=yes

Now when ifconfig is run, you will see eth1 instead of eth0.

To prevent the issue from occurring, delete the /etc/udev/rules.d/70-persistent-net.rules file before shutting down the VM and turning it into a template.  This will cause a new /etc/udev/rules.d/70-persistent-net.rules to be created when the customizing VM boots up.  The new file will only contain the NICs on the system and they should be labelled as eth0, eth1, etc.
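
If you would rather not log into each guest to do this, the same cleanup can be scripted from PowerCLI with Invoke-VMScript. This is only a sketch: it assumes VMware Tools is running in the guest and that you have root credentials; the VM name and password below are placeholders for your environment.

# Remove the stale udev rules inside the guest, then shut it down before converting it to a template
$vm = Get-VM 'template-source'   # placeholder VM name
Invoke-VMScript -VM $vm -ScriptType Bash -GuestUser root -GuestPassword 'password' `
    -ScriptText 'rm -f /etc/udev/rules.d/70-persistent-net.rules'
Stop-VMGuest -VM $vm -Confirm:$false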

Another thing you may want to do before shutting the VM down to be added as a template is modify /etc/sysconfig/network-scripts/ifcfg-eth0 so that ONBOOT is set to no (ONBOOT=no).  I’ve seen issues in vCloud where multiple vApp templates are deployed onto the same network and the VMs come up with the same IP (the one that was on the VM before it was turned into a template).  When the systems boot, ifup is run, which runs arping.  I’ve seen arping return an error in these situations, which prevents VMware Tools from starting.  Guest customization then fails because it relies on VMware Tools.
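
The ONBOOT change can be scripted the same way. Again, this is a sketch that reuses the placeholder VM and credentials from the previous example and assumes a RHEL-style ifcfg file.

# Set ONBOOT=no in ifcfg-eth0 inside the guest before templating
Invoke-VMScript -VM $vm -ScriptType Bash -GuestUser root -GuestPassword 'password' `
    -ScriptText "sed -i 's/^ONBOOT=.*/ONBOOT=no/' /etc/sysconfig/network-scripts/ifcfg-eth0"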

vCenter Orchestrator Workflow Runner

If you’re using vCloud in combination with vCenter Orchestrator, you should take a look at Workflow Runner, provided by the vCO team.  The workflow is included in the vCloud Director 5.1 blocking tasks and notification package.  I think the most valuable aspect of the workflow is that it extracts all of the parameters from the vCloud blocking task (organization, vApp, user, etc.) and passes them to any workflow that you specify.  Not having to do this yourself saves a lot of time.

The vCO team has also released a Workflow Runner 1.5 to 5.1 compatibility chart that is helpful for converting your workflows after you’ve upgraded to the 5.1 vCloud plug-in.  The guide can be found at vCloud Director 1.5 to 5.1 workflow compatibility guide.

vCloud System Alert – The Virtual Machine’s memory reservation was modified in vCenter Server

I noticed an issue in multiple vCloud 5.1.2 environments where VMs would report the System Alert “The Virtual Machine’s memory reservation was modified in vCenter Server and does not match the reservation set by vCloud Director. Please stop and restart the Virtual Machine from vCloud Director to fix the problem.”  I’ve never noticed this warning in 1.5.x, 5.1.0, or 5.1.1 (nothing was modifying the VMs in vCenter).

The VMs with the problem had been either imported from vCenter or uploaded through the vCloud OVF upload process, and only VMs in org vDCs using the allocation pool model were affected.  The environments had a lot of different variables, including pointing to both vCenter 5.0 U1 and 5.1 U1.  Some of the vClouds were upgraded from 1.5.2 and others from 5.1.1.  The issue affected org vDCs created both before and after the vCD upgrades.  The memory allocation settings in the org vDCs were wildly different (except one), and the setting “Make Allocation pool Org VDCs elastic” was disabled.  The only commonality I could find was that the org vDC setting “Memory resources guaranteed” was set to the lowest value of 20%.  In most environments the System Alert went away if I changed the “Memory resources guaranteed” value to 21%; in some environments I had to raise it to 22%.  Setting the value back to 20% caused the alert to come back.

It seems like this could be a bug in vCD 5.1.2, possibly introduced with all the changes made to the allocation pool model, although more testing could be done.

I tried to use the API to get the alerts but nothing was returned.  You can query the vCD database directly to retrieve all VMs with this alert and clear the alert in the vCD UI:

Display all VMs with the system alert

select * from object_condition where condition = 'vmMemoryReservationModified'

Clear the alert in the vCD UI

update object_condition set ignore = 1 where condition = 'vmMemoryReservationModified'
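
If you prefer to run these from PowerShell instead of a SQL client, a sketch like the following works against a SQL Server-backed vCD database.  Invoke-Sqlcmd comes from the SqlServer/SQLPS module, and the server and database names below are placeholders for your environment.

# List the VMs carrying the alert, then mark the condition ignored so the alert clears in the vCD UI
Invoke-Sqlcmd -ServerInstance 'vcd-sql' -Database 'vcloud' `
    -Query "select * from object_condition where condition = 'vmMemoryReservationModified'"
Invoke-Sqlcmd -ServerInstance 'vcd-sql' -Database 'vcloud' `
    -Query "update object_condition set ignore = 1 where condition = 'vmMemoryReservationModified'"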