Tag Archives: 5.0 - Page 3

Storage vMotion and dvSwitch / HA problem explained

Today I tried to explain why this Storage vMotion / dvSwitch / HA problem actually exists, how virtual machines get affected, and what can be done to mitigate it. The issue seems to be hard to explain, so I started to search the internet for pictures. I could find many explanations, but nobody had drawn a picture of it. So I've done it:

Explanation

  1. We have 2 ESXi hosts, both connected to a shared storage containing 2 LUNs. The first ESXi host runs two virtual machines (VM1 & VM2), the second runs only one (VM3). All three virtual machines are connected to a Distributed Virtual Switch. Each LUN contains an additional folder with the dvSwitch port information (.dvsData). This information allows the ESXi hosts to know which ports belong to which virtual machines without asking the vCenter Server.
  2. VM2 is migrated to LUN 2 using Storage vMotion. This could be done either manually or triggered by Storage DRS. This is where the bug happens: all files like the .vmx and .vmdk are moved to LUN 2, but for whatever reason the dvSwitch port information remains on LUN 1. The bug has happened, but nothing is noticeable at this point. VM2 stays up and running without any network issues.
  3. The first ESXi host dies. This is where HA should kick in and initiate a restart on another host.
  4. HA restarts VM1 on the second ESXi host. Everything is fine.
  5. HA tries to restart VM2 on the second ESXi host. During this process, HA tries to access the port information inside the .dvsData directory. It fails because the port information cannot be found in the .dvsData directory on LUN 2:

Operation failed, diagnostics report: Failed to open file /vmfs/volumes/UUID/.dvsData/ID/Port Status (bad0003)= Not found
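
To get an idea which VMs might be affected, the KB article and Alan Renouf's script linked below do the full check. As a minimal illustration only (the vCenter name vcenter.lab.local is a placeholder), the following PowerCLI sketch lists every dvSwitch-connected VM together with its home datastore, switch UUID and port key, so you can verify that the matching file exists under [datastore]/.dvsData/<switch UUID>/<port key>:

```powershell
# Placeholder vCenter name - adjust to your environment
Connect-VIServer -Server vcenter.lab.local

foreach ($vm in Get-VM) {
    # Datastore that holds the .vmx file - the .dvsData port file should live here too
    $homeDatastore = $vm.ExtensionData.Config.Files.VmPathName -replace '^\[(.+?)\].*', '$1'

    # All NICs backed by a distributed virtual switch port
    $dvsNics = $vm.ExtensionData.Config.Hardware.Device | Where-Object {
        $_.Backing -is [VMware.Vim.VirtualEthernetCardDistributedVirtualPortBackingInfo]
    }

    foreach ($nic in $dvsNics) {
        [PSCustomObject]@{
            VM         = $vm.Name
            Datastore  = $homeDatastore
            SwitchUuid = $nic.Backing.Port.SwitchUuid
            PortKey    = $nic.Backing.Port.PortKey
        }
    }
}
```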

Additional Information

VMware KB2013639
Issue explained by Duncan Epping @ Yellow-Bricks
Script to identify and fix affected VMs by Alan Renouf

Datastore cluster permissions lost - Script Workaround

In some cases placing the datastore clusters inside a folder is not an option, so I decided to write a PowerCLI script which recreates the permissions after a vCenter service restart. As you might know, all permissions set at datastore cluster level are gone after vCenter restarts. This workaround refers to VMware KB 2008326.

First you have to find the affected permissions. This applies to permissions which are set directly on datastore clusters. Internally, a datastore cluster is referred to as a "StoragePod", so this is the keyword to search for.
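
As a minimal sketch of the idea (this is not the full script from this post; the vCenter name, the CSV path and the cmdlet-based approach are purely illustrative), capturing and re-creating the StoragePod permissions could look roughly like this:

```powershell
# Placeholder vCenter name and CSV path - adjust to your environment
Connect-VIServer -Server vcenter.lab.local

# Capture all permissions that are set directly on datastore clusters (StoragePods)
Get-VIPermission -Entity (Get-DatastoreCluster) |
    Select-Object @{N='Entity';E={$_.Entity.Name}}, Principal, Role, Propagate |
    Export-Csv -Path C:\Scripts\storagepod-permissions.csv -NoTypeInformation

# After a vCenter service restart, re-create the permissions from the CSV
Import-Csv -Path C:\Scripts\storagepod-permissions.csv | ForEach-Object {
    New-VIPermission -Entity (Get-DatastoreCluster -Name $_.Entity) `
                     -Principal $_.Principal `
                     -Role (Get-VIRole -Name $_.Role) `
                     -Propagate:([System.Convert]::ToBoolean($_.Propagate))
}
```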

Read more »

Datastore cluster permissions lost

After migrating datastores to datastore clusters and adding permissions at datastore cluster level, I ran into puzzling issues where users suddenly failed to create VMs. Users got an error message while selecting the cluster:

You do not have the privilege 'Datastore > Allocate space' on the datastore connected to the selected Cluster

I checked the vCenter permissions and noticed that the datastore permission was missing. I remembered the bug that causes all vCenter permissions to disappear after renaming Windows users or groups, so I just reassigned the permission. Shortly afterwards the problem recurred, so I searched the VMware KB and found that this is a known issue: KB 2008326

VMware's resolution is to place the datastore cluster inside a folder, set the permissions on that folder and propagate them. Unfortunately you can't simply move the existing cluster into a folder:

Move entities - The specified folder does not support this operation.

The solution is to create a new datastore cluster inside the folder, recreate all settings, and move the VMFS datastores into the new cluster. This can be done without any interruption to the running virtual machines.
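
A rough PowerCLI sketch of this procedure, with purely hypothetical names (datacenter DC01, folder "Datastore Clusters", datastore cluster SDRS-Cluster-01, group DOMAIN\VM-Users, role VirtualMachineUser, datastores matching VMFS-*), might look like this:

```powershell
# All names below are placeholders - adjust to your environment
$dc     = Get-Datacenter -Name 'DC01'

# Create a folder in the storage view and a new datastore cluster inside it
$folder = New-Folder -Name 'Datastore Clusters' -Location (Get-Folder -Name 'datastore' -Location $dc)
$newPod = New-DatastoreCluster -Name 'SDRS-Cluster-01' -Location $folder

# Set the permission on the folder and let it propagate to the cluster and its datastores
New-VIPermission -Entity $folder -Principal 'DOMAIN\VM-Users' `
                 -Role (Get-VIRole -Name 'VirtualMachineUser') -Propagate:$true

# Recreate the SDRS settings of the old cluster on $newPod (automation level,
# thresholds, affinity rules) before moving the datastores, then move them over
Move-Datastore -Datastore (Get-Datastore -Name 'VMFS-*') -Destination $newPod
```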

SDRS permissions inside a folder

Shutting down the VSA

As you cannot control the VSA appliance directly, the question comes up how to properly shut down the VSA cluster. So here is the answer:

  1. Shut down all VMs
  2. Put the VSA Cluster into maintenance mode
  3. Shut Down the ESXi Hosts

Do not...

  • ...put the ESXi hosts into maintenance mode
  • ...shut down the VSA appliance

vsa_maintenance_mode

shutdown_esx_host
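
If you prefer to script steps 1 and 3, a minimal PowerCLI sketch could look like this (the vCenter name and the VSA-* naming pattern of the appliance VMs are placeholders; step 2 is done in the VSA Manager tab of the vSphere Client):

```powershell
# Placeholder vCenter name and VSA appliance naming pattern - adjust to your environment
Connect-VIServer -Server vcenter.lab.local

# Step 1: gracefully shut down all running VMs, but not the VSA appliances themselves
Get-VM | Where-Object { $_.PowerState -eq 'PoweredOn' -and $_.Name -notlike 'VSA-*' } |
    Shutdown-VMGuest -Confirm:$false

# Step 2: put the VSA cluster into maintenance mode via the VSA Manager tab
#         in the vSphere Client (not scripted here)

# Step 3: shut down the ESXi hosts; -Force is required because the hosts must
#         not be put into maintenance mode while they are part of a VSA cluster
Get-VMHost | Stop-VMHost -Force -Confirm:$false
```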

HP N40L Shared Storage with vSphere Storage Appliance (VSA)

Without shared storage it is quite hard to deploy a reasonable test scenario. With vSphere 5, VMware introduced the vSphere Storage Appliance (VSA). The VSA transforms the local storage of up to 3 servers into a mirrored shared storage. This sounds really great for a test environment because it enables plenty of VMware features like vMotion, HA and DRS.

Prior to installation there are a few things to check, because the VSA has very strict system requirements. As this is only a test environment and I do not care about getting support, the main goal is simply getting the VSA up and running. The server requirements are:

  • 6GB RAM
  • 2GHz CPU
  • 4 NICs
  • Identical configuration across all nodes
  • Clean ESXi 5.0 Installation

I deliberately ignored all the vendor/model and hardware RAID controller requirements, as these are only soft requirements. The HP ProLiant N40L meets all of the above requirements except the 2GHz CPU. But there is a little XML file containing the host audit configuration the installer uses during the installation. I am going to tweak this file a little bit to get the installation done.
Read more »

vSphere 5 Homelab - ESX on HP ProLiant N36L/N40L/N54L Microserver

Hewlett Packard has launched an extremely affordable server for SMB and home users. Not only because of its price of approximately 200 euros, but also because of its low power consumption, it is a great candidate for a virtualization home lab. An optional Remote Access Card (RAC) can add iLO-like functions to the server.

hp-proliant-n40l-box

The HP N40L has 2 CPU cores and supports up to 8GB of RAM. Although this is quite low for a hypervisor, it should be sufficient for a pure test environment. The server only has a software-based RAID controller, which will not work with ESX. If you want to use the local disks as an array, you have to buy an additional RAID controller; the P410, for example, is supported. I decided not to buy a RAID controller, because I want to store my VMs on a shared storage. The good news is that the server ships with 4 hard drive trays, which allows the installation of any SATA hard drive.

hp-proliant-n40l-front

Features

The server is shipped with the following configuration:

  • Processor: AMD Turion™ II Neo N40L (2x 1.50GHz)
  • Memory: 2GB PC3-10600E UDIMM DDR3
  • Hard Disk: 1x Seagate Barracuda (250GB, 7200RPM, SATA)
  • LAN: 1x 10/100/1000 MBit (NC107i)
  • PSU: 150 Watt, non-redundant
  • Ports: VGA, eSATA, 7x USB 2.0 (4x Front, 2x Back, 1x On-Board)

Read more »