
Installation or Removal of VIB Packages in ESXi 7.0 fails with Error: Failed to query file system stats:

While installing ESXi updates, I noticed that on one of my hosts the installation or removal of VIB packages fails with the following error message:

# esxcli software vib install -d [package]
# esxcli software vib remove -n [package]
[InstallationError]
Failed to query file system stats: Errors:
Error getting data for filesystem on '/vmfs/volumes/59a83d9c-628c6ae0-7b35-f44d306ec05a': Cannot open volume: /vmfs/volumes/59a83d9c-628c6ae0-7b35-f44d306ec05a, skipping.
cause = Errors:
Error getting data for filesystem on '/vmfs/volumes/59a83d9c-628c6ae0-7b35-f44d306ec05a': Cannot open volume: /vmfs/volumes/59a83d9c-628c6ae0-7b35-f44d306ec05a, skipping.
Please refer to the log file for more details.

The device 59a83d9c-628c6ae0-7b35-f44d306ec05a was a non-existent volume, referenced by a VFFS mount. VFFS (Virtual Flash File System) was used in earlier ESXi releases by vSphere Flash Read Cache. I'm not sure where the reference comes from, but the stale mount can be removed in a few steps.
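
Before removing anything, it helps to confirm that the volume is really stale and to find where it is referenced. A minimal sketch, using the UUID from the error above and assuming the dead VFFS mount is still referenced in the host configuration:

# esxcli storage filesystem list | grep -i vffs
# grep 59a83d9c-628c6ae0-7b35-f44d306ec05a /etc/vmware/esx.conf

The first command shows whether ESXi still lists a VFFS filesystem, the second checks whether the dead UUID appears in /etc/vmware/esx.conf.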

Read More »Installation or Removal of VIB Packages in ESXi 7.0 fails with Error: Failed to query file system stats:

VMware Cloud Director Quick Tip - API Explorer (Swagger) not visible for Org Admins

In VMware Cloud Director 10, the API Explorer (Swagger) is not visible for Organization Administrators.

When they try to access /api-explorer/tenant/[ORG]/, an HTTP ERROR 403 - Forbidden is shown.

The right to use the API Explorer is not part of the default rights set. To allow tenants to use the API Explorer, edit the Rights Bundle and the default Role.
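
For reference, a hedged way to check which right is involved is to query the rights catalog through the CloudAPI as a provider administrator (hostname, API version, and the exact right name are assumptions, so adjust the filter to your environment):

curl -sk -H "Accept: application/json;version=36.0" -H "Authorization: Bearer $TOKEN" "https://vcd.example.com/cloudapi/1.0.0/rights?filter=name==*Explorer*"

The returned right can then be added to the tenant's Rights Bundle and to the Organization Administrator role.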

Read More »VMware Cloud Director Quick Tip - API Explorer (Swagger) not visible for Org Admins

Troubleshooting CSE 3.1 TKGm Integration with VMware Cloud Director 10.3

This article recaps issues that I ran into during the integration of VMware Container Service Extension 3.1 (CSE) to allow the deployment of Tanzu Kubernetes Grid Clusters (TKGm) in VMware Cloud Director 10.3.

If you are interested in an Implementation Guide, refer to Deploy CSE 3.1 with TKGm Support in VCD 10.3 and First Steps with TKGm Guest Clusters in VCD 10.3.

  • CSE Log File Location
  • DNS Issues during Photon Image Creation
  • Disable rollbackOnFailure to troubleshoot TKGm deployment errors
  • Template cookbook version 1.0.0 is incompatible with CSE running in non-legacy mode
  • https://[IP-ADDRESS] should have a https scheme and match CSE server config file
  • 403 Client Error: Forbidden for url: https://[VCD]/oauth/tenant/demo/register
  • NodeCreationError: failure on creating nodes ['mstr-xxxx']
  • Force Delete TKGm Clusters / Can't delete TKGm Cluster / Delete Stuck in DELETE:IN_PROGRESS
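
Whatever the symptom, the CSE server log is usually the first place to look. A minimal sketch, assuming CSE writes its logs to a .cse-logs directory in the home of the user running the server (the path and file names may differ in your setup):

tail -f ~/.cse-logs/cse-server-debug.log

The debug log contains the full stack traces for failed cluster operations, while the info log is enough for a quick health check.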

Read More »Troubleshooting CSE 3.1 TKGm Integration with VMware Cloud Director 10.3

First Steps with TKGm Guest Clusters in VMware Cloud Director 10.3

In the previous article, I've explained how to deploy Container Service Extension 3.1 with TKGm Support in VMware Cloud Director 10.3. In this article, I'm taking a look at how the Tanzu Kubernetes Grid Cluster is integrated into the Organization VDC and how the Tenant can access and work with the Kubernetes Cluster.
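
As a quick preview of the tenant workflow, a hedged sketch of fetching the kubeconfig with the CSE plugin for vcd-cli and talking to the cluster (cluster name and file name are just examples):

vcd cse cluster config tkgm-cluster01 > tkgm-cluster01.kubeconfig
kubectl --kubeconfig tkgm-cluster01.kubeconfig get nodes

This assumes the tenant user is logged in with vcd-cli and the required rights have been published to the organization.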


Read More »First Steps with TKGm Guest Clusters in VMware Cloud Director 10.3

Deploy CSE 3.1 with TKGm Support in VMware Cloud Director 10.3

With the release of Cloud Director 10.3 and Container Service Extension 3.1 (CSE), you have an additional option to deploy Kubernetes Clusters: "Tanzu Kubernetes Grid Multi-Cloud", aka TKGm. With TKGm, you now have four options to offer Kubernetes as a Service to your customers:

  • TKGm (Multi-Cloud)
  • TKGs (vSphere with Tanzu)
  • Native
  • TKG-I (Enterprise PKS)

Yes, there is a reason why TKGm and TKGs are in bold letters. If you are starting today, forget about "Native" and "TKG-I". "TKGm" works similarly to "Native" but is far superior. TKG-I (TKG Integrated Edition, formerly known as VMware Enterprise PKS) is deprecated as of CSE 3.1 and will be removed in future releases.

This article explains how to integrate CSE 3.1 in VMware Cloud Director 10.3.
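
As a rough outline of the server-side part, a hedged sketch of installing the CSE 3.1 bits on the management host (package version and config file name are assumptions, the full walkthrough is in the article):

pip3 install container-service-extension
cse version
cse sample -o config.yaml
cse install -c config.yaml

cse sample generates a skeleton configuration that has to be filled in with your Cloud Director, vCenter, and catalog details before running cse install.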

Read More »Deploy CSE 3.1 with TKGm Support in VMware Cloud Director 10.3

Quick Tip: How to Master NSX-ALB (AVI) Resources in Terraform

Using the Terraform AVI Provider can be quite challenging because it requires you to understand complex object definitions. All resources in the AVI Provider are mapped directly to the corresponding Avi Vantage API objects, which are documented in the Avi API reference. The problem is that some objects have a huge number of attributes, and some are nested up to five levels deep.

I'll show you a little trick to quickly get the required information to write NSX-ALB / AVI resources.
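
One way to get at this information, independent of the trick shown below, is to create the object once in the NSX-ALB UI and then read its JSON back from the Avi API, which mirrors the attribute structure the Terraform provider expects (controller hostname, API version, and object name are assumptions):

curl -sk -u admin:$AVI_PASSWORD -H "X-Avi-Version: 21.1.1" "https://avi.example.com/api/virtualservice?name=my-vs"

Any Avi object collection (serviceenginegroup, cloud, pool, ...) can be queried the same way.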

Read More »Quick Tip: How to Master NSX-ALB (AVI) Resources in Terraform

NSX-ALB Integration in VMware Cloud Director 10.3 with Terraform

In an earlier article, I've explained how to integrate the VMware NSX-T Advanced Load Balancer (formerly known as AVI) into VMware Cloud Director. Today, I want to show how to automate those steps with Terraform, an open-source infrastructure-as-code tool created by HashiCorp. The following steps are part of the automated configuration:

  • Create vCenter Content Library for SE Images
  • Create Service Engine Management Network with DHCP in NSX-T
  • Create NSX-T Cloud in NSX-ALB
  • Create Service Engine Group in NSX-ALB
  • Add NSX-ALB Controller to VMware Cloud Director
  • Import NSX-T Cloud to VMware Cloud Director
  • Import Service Engine Group to VMware Cloud Director
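
Once the resources are defined, running the configuration is the usual Terraform workflow (file and plan names are just examples):

terraform init
terraform plan -out=nsx-alb.tfplan
terraform apply nsx-alb.tfplan

terraform init downloads the required providers, and saving the plan makes it easy to review the NSX-T, NSX-ALB, and Cloud Director changes before applying them.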

Read More »NSX-ALB Integration in VMware Cloud Director 10.3 with Terraform

VMware ESXi 7.0 Update 3 on Intel NUC

VMware vSphere ESXi 7.0 Update 3 was released in October, and before you deploy it to production, you want to evaluate it in your test environment or homelab. If you have Intel NUCs or similar hardware, you should be very careful when updating to new ESXi releases, as this is not an officially supported platform and there might be compatibility issues.

In vSphere 7.0, there are ups and downs with consumer-grade network adapters. Since the deprecation of VMKlinux drivers, there is no option to use Realtek-based NICs, and previous versions had problems with the ne1000 driver. Luckily, there is the great Community Networking Driver for ESXi Fling, which adds support for a bunch of network cards, and the USB Network Native Driver Fling (VMKUSB-NIC-FLING) always covers your back.
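
If you need the Community Networking Driver, it ships as an offline depot that can be applied directly on the host, for example (the file name and datastore path are placeholders, use the version you downloaded from the Flings site):

# esxcli software component apply -d /vmfs/volumes/datastore1/Net-Community-Driver_[version].zip

On ESXi 7.0, this Fling is distributed as a component, so esxcli software component apply is used instead of the older vib install.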

I've updated my NUC portfolio to check which NUCs are safe to update and which considerations to take into account before installing the update. Additionally, I'm taking a look at the consequences of the recently deprecated USB/SD card usage for ESXi installations and some general issues in 7.0 Update 3.

Read More »VMware ESXi 7.0 Update 3 on Intel NUC

VMware NSX-T 3.1 Edge Node Sizing

Edge Nodes in NSX-T 3.1 are available as Virtual Machines and as Bare Metal Edges. When you deploy a virtual Edge Node using the embedded deployment function in NSX-T, you can choose between four sizes: Small, Medium, Large, and Extra Large. In this article, I'm collecting information about the different sizing options, what they are intended for, and how to resize Edge Nodes.

Read More »VMware NSX-T 3.1 Edge Node Sizing