VMware Cloud Director 10 - Network cannot be deleted, because it is in use

A problem I ran into a couple of times in VMware Cloud Director 10 is that you can't delete an organization network, even though it is definitely no longer in use by anything. When you try to delete the network, the following error message is displayed:

Error: Network 172.16.1.0-24 cannot be deleted, because it is in use by the following vApp Networks: 172.16.1.0-24.

From my observation, this happens quite often when you work with Datacenter Groups, which were introduced in VCD 10.2, but I've also seen it in earlier versions. As stated in the error message, the network was added to a vApp, but that vApp no longer exists.
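The culprit is usually a stale vApp network record that still references the organization network, and you can locate that record through the API before changing anything. Here is a minimal Python sketch, assuming the legacy query service with the vAppNetwork query type (still available in VCD 10.x); the hostname, credentials, and API version are placeholders:

```python
import requests

VCD = "https://vcd.example.com"                 # placeholder cell address
ACCEPT = "application/*+json;version=35.0"      # API version for VCD 10.2

s = requests.Session()

# Log in with provider credentials via the legacy session endpoint
r = s.post(f"{VCD}/api/sessions",
           auth=("administrator@system", "***"),
           headers={"Accept": ACCEPT})
r.raise_for_status()
s.headers.update({"Accept": ACCEPT,
                  "x-vcloud-authorization": r.headers["x-vcloud-authorization"]})

# List all vApp network records and look for the one that still references
# the "undeletable" organization network (it usually carries the same name).
r = s.get(f"{VCD}/api/query", params={"type": "vAppNetwork"})
r.raise_for_status()
for rec in r.json().get("record", []):
    if "172.16.1.0-24" in rec.get("name", ""):
        print(rec)    # contains the href of the stale vApp it belongs to
```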

In this article, I explain how to remove undeletable networks without messing with the VCD database.
Read more »

Deploy Container Service Extension (CSE 3.0) in VMware Cloud Director 10.2

Alongside Cloud Director 10.2, the Container Service Extension (CSE) 3.0 has been released. With CSE 3.0, you can extend your cloud offering by providing Kubernetes as a Service. Customers can create and manage their own K8s clusters directly in the VMware Cloud Director portal.

I've already described how to deploy vSphere with Tanzu-based Kubernetes clusters in VCD. CSE 3.0 with the "Native K8s Runtime" is a neat alternative that allows you to deploy K8s directly into the customer's organization networks, which is currently not possible with Tanzu.

This article explains how to integrate CSE 3.0 in VMware Cloud Director 10.2.
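As a quick sanity check before you start, you can ask the cell which API versions it offers; CSE 3.0 expects to talk to VCD 10.2, which exposes API version 35.0. A small Python sketch (not part of the CSE setup itself) with the endpoint as a placeholder and the XML element names taken from the vCloud versions document as I know it:

```python
import requests
import xml.etree.ElementTree as ET

VCD = "https://vcd.example.com"     # placeholder Cloud Director endpoint

# /api/versions is unauthenticated and lists every API version the cell supports.
r = requests.get(f"{VCD}/api/versions")
r.raise_for_status()

ns = "{http://www.vmware.com/vcloud/versions}"
root = ET.fromstring(r.text)
for info in root.iter(f"{ns}VersionInfo"):
    version = info.findtext(f"{ns}Version")
    deprecated = info.get("deprecated") == "true"
    print(version, "(deprecated)" if deprecated else "")
# A VCD 10.2 cell should list 35.0 among the non-deprecated versions.
```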

Read more »

Heads Up: NAT Configuration Changed in Cloud Director 10.2

With the release of Cloud Director 10.2, a major change to the NSX-T based NAT configuration has been implemented. The change affects how you set up DNAT and has caused some confusion after the upgrade.

In previous versions, the Application Profile (e.g., SSH, HTTP, or HTTPS) defined both the external and the internal port. With the optional "Internal Port" setting, it was possible to configure a custom internal port.

With Cloud Director 10.2, the Application Profile defines the internal port only. If you do not fill in the "External Port" field, which sits in exactly the same position as the "Internal Port" setting in previous versions, the rule translates ALL external ports to the configured application. This is something you absolutely do not want, and I've seen a lot of misconfigured NAT rules since Cloud Director 10.2.
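To make the new semantics concrete, here is a sketch of what a correct DNAT rule body looks like when created through the CloudAPI (POST /cloudapi/1.0.0/edgeGateways/{id}/nat/rules). The field names reflect my own notes on the 10.2 CloudAPI and should be verified against the API reference for your exact build:

```python
# Sketch of a DNAT rule body for the VCD 10.2 CloudAPI; translates
# external port 2222 on the public IP to SSH (22) on the internal VM.
dnat_rule = {
    "name": "ssh-to-jumphost",
    "ruleType": "DNAT",
    "enabled": True,
    "externalAddresses": "203.0.113.10",     # public IP on the edge gateway
    "internalAddresses": "172.16.1.10",      # VM behind the edge
    "dnatExternalPort": "2222",              # the new "External Port" field;
                                             # leave it empty and ALL external ports match
    "applicationPortProfile": {              # now defines the INTERNAL port only (22)
        "name": "SSH",
        "id": "urn:vcloud:applicationPortProfile:...",   # look the ID up via the API
    },
}
```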

Read more »

New Tool: VMware Product Interop Diff

I've published a new tool that allows you to quickly compare the list of supported product versions for VMware products. The tool's goal is to make the upgrade process easier. You no longer have to manually check the Interop Matrix for compatible product versions.

VMware Product Interoperability Diff Tool

Please do not hesitate to comment if you have any questions or encounter an error with the new tool.

How does it work?

Read more »

VMware vSAN on Consumer-Grade SSDs - Endurance analysis

When you are running an ESXi-based homelab, you might have considered using vSAN as the storage technology of choice. Hyperconverged storage is a growing alternative to SAN-based systems in virtual environments, so using it at home helps you build your skillset with that technology.

To get started with vSAN you need at least 3 ESXi Hosts, each equipped with 2 drives. Alternatively, you can build a 2-node vSAN Cluster using a Raspberry Pi as a witness node.

VMware maintains a special HCL that lists supported drives to be used with vSAN. In production setups, it is very important to use certified hardware. Using non-enterprise hardware might result in data loss and bad performance caused by the lack of power loss protection and small caches.

This article takes a look at consumer-grade SSDs and their endurance when used with vSAN. Please be aware that non-certified hardware should only be used in homelabs or for demo purposes. Do not place sensitive data on a vSAN datastore that runs on consumer hardware.
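The key figure for endurance is the drive's TBW rating, which you can convert into drive writes per day (DWPD) to compare it with enterprise-class drives. A small worked example with illustrative numbers (the measurements in the article are separate):

```python
def drive_writes_per_day(tbw: float, capacity_tb: float, warranty_years: float = 5) -> float:
    """Convert a TBW endurance rating into DWPD (drive writes per day)."""
    return tbw / (capacity_tb * warranty_years * 365)

# Illustrative numbers, not measurements from the article: a 1 TB consumer
# NVMe SSD rated for 600 TBW over a 5-year warranty period.
dwpd = drive_writes_per_day(tbw=600, capacity_tb=1.0)
print(f"{dwpd:.2f} DWPD")   # about 0.33, well below the 1-3 DWPD common for enterprise drives
```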

Read more »

vSphere 7.0 Performance Counter Description

This is a list of all performance metrics available in vSphere vCenter Server 7.0. Performance counters can be viewed for Virtual Machines, Hosts, Clusters, Resource Pools, and other objects by opening Monitor > Performance in the vSphere Client.

These performance counters can also be used for performance analysis with esxcfg-perf.pl or PowerCLI.
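If you prefer to pull the counter list straight from your own vCenter, the PerformanceManager in the vSphere API exposes it. A minimal pyVmomi sketch, with hostname and credentials as placeholders:

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect

# Connect to vCenter; replace host/user/pwd. The default SSL context verifies
# certificates, so lab systems with self-signed certs need a trusted CA first.
ctx = ssl.create_default_context()
si = SmartConnect(host="vcenter.example.com",
                  user="administrator@vsphere.local",
                  pwd="***",
                  sslContext=ctx)
try:
    perf_manager = si.RetrieveContent().perfManager
    # perfCounter lists every counter the vCenter knows: group.name.rollup + unit
    for c in perf_manager.perfCounter:
        name = f"{c.groupInfo.key}.{c.nameInfo.key}.{c.rollupType}"
        print(f"{c.key:6d}  {name:45s}  {c.unitInfo.label}")
finally:
    Disconnect(si)
```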

Read more »

NSX-T: Client 'admin' exceeded request rate of 100 per second.

NSX-T has a default API rate limit of 100 requests per second, per client. This limit can already be triggered when multiple people use the GUI with the same admin account. If you are using the API to get status information or to configure your platform, you very likely know the error. When you exceed the limit, the following message is displayed:

Client 'admin' exceeded request rate of 100 per second.

This article shows a couple of methods to mitigate the limit.
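One of those methods is simply backing off on the client side. A small Python sketch (not from the article) that retries a request when the manager rejects it with HTTP 429 because the rate limit was exceeded; URL and credentials are placeholders:

```python
import time
import requests

def nsx_get(session: requests.Session, url: str, retries: int = 5) -> requests.Response:
    """GET with exponential backoff when NSX-T answers with HTTP 429."""
    for attempt in range(retries):
        r = session.get(url)
        if r.status_code != 429:          # not rate-limited, so we are done
            r.raise_for_status()
            return r
        time.sleep(2 ** attempt)          # back off: 1s, 2s, 4s, ...
    raise RuntimeError(f"still rate-limited after {retries} attempts: {url}")

s = requests.Session()
s.auth = ("admin", "***")                 # placeholder credentials
s.verify = False                          # lab only; use proper certificates in production
r = nsx_get(s, "https://nsx.example.com/api/v1/logical-ports")
print(r.json()["result_count"])
```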

Read more »

NSX-T: How to create a Principal Identity

In NSX-T, you can't create local users. Except for the three default users (admin, root, and audit), you have to connect to an LDAP server or integrate NSX-T with VMware Identity Manager to authenticate additional users. In addition to regular users, NSX-T has the concept of Principal Identities, which are certificate-based users that can create objects that only the user itself can modify or delete.

This article explains how to create and work with a principal identity.
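To give you an idea of what the registration looks like, here is a hedged Python sketch that posts a certificate-based principal identity to the manager. The endpoint and field names are from my notes on the NSX-T API (available since 2.4), and the certificate file, name, node_id, and role are placeholders; verify the payload against the API guide for your version:

```python
import requests

NSX = "https://nsx.example.com"              # placeholder NSX-T manager

with open("svc-automation.crt") as f:        # certificate created beforehand, e.g. with openssl
    cert_pem = f.read()

body = {
    "name": "svc-automation",                # display name of the principal identity
    "node_id": "automation-node-1",          # free-form identifier for the consuming system
    "role": "enterprise_admin",              # role assigned to the identity
    "certificate_pem": cert_pem,
}

r = requests.post(f"{NSX}/api/v1/trust-management/principal-identities/with-certificate",
                  json=body, auth=("admin", "***"), verify=False)
r.raise_for_status()
print(r.json()["id"])
```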

Read more »

NSX-T 3.1 Enhancement - Shared ESXi and Edge Transport VLAN with a Single Uplink

With the release of NSX-T 3.1, improvements to inter-TEP communication within the same host have been implemented. The Edge TEP IP can now be on the same subnet as the local hypervisor TEP. This feature reduces the complexity for collapsed setups where the Edge VM runs on an ESXi host that is also part of the Geneve overlay transport zone.

The following tunnel configuration is now possible:

NSX-T 3.1 - Shared Transport VLAN

Read more »

Heads Up: VMFS6 Heap Exhaustion in ESXi 7.0

In ESXi 7.0 (Build 15843807) and 7.0b (Build 16324942), there is a known issue with the VMFS6 filesystem. The problem is solved in ESXi 7.0 Update 1. In certain workflows, memory is not freed correctly, resulting in VMFS heap exhaustion. You might be affected if your system shows the following symptoms:

  • Datastores are showing "Not consumed" on hosts
  • Virtual Machines fail to vMotion
  • Virtual Machines become orphaned when powered off
  • Snapshot creation fails with "An error occurred while saving the snapshot: Error."

In the vmkernel.log, you see the following error messages:

  • Heap vmfs3 already at its maximum size. Cannot expand
  • Heap vmfs3: Maximum allowed growth (#) too small for size (#)
  • Failed to initialize VMFS distributed locking on volume #: Out of memory
  • Failed to get object 28 type 1 uuid # FD 0 gen 0: Out of memory
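A quick way to check whether a host is affected is to scan vmkernel.log (or a copy pulled from a log bundle) for exactly these messages. A small Python sketch, with the log path as it appears on an ESXi host:

```python
import re

# Patterns mirror the vmkernel.log messages listed above.
PATTERNS = [
    r"Heap vmfs3 already at its maximum size",
    r"Heap vmfs3: Maximum allowed growth",
    r"Failed to initialize VMFS distributed locking",
    r"Out of memory",
]

def scan_vmkernel_log(path: str) -> None:
    """Print every log line that matches one of the heap-exhaustion patterns."""
    regex = re.compile("|".join(PATTERNS))
    with open(path, errors="replace") as log:
        for line in log:
            if regex.search(line):
                print(line.rstrip())

scan_vmkernel_log("/var/run/log/vmkernel.log")   # path on an ESXi host
```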

Read more »