Virtualization

vSphere with Tanzu - SupervisorControlPlaneVM Excessive Disk WRITE IO

After deploying the latest version of VMware vSphere with Tanzu (vCenter Server 7.0 U1d / v1.18.2-vsc0.0.7-17449972), I noticed that the Virtual Machines running the Control Plane (SupervisorControlPlaneVM) had a constant disk write IO of 15 MB/s with over 3000 IOPS. I hadn't seen this in previous versions, and as this is a completely new setup with no namespaces created yet, something had to be wrong.

After troubleshooting the Supervisor Control Plane, it turned out that the problem was caused by Fluent Bit, the log processor used by Kubernetes. The log was constantly being spammed with debug messages. Reducing the log level solved the problem for me.
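Fluent Bit's verbosity is controlled by the Log_Level key in the [SERVICE] section of its configuration. The fragment below only sketches that setting; where the configuration actually lives on the SupervisorControlPlaneVM (for example, in a ConfigMap managed by the cluster) depends on your deployment and is an assumption here.

```ini
[SERVICE]
    # Valid levels: error, warning, info, debug, trace.
    # Lowering the level from "debug" to "info" stops the constant debug spam.
    Log_Level    info
```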

Read More »vSphere with Tanzu - SupervisorControlPlaneVM Excessive Disk WRITE IO

11th Gen Intel NUC - Which is the best candidate to run ESXi?

Intel has finally announced its 11th Generation NUCs. For the first time, all three product lines were announced at the same time. The NUC series is very popular in homelabs and for running VMware ESXi. The systems are small, silent, portable, and have very low power consumption.

In this article, I'm going to take a look at the three product lines and how they compare to each other and to previous NUCs.

Read More »11th Gen Intel NUC - Which is the best candidate to run ESXi?

How to Shrink vCenter Server Appliance (vCSA) Disks

Upgrading a vCenter Server Appliance (vCSA) multiple times can leave you with very large virtual disks. The reason for the growth is that with every vCenter upgrade, you have to select a larger deployment size. In some cases, you might end up with a vCenter that has over 1 TB of storage allocated, but less than 100 GB in use.

When upgrading vCenter 6.7 to vCenter 7.0, the upgrader calculates the source machine size based on the old virtual machine's disks and allocated memory. It doesn't matter how much storage is actually in use. Here is an example of a "Tiny" vCSA 6.7 that I want to upgrade. The system currently has 416 GB allocated, which means that I can't upgrade to "Tiny".

In this article, I'm describing how to shrink the virtual machine to prevent it from growing during the upgrade. I recommend using this method only right before an upgrade because it changes the order of the VMDKs attached to the vCenter. While this shouldn't be a problem for the vCenter itself, it might cause problems when you need support from VMware GSS.

Read More »How to Shrink vCenter Server Appliance (vCSA) Disks

VMware Cloud Director 10 - Network cannot be deleted, because it is in use

A problem I have run into a couple of times in VMware Cloud Director 10 is that you can't delete an organization network, even though it is definitely no longer in use by anything. When you try to delete the network, the following error message is displayed:

Error: Network 172.16.1.0-24 cannot be deleted, because it is in use by the following vApp Networks: 172.16.1.0-24.

From my observation, this happens quite often when you work with Data Center Groups, which were introduced in VCD 10.2, but I've also seen it in earlier versions. As stated in the error message, the network was added to a vApp, but that vApp no longer exists.

In this article, I'm explaining how to remove undeletable networks without messing with the VCD database.

Read More »VMware Cloud Director 10 - Network cannot be deleted, because it is in use

Deploy Container Service Extension (CSE 3.0) in VMware Cloud Director 10.2

Container Service Extension (CSE) 3.0 was released alongside Cloud Director 10.2. With CSE 3.0 you can extend your cloud offering by providing Kubernetes as a Service. Customers can create and manage their own K8s clusters directly in the VMware Cloud Director portal.

I've already described how to deploy vSphere with Tanzu based Kubernetes clusters in VCD. CSE 3.0 with the "Native K8s Runtime" is a neat alternative that allows you to deploy K8s directly into the customer's organization networks, which is currently not possible with Tanzu.

This article explains how to integrate CSE 3.0 in VMware Cloud Director 10.2.

Read More »Deploy Container Service Extension (CSE 3.0) in VMware Cloud Director 10.2

Heads Up: NAT Configuration Changed in Cloud Director 10.2

With the release of Cloud Director 10.2, a major change to the NSX-T based NAT configuration has been implemented. The change affects how you set up DNAT and has caused some confusion after the upgrade.

In previous versions, the Application Profile (e.g. SSH, HTTP, or HTTPS) defined both the external and internal port. With the optional "Internal Port" setting, it was possible to configure a custom internal port.

With Cloud Director 10.2, the Application Profile defines the internal port only. If you do not fill in the "External Port" setting, which sits in exactly the same position as the "Internal Port" setting in previous versions, the rule translates ALL external ports to the configured application. This is something you absolutely do not want, and I've seen a lot of misconfigured NAT rules since Cloud Director 10.2.

Read More »Heads Up: NAT Configuration Changed in Cloud Director 10.2

New Tool: VMware Product Interop Diff

I've published a new tool that allows you to quickly compare the lists of supported product versions for VMware products. The tool's goal is to make the upgrade process easier: you no longer have to manually check the Interoperability Matrix for compatible product versions.

VMware Product Interoperability Diff Tool

Please do not hesitate to comment if you have any questions or encounter an error with the new tool.

How does it work?

Read More »New Tool: VMware Product Interop Diff

VMware vSAN on Consumer-Grade SSDs - Endurance analysis

When you are running an ESXi-based homelab, you might have considered using vSAN as your storage technology of choice. Hyperconverged storage is a growing alternative to SAN-based systems in virtual environments, so using it at home helps you improve your skill set with the technology.

To get started with vSAN, you need at least three ESXi hosts, each equipped with two drives. Alternatively, you can build a 2-node vSAN cluster using a Raspberry Pi as a witness node.

VMware maintains a dedicated HCL that lists the drives supported for use with vSAN. In production setups, it is very important to use certified hardware. Using non-enterprise hardware might result in data loss and poor performance due to the lack of power-loss protection and small caches.

This article takes a look at consumer-grade SSDs and their endurance when used with vSAN. Please be aware that non-certified hardware should only be used in homelabs or for demo purposes. Do not place sensitive data on a vSAN that runs on consumer hardware.
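At its core, an endurance estimate is simple arithmetic: divide the drive's rated Terabytes Written (TBW) by the daily write volume. The figures below are illustrative assumptions, not measurements from the article:

```python
def endurance_years(tbw_rating_tb: float, daily_writes_gb: float) -> float:
    """Estimate drive lifespan in years from its TBW rating and daily write load."""
    daily_writes_tb = daily_writes_gb / 1024  # GB -> TB (binary units)
    return tbw_rating_tb / (daily_writes_tb * 365)

# Hypothetical example: a consumer SSD rated for 600 TBW, with ~100 GB
# of vSAN writes per day in a homelab
print(f"{endurance_years(600, 100):.1f} years")  # -> 16.8 years
```

Real-world lifespan also depends on write amplification from the cache tier and the drive's firmware, so treat this as an upper bound rather than a guarantee.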

Read More »VMware vSAN on Consumer-Grade SSDs - Endurance analysis

vSphere 7.0 Performance Counter Description

This is a list of all performance metrics available in vSphere vCenter Server 7.0. Performance counters can be viewed for Virtual Machines, Hosts, Clusters, Resource Pools, and other objects by opening Monitor > Performance in the vSphere Client.

These performance counters can also be used for performance analysis with esxcfg-perf.pl, or PowerCLI.

Read More »vSphere 7.0 Performance Counter Description

NSX-T: Client 'admin' exceeded request rate of 100 per second.

NSX-T has a default API rate limit of 100 requests per second, per client. This limit can already be triggered by the GUI when multiple people are using the admin account. If you are using the API to gather status information or configure your platform, you very likely know the error. When you exceed the limit, the following message is displayed:

Client 'admin' exceeded request rate of 100 per second.

This article shows a couple of methods to mitigate the limit.
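One client-side mitigation is to pace your own API calls so they stay below the limit. This is a minimal sketch of such a limiter; the NSX-T request itself is only indicated as a hypothetical placeholder comment:

```python
import time

class RateLimiter:
    """Blocks callers so that calls never exceed max_per_second (fixed-interval pacing)."""

    def __init__(self, max_per_second: float):
        self.interval = 1.0 / max_per_second
        self.next_allowed = 0.0

    def wait(self) -> None:
        # Sleep until the next request slot is available, then reserve the one after it.
        now = time.monotonic()
        if now < self.next_allowed:
            time.sleep(self.next_allowed - now)
        self.next_allowed = max(now, self.next_allowed) + self.interval

# Stay safely below the NSX-T default of 100 requests per second
limiter = RateLimiter(max_per_second=80)
for _ in range(5):
    limiter.wait()
    # hypothetical NSX-T API call, e.g. session.get("https://nsx-manager/api/v1/...")
```

This only throttles a single client; raising the limit on the NSX Manager itself is a separate, server-side option.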

Read More »NSX-T: Client 'admin' exceeded request rate of 100 per second.