
#unsupported

Direct Org Network to TKC Network Communication in Cloud Director 10.2

Since VMware introduced vSphere with Tanzu support in VMware Cloud Director 10.2, I have been struggling to find a proper way to implement a solution that gives customers bidirectional communication between Virtual Machines and Pods. In earlier Kubernetes implementations using the Container Service Extension (CSE) "Native Cluster", workers and the control plane were placed directly in Organization networks. Communication between Pods and Virtual Machines was quite easy, even if they were placed in different subnets, because traffic could be routed through the Tier1 Gateway.

With Tanzu meeting VMware Cloud Director, Kubernetes Clusters have their own Tier1 Gateway. While it would be technically possible to implement routing between the Tanzu and VCD Tier1s through the Tier0, the typical Cloud Director Org Network is hidden behind NAT, and there is no way to prevent overlapping networks when advertising Tier1 routers to the upstream Tier0. The following diagram shows the VCD networking with Tanzu enabled.

With Cloud Director 10.2.2, VMware further optimized the implementation by automatically setting up Firewall Rules on the TKC Tier1 that only allow the tenant's Org Networks to access Kubernetes services. They also published a guide on how customers can NAT their public IP addresses to TKC Ingress addresses to make them accessible from the Internet. The method is described here (see Publish Kubernetes Services using VCD Org Networks). Unfortunately, the need to communicate from Pods to Virtual Machines in VCD still does not seem to be in VMware's scope.

While trying to develop a decent solution using Kubernetes Endpoints, I came up with a questionable workaround. I highly doubt that these methods are supported or suitable for production, but I still want to share them to show what could actually be possible.
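To illustrate the Endpoints idea: a selector-less Service combined with a manually maintained Endpoints object gives Pods a stable in-cluster name for a Virtual Machine in the Org Network. This is only a minimal sketch with placeholder names, IP, and port; whether the traffic actually reaches the VM still depends on the routing and NAT situation described above.

# Hypothetical example: expose a VM in the Org Network (10.10.10.50) to Pods
# under the in-cluster name "orgnet-vm". All values are placeholders.
kubectl apply -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: orgnet-vm
spec:
  ports:
    - port: 5432
      targetPort: 5432
---
apiVersion: v1
kind: Endpoints
metadata:
  name: orgnet-vm              # must match the Service name
subsets:
  - addresses:
      - ip: 10.10.10.50        # VM IP in the VCD Org Network
    ports:
      - port: 5432
EOF

# Pods can now address the VM as orgnet-vm.<namespace>.svc.cluster.local:5432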

Read More »Direct Org Network to TKC Network Communication in Cloud Director 10.2

How to Migrate SupervisorControlPlaneVM in vSphere with Tanzu

When you try to migrate the Control Plane of a Workload Management enabled vSphere 7 cluster using vMotion or Storage vMotion, the following warning is displayed:

"This option is not available because you do not have the required permissions."

This article explains why manual migrations of the SupervisorControlPlaneVM shouldn't be necessary in general and how to work around the limitation if you still want to migrate it manually.

Read More »How to Migrate SupervisorControlPlaneVM in vSphere with Tanzu

Create Virtual Machines in vSphere with Tanzu using kubectl

This article explains how you can create Virtual Machines in Kubernetes Namespaces in vSphere with Tanzu. Deploying Virtual Machines in Kubernetes namespaces using kubectl has been shown in demonstrations but is currently (as of vSphere 7.0 U2) not supported. Only third-party integrations like TKG can create Virtual Machines by leveraging the vmoperator.

With the kubernetes-admin account, accessible from the SupervisorControlPlaneVM, you can create Virtual Machines today.

Please keep in mind that this is not officially supported by VMware.
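For reference, a VirtualMachine object handled by the vmoperator looks roughly like the sketch below. Treat it as an unsupported, hypothetical example: the image name, VM class, storage class, and network type are placeholders that depend on the content library and the configuration of your vSphere Namespace.

# Unsupported sketch: create a VM through the vmoperator API (all values are placeholders)
kubectl apply -f - <<EOF
apiVersion: vmoperator.vmware.com/v1alpha1
kind: VirtualMachine
metadata:
  name: demo-vm
  namespace: demo-namespace        # vSphere Namespace
spec:
  imageName: centos-stream-8       # image from the subscribed content library
  className: best-effort-small     # VM class assigned to the namespace
  storageClass: vsan-default-storage-policy
  powerState: poweredOn
  networkInterfaces:
    - networkType: nsx-t           # or vsphere-distributed, depending on the setup
EOF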

Read More »Create Virtual Machines in vSphere with Tanzu using kubectl

Run pgAdmin in a Docker container on the vCenter Server Appliance

In the last article, I explained how to manage the vCenter Server Appliance vPostgres databases with pgAdmin. This article goes a step further and explains how to run pgAdmin in a Docker container on the vCenter Server Appliance itself. This method works with vCenter Server Appliance versions 6.5, 6.7, and 7.0.
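To give a rough idea of what the article covers: once a Docker daemon is available on the Photon OS based appliance, pgAdmin can be started from the public dpage/pgadmin4 image. A minimal sketch, assuming Docker is already installed; port and credentials are placeholders:

# On the VCSA shell (assumes the docker daemon is present on Photon OS)
systemctl start docker

# Run pgAdmin and publish it on port 8000 of the appliance
docker run -d --name pgadmin \
  -p 8000:80 \
  -e PGADMIN_DEFAULT_EMAIL=admin@example.com \
  -e PGADMIN_DEFAULT_PASSWORD=VMware1! \
  dpage/pgadmin4

# Then open http://<vcsa-address>:8000 in a browser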

Read More »Run pgAdmin in a Docker container on the vCenter Server Appliance

VMware EVC Mode to Enable Intel Gen5-Gen10 NUC vMotion

Many VMware homelabs are based on Intel NUCs. It is also very common that generations are mixed, which can lead to compatibility issues when trying to vMotion VMs across different generations. This is typically where VMware EVC comes into play.

VMware EVC creates a baseline of CPU instructions for virtual machines running on ESXi hosts. When you add newer hosts, EVC hides the new CPU instructions from the virtual machines. While this works great for the Xeon CPUs used in professional servers, it has some limitations with the consumer CPUs used in the Intel NUC ecosystem.

The problem has become worse with the latest 10th Gen Comet Lake/Frost Canyon NUCs. Despite having a 10th generation CPU, they require the EVC baseline to be configured to "Sandy Bridge", which is the 2nd generation of Intel Core i CPUs:

  • NUC10i7FNH/NUC10i7FNK (Intel Core i7-10710U - 6 Core, up to 4.7 GHz)
  • NUC10i5FNH/NUC10i5FNK (Intel Core i5-10210U - 4 Core, up to 4.2 GHz)
  • NUC10i3FNH/NUC10i3FNK (Intel Core i3-10110U - 2 Core, up to 4.1 GHz)

When you try to activate VMware EVC higher than Sandy Bridge, the following error message is displayed:

Compatibility
The host's CPU hardware does not support the cluster's current Enhanced vMotion Compatibility mode. The host CPU lacks features required by that mode.

When you try to add the Host to an EVC Enabled Cluster, the task fails:

Operation failed!
The host's CPU hardware does not support the cluster's current Enhanced vMotion Compatibility mode. The host CPU lacks features required by that mode.
CPUID faulting is not supported.
See KB 1003212 for more information.
Host is of type: vendor intel family 0x6 model 0xa6
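The family and model values from the error message can be checked directly on the host, which helps to map a NUC to the correct EVC baseline. A quick check via SSH (field names and formatting may differ slightly between ESXi builds):

# Show the CPU details as reported by ESXi
esxcli hardware cpu list | head -n 12

# Family and Model correspond to the values in the error message,
# e.g. family 0x6 and model 0xa6 (166) for the Comet Lake NUC10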

Read More »VMware EVC Mode to Enable Intel Gen5-Gen10 NUC vMotion

Homelab: Downsizing vCenter Server Appliance 6.5

In vSphere 6.5, the smallest supported memory configuration for the vCenter Server Appliance has been raised from 8GB to 10GB. The smallest "Tiny" deployment size supports up to 10 ESXi hosts and 100 Virtual Machines. Resources in homelabs are limited, and you might want to lower the memory consumption of the vCenter Server Appliance. This article explains how to reduce the resource consumption so that the memory can be lowered to about 6GB without noticeable impact.
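As an idea of the direction the article takes: a first step is usually to identify services that are not needed in a lab and stop them with the appliance's service-control tool. A sketch, assuming SSH access to the appliance; the service names are only examples and vary between builds:

# List all services and their current state
service-control --status

# Stop services that are typically not needed in a small homelab (examples)
service-control --stop vmware-perfcharts
service-control --stop vmware-statsmonitor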

Read More »Homelab: Downsizing vCenter Server Appliance 6.5

USB Devices as VMFS Datastore in vSphere ESXi 6.5

In ESXi 6.5, there are some changes concerning devices connected with USB. The legacy drivers, including xhci, ehci-hcd, usb-uhci, and usb-storage, have been replaced with a single USB driver named vmkusb. The new driver has some implications if you are trying to use USB devices like USB sticks or external hard disks as a VMFS formatted datastore.
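Independent of the driver change, the usual prerequisite is to stop the USB arbitrator service so that USB devices are no longer reserved for passthrough and can be claimed as datastores. Roughly, via SSH on the ESXi host:

# Stop the USB arbitrator so USB devices can be used for datastores
/etc/init.d/usbarbitrator stop

# Keep it disabled across reboots
chkconfig usbarbitrator off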

Some people have reported issues with USB datastores since ESXi 6.5. I've tried to reproduce and fix those problems. This post explains the changes in the new version and how to create VMFS 5 or VMFS 6 formatted USB devices as datastores on your ESXi host.

Read More »USB Devices as VMFS Datastore in vSphere ESXi 6.5