Cloud Director

Client VPN with WireGuard in VMware Cloud Director backed by NSX-T

When you are running an NSX-T backed VMware Cloud Director, your customers might have the requirement to implement a remote access VPN solution. While Client VPN was possible with NSX-v, it is completely missing in NSX-T. This article explains how a customer can implement a Client VPN solution based on WireGuard. The implementation is fully in the hands of the customer and requires neither VCD modifications nor Service Provider interaction.

WireGuard is not part of VMware Cloud Director but can easily be installed on a Virtual Machine in the tenant's network. The following diagram shows the environment that is used throughout the article.
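As a preview of what such a setup involves, a minimal WireGuard server configuration on the tenant VM could look like the following sketch. All addresses, ports, and key placeholders are illustrative assumptions, not taken from the article:

```ini
# /etc/wireguard/wg0.conf - hypothetical server config on the tenant VM
[Interface]
Address    = 10.200.0.1/24        # VPN tunnel network (example range)
ListenPort = 51820                # default WireGuard UDP port
PrivateKey = <server-private-key> # generate with: wg genkey

[Peer]
# One road-warrior client
PublicKey  = <client-public-key>
AllowedIPs = 10.200.0.2/32        # tunnel IP assigned to this client
```

The interface is then brought up with `wg-quick up wg0`; the Edge Gateway additionally needs a DNAT rule forwarding the chosen UDP port to the VM.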


VMware Cloud Director 10 - Network cannot be deleted, because it is in use

A problem I ran into a couple of times in VMware Cloud Director 10 is that you can't delete an organization network, even though it is definitely no longer in use by anything. When you try to delete the network, the following error message is displayed:

Error: Network 172.16.1.0-24 cannot be deleted, because it is in use by the following vApp Networks: 172.16.1.0-24.

From my observation, this happens quite often when you work with Datacenter Groups, which were introduced in VCD 10.2, but I've also seen it before. As stated in the error message, the network was added to a vApp, but that vApp actually no longer exists.

In this article, I'm explaining how to remove undeletable networks without messing with the VCD Database.

Deploy Container Service Extension (CSE 3.0) in VMware Cloud Director 10.2

With the release of Cloud Director 10.2, the Container Service Extension 3.0 has been released. With CSE 3.0 you can extend your cloud offering by providing Kubernetes as a Service. Customers can create and manage their own K8s clusters directly in the VMware Cloud Director portal.

I've already described how to deploy vSphere with Tanzu based Kubernetes Clusters in VCD. CSE 3.0 with the "Native K8s Runtime" is a neat alternative that allows you to deploy K8s directly into the customer's Organization networks, which is currently not possible with Tanzu.

This article explains how to integrate CSE 3.0 in VMware Cloud Director 10.2.
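The CSE server is driven by a YAML configuration file describing the Cloud Director and vCenter endpoints. The following is a heavily abbreviated sketch with placeholder values only; field names can differ between CSE versions, so generate the authoritative template with `cse sample` before editing:

```yaml
# config.yaml - abbreviated sketch, all values are placeholders
mqtt:
  verify_ssl: true            # CSE 3.0 with VCD 10.2 uses MQTT instead of AMQP

vcd:
  host: vcd.example.com       # VMware Cloud Director endpoint
  username: administrator
  password: '***'

vcs:                          # vCenter Server(s) backing the Provider VDCs
- name: vcenter.example.com
  username: administrator@vsphere.local
  password: '***'

broker:
  catalog: cse-catalog        # catalog holding the K8s template OVAs
  org: cse-org                # organization owning that catalog
```

With the configuration in place, the server is typically installed and started with `cse install -c config.yaml` followed by `cse run -c config.yaml`.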


Heads Up: NAT Configuration Changed in Cloud Director 10.2

With the release of Cloud Director 10.2, a major change to the NSX-T based NAT configuration has been implemented. The change affects how you set up DNAT and has caused some confusion after the upgrade.

In previous versions, the Application Profile (e.g. SSH, HTTP, or HTTPS) defined both the external and the internal port. With the optional "Internal Port" setting, it was possible to configure a custom internal port.

With Cloud Director 10.2, the Application Profile defines the internal port only. If you do not fill in the "External Port" setting, which sits in exactly the same position as the "Internal Port" setting in previous versions, the rule translates ALL external ports to the configured Application. This is something you absolutely do not want, and I've seen a lot of misconfigured NAT rules since Cloud Director 10.2.
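To illustrate the change, consider a hypothetical DNAT rule that should forward external port 2222 to SSH on an internal VM (field labels paraphrased, not screenshots):

```
DNAT rule in Cloud Director 10.1 and earlier:
  Application:   SSH      # defined external AND internal port (22)
  Internal Port: 2222     # optional override of the internal port

DNAT rule in Cloud Director 10.2:
  Application:   SSH      # defines the INTERNAL port only (22)
  External Port: 2222     # must be set explicitly;
                          # left empty, ALL external ports are translated
```

The takeaway: after upgrading, always verify that "External Port" is filled in on every DNAT rule.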


Change TKG Cluster Service and Pod CIDR in Cloud Director 10.2

A major problem when deploying "vSphere with Tanzu" Clusters in VMware Cloud Director 10.2 is that the defaults for TKG Clusters overlap with the defaults for the Supervisor Cluster configured in vCenter Server during the Workload Management enablement.

When you deploy a Kubernetes Cluster using the new Container Service Extension in VCD 10.2, it deploys the cluster in a namespace on top of the Supervisor Cluster in the vCenter Server. The Supervisor Cluster's IP address ranges for the Ingress CIDRs and Services CIDR must not overlap with 10.96.0.0/12 and 192.168.0.0/16, which are the defaults for TKG Clusters. Unfortunately, 10.96.0.0 is also the default when enabling Workload Management, so the deployment will fail when you stick to the defaults. The following error messages are displayed when you have overlapping networks:

spec.settings.network.pods.cidrBlocks intersects with the network range of the external ip pools in network provider's configuration
spec.settings.network.pods.cidrBlocks intersects with the network range of the external ip pools LB in network provider's configuration

This article explains a workaround that you can apply when deleting and reconfiguring the Namespace Management with non-overlapping addresses is not an option.
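For reference, the fields named in the error message live in the TanzuKubernetesCluster manifest. A sketch of the relevant section, with example non-default CIDRs chosen to avoid the overlap (the exact ranges must be adapted to your environment):

```yaml
apiVersion: run.tanzu.vmware.com/v1alpha1
kind: TanzuKubernetesCluster
spec:
  settings:
    network:
      pods:
        cidrBlocks:
        - 172.20.0.0/16      # example Pod CIDR replacing 192.168.0.0/16
      services:
        cidrBlocks:
        - 172.30.0.0/16      # example Services CIDR replacing 10.96.0.0/12
```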


Troubleshooting "vSphere with Tanzu" Integration in VCD 10.2

During my first attempts to integrate "vSphere with Tanzu" into VMware Cloud Director 10.2, I ran into a couple of issues. The integration just wasn't as smooth as I expected, and many configuration errors are not caught by the GUI. There are also a lot of prerequisites that need to be followed strictly.

In this article, I'm going through the issues I had during the deployment and how to solve them.

Configure "vSphere with Tanzu" in VMware Cloud Director 10.2

With the release of Cloud Director 10.2, you can now integrate "vSphere with Tanzu" Kubernetes Clusters into VMware Cloud Director. That enables you to create a self-service platform for Kubernetes Clusters that are backed by the Kubernetes integration in vSphere 7.0.

This article explains how to integrate vSphere with Tanzu in VMware Cloud Director 10.2.


Introducing Simplified Deployment for VMware Cloud Director 10.2

With the release of Cloud Director 10.2, VMware aims to make the deployment easier and more robust with a new deployment UI that includes error checking. In previous versions, you had to provide the initial configuration with vApp options during the OVA deployment. When there was a problem, which was very common, especially with the NFS share, you had to redeploy the system. Redeploying the appliance multiple times was very time-consuming.

In Cloud Director 10.2, the operation has been split into two stages, as known from the vCenter Server appliance. You first deploy the OVA with some basic settings that are not error-prone, and then log into a web interface to do the actual Cloud Director configuration, like setting up the NFS share and creating the Administrator account.

This article does a quick review of the installation using my OVF Helper Scripts and the new two-stage appliance system setup.
