VCD

Direct Org Network to TKC Network Communication in Cloud Director 10.2

Since VMware introduced vSphere with Tanzu support in VMware Cloud Director 10.2, I've been struggling to find a proper way to implement a solution that gives customers bidirectional communication between Virtual Machines and Pods. In earlier Kubernetes implementations using the Container Service Extension (CSE) "Native Cluster" runtime, workers and the control plane were placed directly in Organization networks. Communication between Pods and Virtual Machines was quite easy, even if they were placed in different subnets, because traffic could be routed through the Tier1 Gateway.

With Tanzu meeting VMware Cloud Director, Kubernetes Clusters have their own Tier1 Gateway. While it would be technically possible to implement routing between the Tanzu and VCD Tier1s through the Tier0, the typical Cloud Director Org Network is hidden behind NAT, and there is just no way to prevent overlapping networks when advertising Tier1 Routers to the upstream Tier0. The following diagram shows the VCD networking with Tanzu enabled.

With Cloud Director 10.2.2, VMware further refined the implementation by automatically setting up Firewall Rules on the TKC Tier1 that only allow the tenant's Org Networks to access Kubernetes services. They also published a guide on how customers can NAT their public IP addresses to TKC Ingress addresses to make them accessible from the Internet. The method is described here (see Publish Kubernetes Services using VCD Org Networks). Unfortunately, communicating from Pods to Virtual Machines in VCD still seems to be out of VMware's scope.
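As a rough sketch of that publishing approach (service name, addresses, and the Ingress Pool below are placeholders of my own, not values from the guide):

```bash
# Expose a workload in the TKC; by default the load balancer gets an
# address from the Ingress Pool (placeholder: 10.99.100.0/24).
kubectl expose deployment my-app --type=LoadBalancer --port=80
kubectl get service my-app   # note the EXTERNAL-IP, e.g. 10.99.100.5

# Then, in the VCD tenant portal (Edge Gateway > NAT), create a DNAT
# rule that translates one of the tenant's public IPs to that Ingress
# address, e.g. 203.0.113.10 -> 10.99.100.5.
```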

While working on a decent solution using Kubernetes Endpoints, I also came up with a questionable workaround. I highly doubt that these methods are supported or useful in production, but I still want to share them to show what could actually be possible.
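To give an idea of the Endpoints-based approach (a sketch, not a supported configuration; all names and the VM address are placeholders): a selector-less Service combined with a manually maintained Endpoints object gives Pods a stable in-cluster name for a Virtual Machine outside the cluster.

```bash
kubectl apply -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: org-vm            # in-cluster name the Pods will use
spec:
  ports:
    - port: 8080          # no selector, so no Pods are matched
---
apiVersion: v1
kind: Endpoints
metadata:
  name: org-vm            # must match the Service name
subsets:
  - addresses:
      - ip: 203.0.113.20  # placeholder: address under which the VM is reachable
    ports:
      - port: 8080
EOF
# Pods can now reach the VM as org-vm:8080.
```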

Read More »Direct Org Network to TKC Network Communication in Cloud Director 10.2

Access Org Network Services from TKC Guest Cluster in VMware Cloud Director with Tanzu

Many applications running on container platforms still require external resources like databases. In the last article, I explained how to access TKC resources from VMware Cloud Director Tenant Org Networks. In this article, I'm going to explain how to access a database running on a Virtual Machine in VMware Cloud Director from a Tanzu Kubernetes Cluster that was deployed using the latest Container Service Extension (CSE) in VMware Cloud Director 10.2.

If you are not familiar with the vSphere with Tanzu integration in VMware Cloud Director, the following diagram shows the communication paths. I have a single VCD Org that has a MySQL Server running in an Org Network. When traffic leaves the Org Network, the private IP address is translated (SNAT) to a public IP from the VCD external network (203.0.113.0/24). The customer also has a Tanzu Kubernetes Cluster (TKC) deployed using VMware Cloud Director. This creates another Tier1 Gateway, which is connected to the same upstream Tier0 Router. When the TKC communicates externally, it is also translated on its Tier1 using an address from the Egress Pool (10.99.200.0/24).

So the two networks cannot communicate with each other directly. As of VMware Cloud Director 10.2.2, communication is only implemented to work in one direction: Org Network -> TKC. This works by automatically configuring a SNAT on the Org T1 to the Org's primary public address. With this address, the Org Network can reach all Kubernetes services that are exposed with an address from the Ingress Pool, which is the default when exposing services in a TKC.
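A quick sketch of that working direction (names and addresses are placeholders):

```bash
# In the TKC: expose a deployment; the EXTERNAL-IP is allocated from
# the Ingress Pool by default (placeholder: 10.99.100.8).
kubectl expose deployment web --type=LoadBalancer --port=80
kubectl get service web

# On a VM in the Org Network: thanks to the automatic SNAT on the
# Org T1, the Ingress address is directly reachable.
curl http://10.99.100.8/
```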

Read More »Access Org Network Services from TKC Guest Cluster in VMware Cloud Director with Tanzu

Deploy Container Service Extension (CSE 3.0) in VMware Cloud Director 10.2

Along with Cloud Director 10.2, VMware has released the Container Service Extension (CSE) 3.0. With CSE 3.0, you can extend your cloud offering by providing Kubernetes as a Service. Customers can create and manage their own K8s clusters directly in the VMware Cloud Director portal.

I've already described how to deploy vSphere with Tanzu based Kubernetes Clusters in VCD. CSE 3.0 with the "Native K8s Runtime" is a neat alternative that allows you to deploy K8s directly into the customer's Organization networks, which is currently not possible with Tanzu, as shown in the sketch below.
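For a flavor of the tenant workflow (command syntax is from the CSE 3.0 era and all names are placeholders; check `vcd cse cluster create --help` for your version):

```bash
# Install the CSE client plugin for vcd-cli and log in as a tenant user.
pip install container-service-extension
vcd login vcd.example.com myorg myuser --password '***' -i -w

# Create a native cluster with two workers directly in an Org network.
vcd cse cluster create mycluster --network my-org-net --nodes 2
```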

This article explains how to integrate CSE 3.0 in VMware Cloud Director 10.2.

Read More »Deploy Container Service Extension (CSE 3.0) in VMware Cloud Director 10.2

Introducing Simplified Deployment for VMware Cloud Director 10.2

With the release of Cloud Director 10.2, VMware aims to make the deployment easier and more robust with a new deployment UI that includes error checking. In previous versions, you had to provide the initial configuration with vApp options during the OVA deployment. When there was a problem, which was very common, especially with the NFS share, you had to redeploy the system. Redeploying the appliance multiple times was very time-consuming.

In Cloud Director 10.2, the operation has been split into two stages, as you may know it from the vCenter Server Appliance. You first deploy the OVA with some basic settings that are not error-prone, and then log into a web interface to do the actual Cloud Director configuration, like setting up the NFS share and creating the Administrator Account.
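A sketch of stage 1 with ovftool (the deployment option, network mapping, and property key below are placeholders; check the OVA itself for the exact names):

```bash
ovftool --acceptAllEulas --powerOn \
  --name=vcd-cell-01 \
  --datastore=datastore1 \
  --deploymentOption=primary-small \
  --net:"eth0 Network"=Management \
  --prop:vami.ip0.VMware_vCloud_Director=192.0.2.10 \
  VMware_Cloud_Director-10.2.0.ova \
  'vi://administrator@vsphere.local@vcenter.example.com/DC/host/Cluster1/'

# Stage 2: open https://192.0.2.10:5480 and finish the configuration
# (NFS share, Administrator Account) in the new web UI.
```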

This article takes a quick look at the installation using my OVF Helper Scripts and the new two-stage appliance setup.

Read More »Introducing Simplified Deployment for VMware Cloud Director 10.2