
Access Org Network Services from TKC Guest Cluster in VMware Cloud Director with Tanzu

Many applications running in container platforms still require external resources like databases. In the last article, I explained how to access TKC resources from VMware Cloud Director tenant Org networks. In this article, I'm going to explain how to access a database running on a Virtual Machine in VMware Cloud Director from a Tanzu Kubernetes Cluster that was deployed using the latest Container Service Extension (CSE) in VMware Cloud Director 10.2.

If you are not familiar with the vSphere with Tanzu integration in VMware Cloud Director, the following diagram shows the communication paths. I have a single VCD Org that has a MySQL Server running in an Org network. When traffic leaves the Org network, the private IP address is translated (SNAT) to a public IP from the VCD external network (203.0.113.0/24). The customer also has a Tanzu Kubernetes Cluster (TKC) deployed using VMware Cloud Director. This creates another Tier-1 Gateway, which is connected to the same upstream Tier-0 Router. When the TKC communicates externally, it is also translated on its Tier-1 using an address from the Egress Pool (10.99.200.0/24).

So, both networks cannot communicate with each other directly. As of VMware Cloud Director 10.2.2, communication is only implemented to work in one direction: Org Network -> TKC. This is done by automatically configuring a SNAT rule on the Org Tier-1 to its primary public address. With this address, the Org network can reach all Kubernetes services that are exposed using an address from the Ingress Pool, which is the default when exposing services in TKC.
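Once the setup described in the article is in place, a quick test from inside the TKC confirms that a pod can actually reach the database. The snippet below is only a minimal sketch using PyMySQL; the address 203.0.113.10 (picked from the external network range mentioned above) and the credentials are placeholders, not values from the article.

```python
# Minimal reachability check, meant to be run from a pod in the TKC guest cluster.
# Assumptions: the MySQL VM is published behind 203.0.113.10 (placeholder public
# address from the 203.0.113.0/24 external network) and PyMySQL is installed.
import pymysql

conn = pymysql.connect(
    host="203.0.113.10",       # public address the Org SNAT rule translates to
    port=3306,
    user="appuser",            # placeholder credentials
    password="changeme",
    database="appdb",
    connect_timeout=5,
)
with conn.cursor() as cur:
    cur.execute("SELECT VERSION()")
    print("Connected, MySQL version:", cur.fetchone()[0])
conn.close()
```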

Read More »Access Org Network Services from TKC Guest Cluster in VMware Cloud Director with Tanzu

Deploy Container Service Extension (CSE 3.0) in VMware Cloud Director 10.2

Together with Cloud Director 10.2, VMware released the Container Service Extension (CSE) 3.0. With CSE 3.0 you can extend your cloud offering by providing Kubernetes as a Service. Customers can create and manage their own K8s clusters directly in the VMware Cloud Director portal.

I've already described how to deploy vSphere with Tanzu based Kubernetes clusters in VCD. CSE 3.0 with the "Native K8s Runtime" is a neat alternative that allows you to deploy K8s directly into the customer's Organization networks, which is currently not possible with Tanzu.
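For tenants who prefer scripting over the portal, cluster creation with the native runtime can also be driven through the CSE extension of vcd-cli. The following is only a sketch that wraps the CLI from Python; the cluster name, Org network, and the exact flags are assumptions that may differ between CSE releases, so check them against the CSE client documentation.

```python
# Sketch: create a native-runtime cluster by calling the vcd-cli CSE extension.
# Assumptions: vcd-cli with the CSE client plugin is installed, you are already
# logged in (vcd login ...), and the names/flags below are placeholders.
import subprocess

subprocess.run(
    [
        "vcd", "cse", "cluster", "create", "demo-cluster",
        "--network", "org-routed-net",            # Org network the nodes attach to
        "--nodes", "2",                           # number of worker nodes
        "--ssh-key", "/home/user/.ssh/id_rsa.pub",
    ],
    check=True,
)
```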

This article explains how to integrate CSE 3.0 in VMware Cloud Director 10.2.

Read More »Deploy Container Service Extension (CSE 3.0) in VMware Cloud Director 10.2

Introducing Simplified Deployment for VMware Cloud Director 10.2

With the release of Cloud Director 10.2, VMware aims to make the deployment easier and more robust with a new deployment UI that includes error-checking. In previous versions, you had to provide the initial configuration with vApp options during the OVA deployment. When there was a problem, which was very common, especially with the NFS share, you had to redeploy the system. Redeploying the appliance multiple times was very time-consuming.

In Cloud Director 10.2, the operation has been split into two stages, as you may know it from the vCenter Server Appliance. You first deploy the OVA with some basic settings that are not error-prone and then log into a web interface to do the actual Cloud Director configuration, like setting up the NFS share and creating the administrator account.
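If you script the OVA rollout, it can be handy to wait until the stage-2 setup interface answers before continuing. The snippet below is a minimal sketch using only the Python standard library; the appliance address is a placeholder and port 5480 is an assumption based on the usual VMware appliance management interface.

```python
# Sketch: wait for the appliance's stage-2 setup UI before continuing automation.
# Assumptions: the freshly deployed appliance got 192.0.2.50 (placeholder) and
# serves its management/setup UI on TCP port 5480.
import socket
import time

APPLIANCE = ("192.0.2.50", 5480)

while True:
    try:
        with socket.create_connection(APPLIANCE, timeout=5):
            print("Setup UI is reachable - continue with stage-2 configuration.")
            break
    except OSError:
        print("Not ready yet, retrying in 15 seconds ...")
        time.sleep(15)
```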

This article does a quick review of the installation using my OVF Helper Scripts and the new two-stage appliance system setup.

Read More »Introducing Simplified Deployment for VMware Cloud Director 10.2