
Cloud Director

Deploy High Available Firewall Appliances in VMware Cloud Director

When customers deploy their services to a Cloud Datacenter delivered with VMware Cloud Director, they quite often want to use their own virtual Firewall Appliance rather than the Edge and Distributed Firewall built into the NSX infrastructure. Many administrators prefer their well-known CheckPoint, Fortinet, or pfSense appliances for seamless configuration management. While using standalone virtual Firewall Appliances is not an issue in general, there are some caveats with HA deployments, which can be addressed with features implemented in recent versions of VMware Cloud Director.

This article explains how to deploy highly available Firewall Appliances in VMware Cloud Director 10.5.

Read More »Deploy High Available Firewall Appliances in VMware Cloud Director

Terraform vcd_network_routed_v2 with cidrhost() Calculated IPv6 Address Format Issue - "forces replacement"

After a long time without IPv6 support in the Terraform Provider for VMware Cloud Director, release v3.10.0 finally brings IPv6 Dual-Stack support for routed networks. Unfortunately, when you use the Terraform-native cidrhost() function, you might run into an issue caused by the different formats in which an IPv6 address can be written. The format in which the IP address is calculated differs from the format that the VCD API returns, which forces Terraform to replace the resource.
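
To illustrate the underlying problem, here is a minimal Python sketch (the address strings are made-up example values, not taken from the article): the same IPv6 address can be written in an expanded and a compressed form, and as long as the two sides are compared as plain strings rather than parsed addresses, Terraform sees a difference and plans a replacement.

```python
import ipaddress

# Two textual representations of the same IPv6 host address
# (example values only; the exact forms produced by cidrhost() and
# returned by the VCD API are discussed in the full article).
expanded   = "2001:0db8:0010:0000:0000:0000:0000:0001"
compressed = "2001:db8:10::1"

# Compared as plain strings, they look like two different values ...
print(expanded == compressed)  # False

# ... but parsed as IPv6 addresses they are identical.
a = ipaddress.IPv6Address(expanded)
b = ipaddress.IPv6Address(compressed)
print(a == b)        # True
print(a.compressed)  # 2001:db8:10::1 (canonical compressed form)
```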

Read More »Terraform vcd_network_routed_v2 with cidrhost() Calculated IPv6 Address Format Issue - "forces replacement"

NOT_AUTHENTICATED Error with PowerCLI 13.1 - Cloud Director Authentication Changes

After updating PowerCLI to version 13.1, which was released in April 2023, a couple of scripts that use the Session Token provided by Connect-CIServer fail with the following error:

Invoke-WebRequest: {"minorErrorCode":"NOT_AUTHENTICATED","message":"[] This operation is denied.","stackTrace":null}

According to the official announcement, there have been changes to the authentication mechanism of Connect-CIServer. This change does not affect the functions that come with PowerCLI, but it does affect many community functions and scripts that include custom API calls.

In previous versions, you could simply grab the authentication token stored in the $global:DefaultCIServers.SessionId global variable and use it with an x-vcloud-authorization header in your custom API calls. Since PowerCLI 13.1, you get a Bearer Token as SessionId/SessionSecret instead.
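
To show what this change means for raw API calls, here is a minimal sketch in Python (used here purely for illustration; the hostname, token value, API version, and the /cloudapi/1.0.0/orgs endpoint are placeholder assumptions, not taken from the article): the legacy style sends the session id in the x-vcloud-authorization header, while the token handed out by PowerCLI 13.1 belongs in a standard Authorization: Bearer header.

```python
import requests

VCD_HOST = "vcd.example.com"                             # placeholder hostname
TOKEN = "<value of $global:DefaultCIServers.SessionId>"  # placeholder token

# Legacy style (pre-13.1 scripts): session id in the vCloud-specific header.
legacy_headers = {
    "x-vcloud-authorization": TOKEN,
    "Accept": "application/json;version=37.0",  # adjust to your VCD API version
}

# PowerCLI 13.1+ style: the SessionId is a bearer token, so it goes into
# the standard Authorization header instead.
bearer_headers = {
    "Authorization": f"Bearer {TOKEN}",
    "Accept": "application/json;version=37.0",
}

# Example request (endpoint chosen for illustration only).
resp = requests.get(f"https://{VCD_HOST}/cloudapi/1.0.0/orgs", headers=bearer_headers)
resp.raise_for_status()
print(resp.json())
```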

Read More »NOT_AUTHENTICATED Error with PowerCLI 13.1 - Cloud Director Authentication Changes

VMware Cloud Director Quick Tip - API Explorer (Swagger) not visible for Org Admins

In VMware Cloud Director 10, the API Explorer (Swagger) is not visible for Organization Administrators.

When they try to access /api-explorer/tenant/[ORG]/, an HTTP ERROR 403 - Forbidden is shown.

The right to use the API Explorer is not part of the default set of rights. To allow tenants to use the API Explorer, edit the Rights Bundle and the default Role:

Read More »VMware Cloud Director Quick Tip - API Explorer (Swagger) not visible for Org Admins

Troubleshooting CSE 3.1 TKGm Integration with VMware Cloud Director 10.3

This article recaps issues I encountered during the integration of VMware Container Service Extension 3.1 to allow the deployment of Tanzu Kubernetes Grid (TKGm) clusters in VMware Cloud Director 10.3.

If you are interested in an Implementation Guide, refer to Deploy CSE 3.1 with TKGm Support in VCD 10.3 and First Steps with TKGm Guest Clusters in VCD 10.3.

  • CSE Log File Location
  • DNS Issues during Photon Image Creation
  • Disable rollbackOnFailure to troubleshoot TKGm deployment errors
  • Template cookbook version 1.0.0 is incompatible with CSE running in non-legacy mode
  • https://[IP-ADDRESS] should have a https scheme and match CSE server config file
  • 403 Client Error: Forbidden for url: https://[VCD]/oauth/tenant/demo/register
  • NodeCreationError: failure on creating nodes ['mstr-xxxx']
  • Force Delete TKGm Clusters / Can't delete TKGm Cluster / Delete Stuck in DELETE:IN_PROGRESS

Read More »Troubleshooting CSE 3.1 TKGm Integration with VMware Cloud Director 10.3

First Steps with TKGm Guest Clusters in VMware Cloud Director 10.3

In the previous article, I've explained how to deploy Container Service Extension 3.1 with TKGm Support in VMware Cloud Director 10.3. In this article, I'm taking a look at how the Tanzu Kubernetes Grid Cluster is integrated into the Organization VDC and how the Tenant can access and work with the Kubernetes Cluster.


Read More »First Steps with TKGm Guest Clusters in VMware Cloud Director 10.3

Deploy CSE 3.1 with TKGm Support in VMware Cloud Director 10.3

With the release of Cloud Director 10.3 and Container Service Extension 3.1 (CSE), you have an additional option to deploy Kubernetes Clusters: "Tanzu Kubernetes Grid Multi-Cloud", aka TKGm. With TKGm you now have 4 options to offer Kubernetes as a Service to your customers:

  • TKGm (Multi-Cloud)
  • TKGs (vSphere with Tanzu)
  • Native
  • TKG-I (Enterprise PKS)

Yes, there is a reason why TKGm and TKGs are in bold letters. If you are starting today, forget about "Native" and "TKG-I". "TKGm" works similarly to "Native" but is far superior. TKG-I (TKG Integrated Edition, formerly known as VMware Enterprise PKS) is deprecated as of CSE 3.1 and will be removed in future releases.

This article explains how to integrate CSE 3.1 in VMware Cloud Director 10.3.

Read More »Deploy CSE 3.1 with TKGm Support in VMware Cloud Director 10.3

Direct Org Network to TKC Network Communication in Cloud Director 10.2

Since VMware introduced vSphere with Tanzu support in VMware Cloud Director 10.2, I have been struggling to find a proper way to implement a solution that gives customers bidirectional communication between Virtual Machines and Pods. In earlier Kubernetes implementations using the Container Service Extension (CSE) "Native Cluster", workers and the control plane were placed directly in Organization networks. Communication between Pods and Virtual Machines was quite easy, even if they were placed in different subnets, because traffic could be routed through the Tier1 Gateway.

With Tanzu meeting VMware Cloud Director, Kubernetes Clusters have their own Tier1 Gateway. While it would be technically possible to implement routing between Tanzu and VCD Tier1s through Tier0, the typical Cloud Director Org Network is hidden behind a NAT. There is just no way to prevent overlapping networks when advertising Tier1 Routers to the upstream Tier0. The following diagram shows the VCD networking with Tanzu enabled.

With Cloud Director 10.2.2, VMware further optimized the implementation by automatically setting up Firewall Rules on the TKC Tier1 that allow only the tenant's Org Networks to access Kubernetes services. They also published a guide on how customers can NAT their public IP addresses to TKC Ingress addresses to make them accessible from the Internet. The method is described here (see Publish Kubernetes Services using VCD Org Networks). Unfortunately, the need to communicate from Pods to Virtual Machines in VCD still does not seem to be in VMware's scope.

While working on a proper solution using Kubernetes Endpoints, I came up with a questionable workaround. I highly doubt that these methods are supported or useful in production, but I still want to share them to show what could be possible.

Read More »Direct Org Network to TKC Network Communication in Cloud Director 10.2

Access Org Network Services from TKC Guest Cluster in VMware Cloud Director with Tanzu

Many applications running in container platforms still require external resources like databases. In the last article, I explained how to access TKC resources from VMware Cloud Director Tenant Org Networks. In this article, I'm going to explain how to access a database running on a Virtual Machine in VMware Cloud Director from a Tanzu Kubernetes Cluster that was deployed using the latest Container Service Extension (CSE) in VMware Cloud Director 10.2.

If you are not familiar with the vSphere with Tanzu integration in VMware Cloud Director, the following diagram shows the communication paths. I have a single VCD Org with a MySQL Server running in an Org network. When traffic leaves the Org Network, the private IP address is translated (SNAT) to a public IP from the VCD external network (203.0.113.0/24). The customer also has a Tanzu Kubernetes Cluster (TKC) deployed using VMware Cloud Director. This creates another Tier1 Gateway, which is connected to the same upstream Tier0 Router. When the TKC communicates externally, its traffic is also translated on the Tier1 using an address from the Egress Pool (10.99.200.0/24).

So, the two networks cannot communicate with each other directly. As of VMware Cloud Director 10.2.2, communication is only implemented to work in one direction: Org Network -> TKC. This works by automatically configuring a SNAT on the Org T1 to its primary public address. With this address, the Org Network can reach all Kubernetes services that are exposed using an address from the Ingress Pool, which is the default when exposing services in TKC.

Read More »Access Org Network Services from TKC Guest Cluster in VMware Cloud Director with Tanzu

VMware Cloud Director 10.2.2 and vSphere with Tanzu Enhancements

VMware Cloud Director 10.2.2 brings a couple of enhancements to the vSphere with Tanzu integration. While we are still waiting for VRF support in vSphere with Tanzu to fully separate Supervisor Namespaces, the implementation introduced in VCD 10.2.2 should be valid for production workloads.

This article explains new features and issues I had during the implementation:

  • VCD with Supervisor Control Plane communication
  • Tanzu Certificate Issues
  • Tanzu Kubernetes Cluster Tenant Network Isolation
  • Publish Kubernetes Services using VCD Org Networks

Read More »VMware Cloud Director 10.2.2 and vSphere with Tanzu Enhancements