

VMware Cloud Director Quick Tip - API Explorer (Swagger) not visible for Org Admins

In VMware Cloud Director 10, the API Explorer (Swagger) is not visible for Organization Administrators.

When they try to access /api-explorer/tenant/[ORG]/, an HTTP ERROR 403 - Forbidden is shown.

The right to use the API Explorer is not part of the default set of rights published to tenants. To allow tenants to use the API Explorer, edit the Rights Bundle and the default Organization Administrator role.
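
If you prefer to script the change instead of clicking through the provider UI, a minimal sketch against the VCD CloudAPI could look like the following. The right name ("API Explorer"), the filter syntax, and the rights/rightsBundles endpoints are assumptions from memory, so verify them in your environment first:

  import requests

  VCD = "https://vcd.example.com"               # hypothetical provider endpoint
  ACCEPT = "application/json;version=36.0"

  # Log in as a provider administrator; the bearer token comes back in a response header.
  s = requests.Session()
  r = s.post(f"{VCD}/cloudapi/1.0.0/sessions/provider",
             auth=("administrator@System", "password"),
             headers={"Accept": ACCEPT})
  r.raise_for_status()
  s.headers.update({"Accept": ACCEPT,
                    "Authorization": f"Bearer {r.headers['X-VMWARE-VCLOUD-ACCESS-TOKEN']}"})

  # Find the right by name. The exact right name is an assumption - check it in the provider UI.
  rights = s.get(f"{VCD}/cloudapi/1.0.0/rights",
                 params={"filter": "name==API Explorer"}).json()["values"]
  print(rights)

  # List the rights bundles to pick the one published to the tenant. Adding the right to the
  # bundle (and to the Organization Administrator role) is assumed to follow the same
  # /cloudapi/1.0.0/rightsBundles/{id}/rights pattern - confirm it via the provider API Explorer.
  for bundle in s.get(f"{VCD}/cloudapi/1.0.0/rightsBundles").json()["values"]:
      print(bundle["id"], bundle["name"])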

Read More »VMware Cloud Director Quick Tip - API Explorer (Swagger) not visible for Org Admins

Troubleshooting CSE 3.1 TKGm Integration with VMware Cloud Director 10.3

This article recaps issues I ran into while integrating VMware Container Service Extension 3.1 (CSE) with VMware Cloud Director 10.3 to allow the deployment of Tanzu Kubernetes Grid Clusters (TKGm).

If you are interested in an Implementation Guide, refer to Deploy CSE 3.1 with TKGm Support in VCD 10.3 and First Steps with TKGm Guest Clusters in VCD 10.3.

  • CSE Log File Location
  • DNS Issues during Photon Image Creation
  • Disable rollbackOnFailure to troubleshoot TKGm deployment errors (see the sketch after this list)
  • Template cookbook version 1.0.0 is incompatible with CSE running in non-legacy mode
  • https://[IP-ADDRESS] should have a https scheme and match CSE server config file
  • 403 Client Error: Forbidden for url: https://[VCD]/oauth/tenant/demo/register
  • NodeCreationError: failure on creating nodes ['mstr-xxxx']
  • Force Delete TKGm Clusters / Can't delete TKGm Cluster / Delete Stuck in DELETE:IN_PROGRESS
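
For the rollbackOnFailure item, here is a minimal sketch of how such a spec change could be scripted, assuming the flag sits under spec.settings in the CSE 3.1 cluster-apply specification (verify the exact location in your own spec file):

  import yaml  # PyYAML

  # Load the cluster specification that is passed to `vcd cse cluster apply`.
  with open("cluster.yaml") as f:
      spec = yaml.safe_load(f)

  # Assumption: rollbackOnFailure sits under spec.settings in the CSE 3.1 apply spec.
  # With rollback disabled, failed control plane / worker VMs stay around for inspection.
  spec.setdefault("spec", {}).setdefault("settings", {})["rollbackOnFailure"] = False

  with open("cluster.yaml", "w") as f:
      yaml.safe_dump(spec, f, sort_keys=False)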

Read More »Troubleshooting CSE 3.1 TKGm Integration with VMware Cloud Director 10.3

First Steps with TKGm Guest Clusters in VMware Cloud Director 10.3

In the previous article, I've explained how to deploy Container Service Extension 3.1 with TKGm Support in VMware Cloud Director 10.3. In this article, I'm taking a look at how the Tanzu Kubernetes Grid Cluster is integrated into the Organization VDC and how the Tenant can access and work with the Kubernetes Cluster.
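
To give a feel for the tenant side, here is a small sketch that checks access to the cluster with the official Kubernetes Python client, assuming the kubeconfig has already been downloaded (for example with the CSE client's cluster config command) to tkgm-kubeconfig.yaml; the file name is just an example:

  from kubernetes import client, config

  # Kubeconfig handed out by CSE/VCD for the TKGm guest cluster (path is an assumption).
  config.load_kube_config(config_file="tkgm-kubeconfig.yaml")
  v1 = client.CoreV1Api()

  # Control plane and worker nodes that CSE deployed into the Organization VDC.
  for node in v1.list_node().items:
      addresses = {a.type: a.address for a in node.status.addresses}
      print(node.metadata.name,
            node.status.node_info.kubelet_version,
            addresses.get("InternalIP"))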


Read More »First Steps with TKGm Guest Clusters in VMware Cloud Director 10.3

Deploy CSE 3.1 with TKGm Support in VMware Cloud Director 10.3

With the release of Cloud Director 10.3 and Container Service Extension 3.1 (CSE), you have an additional option to deploy Kubernetes Clusters: "Tanzu Kubernetes Grid Multi-Cloud", aka TKGm. With TKGm, you now have four options to offer Kubernetes as a Service to your customers:

  • TKGm (Multi-Cloud)
  • TKGs (vSphere with Tanzu)
  • Native
  • TKG-I (Enterprise PKS)

Yes, there is a reason why TKGm and TKGs are in bold letters. If you are starting today, forget about "Native" and "TKG-I". "TKGm" works similarly to "Native" but is far superior. TKG-I (TKG Integrated Edition, formerly known as VMware Enterprise PKS) is deprecated as of CSE 3.1 and will be removed in a future release.

This article explains how to integrate CSE 3.1 in VMware Cloud Director 10.3.
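
To anticipate the rough shape of the installation: the CSE 3.1 server is a Python application installed from PyPI and driven by a config file. A purely illustrative sketch of the provider-side commands, wrapped in Python, could look like this; the version pin and flags are assumptions, so refer to the article and the official CSE documentation for the authoritative procedure:

  import subprocess

  # Illustrative only: install the CSE 3.1 server, register it with VCD, and start it.
  # config.yaml holds the VCD, vCenter, and broker settings (encrypted by default;
  # --skip-config-decryption is used here for a plain-text file).
  commands = [
      ["pip", "install", "container-service-extension==3.1.1"],
      ["cse", "install", "-c", "config.yaml", "--skip-config-decryption"],
      ["cse", "run", "-c", "config.yaml", "--skip-config-decryption"],
  ]

  for cmd in commands:
      print("+", " ".join(cmd))
      subprocess.run(cmd, check=True)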

Read More »Deploy CSE 3.1 with TKGm Support in VMware Cloud Director 10.3

NSX-ALB Integration in VMware Cloud Director 10.3 with Terraform

In an earlier article, I've explained how to integrate VMware NSX Advanced Load Balancer (formerly known as AVI) into VMware Cloud Director. Today, I want to show how to automate those steps with Terraform. Terraform is an open-source infrastructure-as-code software tool created by HashiCorp. The following steps are part of the automated configuration:

  • Create vCenter Content Library for SE Images
  • Create Service Engine Management Network with DHCP in NSX-T
  • Create NSX-T Cloud in NSX-ALB
  • Create Service Engine Group in NSX-ALB
  • Add NSX-ALB Controller to VMware Cloud Director
  • Import NSX-T Cloud to VMware Cloud Director
  • Import Service Engine Group to VMware Cloud Director

Read More »NSX-ALB Integration in VMware Cloud Director 10.3 with Terraform

SSL Load Balancer in VMware Cloud Director with NSX-ALB (AVI)

With the NSX Advanced Load Balancer integration in Cloud Director 10.2 or later, you can enable SSL offloading to secure your customers' websites. This article explains how to request a Let's Encrypt certificate, import it into VMware Cloud Director, and enable SSL offloading in NSX-ALB. This allows tenants to publish websites in a secure manner.
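
As a rough idea of how the certificate part could be scripted, here is a sketch that requests a certificate with certbot and pushes it into the tenant's certificate library. The CloudAPI certificate library endpoint and payload field names are assumptions, so double-check them in the API Explorer of your VCD version; the domain and credentials are placeholders:

  import subprocess
  import requests

  DOMAIN = "shop.tenant.example.com"            # hypothetical tenant website
  VCD = "https://vcd.example.com"
  ACCEPT = "application/json;version=36.0"

  # Request the certificate with certbot (standalone HTTP-01 challenge on port 80).
  subprocess.run(["certbot", "certonly", "--standalone", "-d", DOMAIN], check=True)

  with open(f"/etc/letsencrypt/live/{DOMAIN}/fullchain.pem") as f:
      certificate = f.read()
  with open(f"/etc/letsencrypt/live/{DOMAIN}/privkey.pem") as f:
      private_key = f.read()

  # Tenant login; the bearer token is returned in a response header.
  s = requests.Session()
  r = s.post(f"{VCD}/cloudapi/1.0.0/sessions",
             auth=("orgadmin@demo", "password"),
             headers={"Accept": ACCEPT})
  r.raise_for_status()
  s.headers.update({"Accept": ACCEPT,
                    "Authorization": f"Bearer {r.headers['X-VMWARE-VCLOUD-ACCESS-TOKEN']}"})

  # Assumption: certificate library endpoint and field names.
  payload = {"alias": DOMAIN, "certificate": certificate, "privateKey": private_key}
  s.post(f"{VCD}/cloudapi/1.0.0/ssl/certificateLibrary", json=payload).raise_for_status()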

Read More »SSL Load Balancer in VMware Cloud Director with NSX-ALB (AVI)

Shared Service Engine Groups in VMware Cloud Director with NSX Advanced Load Balancer

In the Getting Started with NSX Advanced Load Balancer Integration in VMware Cloud Director 10.3 Guide, I've explained how to enable "Load Balancing as a Service" in VCD with dedicated Service Engines. With this Service Engine deployment model, each Edge Gateway is statically assigned to a dedicated NSX-ALB Service Engine Group. That means that for each Edge Gateway you create in VCD, you have to create a Service Engine Group, which consists of multiple Service Engines (Virtual Machines).

Service Engine Groups can also be deployed in a shared model. Shared Service Engine groups can be assigned to multiple Edge Gateways. In this deployment model, a single Service Engine (Virtual Machine) can handle traffic for multiple customers. For obvious security reasons, and to prevent problems with overlapping networks, VRFs are used inside the SE to fully separate the data traffic.

This article explains how to use Shared Service Engine Groups in VMware Cloud Director 10.3.
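
To illustrate what the assignment looks like at the API level, a sketch along these lines should cover attaching a shared Service Engine Group to an Edge Gateway with per-gateway virtual service quotas. The endpoint paths and payload fields are written from memory, so treat them as assumptions and confirm them in the API Explorer; the gateway ID and quotas are placeholders:

  import requests

  VCD = "https://vcd.example.com"
  TOKEN = "..."   # provider bearer token from POST /cloudapi/1.0.0/sessions/provider

  s = requests.Session()
  s.headers.update({"Accept": "application/json;version=36.0",
                    "Authorization": f"Bearer {TOKEN}"})

  # Pick a shared Service Engine Group that was imported into VCD.
  seg = s.get(f"{VCD}/cloudapi/1.0.0/loadBalancer/serviceEngineGroups").json()["values"][0]

  # Assign it to an Edge Gateway and cap the number of virtual services placed on the shared SEG.
  assignment = {
      "gatewayRef": {"id": "urn:vcloud:gateway:<edge-gateway-id>"},   # placeholder
      "serviceEngineGroupRef": {"id": seg["id"]},
      "minVirtualServices": 1,
      "maxVirtualServices": 10,
  }
  s.post(f"{VCD}/cloudapi/1.0.0/loadBalancer/serviceEngineGroups/assignments",
         json=assignment).raise_for_status()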

Read More »Shared Service Engine Groups in VMware Cloud Director with NSX Advanced Load Balancer

Getting Started with NSX Advanced Load Balancer Integration in VMware Cloud Director 10.3

When you are using NSX-T as the network backend for VMware Cloud Director, you can't use the native load balancer included in NSX-T. Since Cloud Director 10.2, the NSX Advanced Load Balancer (ALB), previously known as the AVI Vantage Platform, has been integrated to allow customers to create self-service load balancers.

This article explains all steps required to integrate NSX ALB into VMware Cloud Director.
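
As a taste of the first step, registering the ALB controller can also be done against the CloudAPI. The endpoint path and payload fields below are assumptions based on the load balancer API and should be verified in the API Explorer of your VCD version; the names, URL, and credentials are placeholders:

  import requests

  VCD = "https://vcd.example.com"
  TOKEN = "..."   # provider bearer token from POST /cloudapi/1.0.0/sessions/provider

  s = requests.Session()
  s.headers.update({"Accept": "application/json;version=36.0",
                    "Authorization": f"Bearer {TOKEN}"})

  # Register the NSX-ALB (AVI) controller with Cloud Director (field names are assumptions).
  controller = {
      "name": "nsx-alb-controller01",
      "url": "https://alb.example.com",
      "username": "admin",
      "password": "***",
      "licenseType": "ENTERPRISE",
  }
  r = s.post(f"{VCD}/cloudapi/1.0.0/loadBalancer/controllers", json=controller)
  r.raise_for_status()
  print(r.json())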

Read More »Getting Started with NSX Advanced Load Balancer Integration in VMware Cloud Director 10.3

Direct Org Network to TKC Network Communication in Cloud Director 10.2

Since VMware introduced vSphere with Tanzu support in VMware Cloud Director 10.2, I've been struggling to find a proper way to implement a solution that gives customers bidirectional communication between Virtual Machines and Pods. In earlier Kubernetes implementations using Container Service Extension (CSE) "Native Cluster", workers and the control plane were placed directly in Organization networks. Communication between Pods and Virtual Machines was quite easy, even if they were placed in different subnets, because they could be routed through the Tier1 Gateway.

With Tanzu meeting VMware Cloud Director, Kubernetes Clusters have their own Tier1 Gateway. While it would be technically possible to implement routing between Tanzu and VCD Tier1s through Tier0, the typical Cloud Director Org Network is hidden behind a NAT. There is just no way to prevent overlapping networks when advertising Tier1 Routers to the upstream Tier0. The following diagram shows the VCD networking with Tanzu enabled.

With Cloud Director 10.2.2, VMware further optimized the implementation by automatically setting up firewall rules on the TKC Tier1 that only allow the tenant's Org Networks to access Kubernetes services. They also published a guide on how customers can NAT their public IP addresses to TKC Ingress addresses to make them accessible from the Internet. The method is described here (see Publish Kubernetes Services using VCD Org Networks). Unfortunately, communication from Pods to Virtual Machines in VCD still seems to be out of VMware's scope.

While developing a decent solution using Kubernetes Endpoints, I came up with a questionable workaround. I highly doubt that these methods are supported or useful in production, but I still want to share them to show what could actually be possible.
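
To make the Endpoints idea concrete, this is the generic Kubernetes pattern of a selector-less Service whose Endpoints point at a VM in the Org network, so that Pods can address the VM by a stable in-cluster DNS name. It is a sketch of the standard mechanism rather than the exact workaround from the article; the VM IP and port are placeholders, and it only helps if the TKC network can actually route to the Org network:

  from kubernetes import client, config

  config.load_kube_config()  # kubeconfig of the TKGm guest cluster
  v1 = client.CoreV1Api()

  NAMESPACE = "default"
  VM_IP = "192.168.10.20"    # placeholder: VM in the VCD Org network
  PORT = 5432                # placeholder: port exposed by the VM

  # Service without a selector: Kubernetes leaves its Endpoints alone.
  service = {
      "apiVersion": "v1",
      "kind": "Service",
      "metadata": {"name": "org-vm-db"},
      "spec": {"ports": [{"port": PORT}]},
  }
  v1.create_namespaced_service(NAMESPACE, service)

  # Manually maintained Endpoints object pointing at the VM's IP address.
  endpoints = {
      "apiVersion": "v1",
      "kind": "Endpoints",
      "metadata": {"name": "org-vm-db"},     # must match the Service name
      "subsets": [{"addresses": [{"ip": VM_IP}], "ports": [{"port": PORT}]}],
  }
  v1.create_namespaced_endpoints(NAMESPACE, endpoints)

  # Pods can now address the VM as org-vm-db.default.svc.cluster.local:5432,
  # provided the TKC network can route to the Org network.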

Read More »Direct Org Network to TKC Network Communication in Cloud Director 10.2