Tag Archives: NSX-T

vSphere with Kubernetes Supports Multiple Tier-0 Gateways

During my first vSphere with Kubernetes tests, I had an issue where I was not able to activate Workload Management (Kubernetes) because it discovered multiple Tier-0 gateways. The configuration I used was vSphere 7.0 GA with an NSX-T 3.0-backed N-VDS. I had a previously configured Edge Cluster / Tier-0 Gateway for existing workloads and configured a new Edge Cluster / Tier-0 for Kubernetes.

In the Workload Management Wizard, no Cluster was compatible, so I was forced to use the previously configured Tier-0 with some routing workarounds. The error message in wcpsvc.log stated "[...]has more than one tier0 gateway[...]".

Today I tried to find a solution and noticed that there was an update to the official Kubernetes Guide:

Read more »

VMware vSphere with Kubernetes Guide Part 6 - Tanzu Kubernetes Cluster

This is Part 6 of my "VMware vSphere with Kubernetes" Guide. In this article, I'm going to deploy a Tanzu Kubernetes Cluster (TKC). A TKC is a fully-featured version of the open-source Kubernetes container platform. You can provision and operate Tanzu Kubernetes clusters on top of the Supervisor Cluster.
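As a quick preview of what the deployment looks like, a minimal TanzuKubernetesCluster specification can be applied from the Supervisor Cluster context roughly like this. This is a sketch; the cluster name, namespace, Kubernetes version, VM class, and storage class below are placeholders from a lab setup, not defaults:

```sh
# Sketch: deploy a minimal TKC via the v1alpha1 API (all names are placeholders)
kubectl apply -f - <<'EOF'
apiVersion: run.tanzu.vmware.com/v1alpha1
kind: TanzuKubernetesCluster
metadata:
  name: tkc-demo            # placeholder cluster name
  namespace: demo-namespace # Supervisor Cluster namespace created earlier
spec:
  distribution:
    version: v1.16          # resolved against the images in the content library
  topology:
    controlPlane:
      count: 1
      class: best-effort-small   # VM class made available in the namespace
      storageClass: vsan-default # storage policy exposed as a storage class
    workers:
      count: 3
      class: best-effort-small
      storageClass: vsan-default
EOF
```

The full article walks through the prerequisites (content library, VM classes, storage policies) before this manifest can work.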

Read more »

VMware vSphere with Kubernetes Guide Part 4 - Working with kubectl

This is Part 4 of my "VMware vSphere with Kubernetes" Guide. In the last article, I explained how to install and configure the Kubernetes CLI Tool kubectl and how to deploy the first pod. In this article, I'm taking a deeper look at kubectl.

This is a part of a series. If you do not have a Kubernetes activated vSphere Cluster, refer to Part 1 to get started with the deployment.

Read more »

VMware vSphere with Kubernetes Guide Part 3 - kubectl Basics

This is Part 3 of my "VMware vSphere with Kubernetes" Guide. In the previous parts, I explained how to enable Kubernetes in vSphere, deploy the Harbor Registry, and create a namespace in the Supervisor Cluster. Now it's time to get familiar with the Kubernetes CLI Tool kubectl and to deploy your first pod.
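To give an idea of what's ahead, the first pod deployment boils down to a few commands like these. The server address, username, and namespace name are placeholders for my lab environment:

```sh
# Log in to the Supervisor Cluster with the vSphere kubectl plugin
# (server address and username are placeholders)
kubectl vsphere login --server=10.0.0.1 --vsphere-username administrator@vsphere.local

# Switch to the namespace created in the Supervisor Cluster
kubectl config use-context demo-namespace

# Deploy a first pod from a plain manifest and verify it comes up
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: nginx-demo
spec:
  containers:
  - name: nginx
    image: nginx
EOF
kubectl get pods
```

The article covers where to download the vSphere plugin and what happens behind the scenes in NSX-T when the pod starts.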

If you do not have a Kubernetes activated vSphere Cluster, refer to Part 1 and Part 2 for instructions.

Read more »

VMware vSphere with Kubernetes Guide Part 2 - Harbor, Namespaces and K8S Components

This is Part 2 of my "VMware vSphere with Kubernetes" Guide. In the last article, I explained how to get "Workload Management" enabled in a vSphere cluster. At this point, the cluster is successfully enabled to support Kubernetes, but what's next? Before I start deploying the first container, I'm going to enable additional services, create a Kubernetes Namespace in the Supervisor Cluster, and explore the deployed components in vCenter and NSX-T.

Read more »

Getting Started Guide - VMware vSphere with Kubernetes

With the release of vSphere 7.0, the integration of Kubernetes, formerly known as Project Pacific, has been introduced. vSphere with Kubernetes enables you to directly run containers on your ESXi cluster. This article explains how to get your cluster enabled for the so-called "Workload Management".

The article covers evaluation options, licensing options, troubleshooting, and the initial configuration.

Read more »

NSX-T and VMKUSB NIC Fling - MTU Size Considerations

When configuring an NSX-T Overlay network, you have to increase the default MTU size of 1500. It is critical that the MTU is configured consistently across the whole platform (network adapters, Distributed Switches, physical switches, NSX-T Uplink profiles). A typical MTU size is 1600 or 9000. Be careful when using high MTU sizes: most of the drivers used in the VMKUSB NIC Fling do not support an MTU size of 9000, and the overlay communication will silently fail.

The relevant step is changing the MTU size in the dvSwitch configuration.

When you change the MTU size, the configuration is pushed to all connected network interfaces. Make sure to verify that the NIC has actually been configured with the correct MTU:

[root@esx4:~] esxcfg-nics -l
Name    PCI          Driver      Link Speed      Duplex MAC Address       MTU    Description
vmnic0  0000:00:1f.6 ne1000      Up   1000Mbps   Full   00:1f:c6:9c:47:13 1500   Intel Corporation Ethernet Connection (2) I219-LM
vusb0   Pseudo       uether      Up   1000Mbps   Full   00:24:9b:1a:bd:18 1600   ASIX Elec. Corp. AX88179
vusb1   Pseudo       uether      Up   1000Mbps   Full   00:24:9b:1a:bd:19 1500   ASIX Elec. Corp. AX88179

In this case, the vusb0 adapter has been configured with an MTU of 1600, which is sufficient for NSX-T.

If you try to change the MTU size to 9000, you do not see any error messages on the dvSwitch, but the vmkernel.log reveals that the MTU could not be set:

2020-07-19T16:10:42.344Z cpu6:524356)WARNING: vmkusb: Set MTU 9000 is not supported: Failure
2020-07-19T16:10:42.344Z cpu6:524356)WARNING: Uplink: 16632: Failed to set MTU to 9000 on vusb0
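Beyond checking esxcfg-nics and the vmkernel.log, you can verify that the configured MTU actually passes end-to-end with a ping that has the don't-fragment bit set. The payload size must account for the 28 bytes of IP and ICMP headers, and the target address below is a placeholder for a neighboring host's TEP:

```sh
# Ping a neighboring TEP from the NSX-T TEP network stack with
# don't-fragment (-d) and a 1572-byte payload, which results in a
# 1600-byte IP packet (1572 + 20 bytes IP + 8 bytes ICMP headers).
vmkping ++netstack=vxlan -d -s 1572 192.168.225.12
```

If the ping with a 1572-byte payload fails while a smaller one succeeds, the MTU is not configured consistently somewhere along the path.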


Heads Up: Nested LDAP Groups Not Working in NSX-T 3.0

When using the new direct LDAP integration in NSX-T 3.0, authentication using nested groups is not working. Example:

  • User "John" is a member of the group "IT Department"
  • Group "IT Department" is a member of Group "NSX Admin"
  • Group "NSX Admin" is assigned the Enterprise Admin Role in NSX-T

User "John" can't log in because NSX-T does not search inside nested groups. There is no workaround; if you need nested groups to work, use the vIDM (VMware Identity Manager) appliance.

How to enable LDAP Authentication in NSX-T 3.0

NSX-T 3.0 has added support for authentication using AD or LDAP sources. In previous versions, you had to deploy the vIDM (VMware Identity Manager) appliance to allow external authentication. You can still use vIDM, but if you only need NSX-T authentication, you can now do it without a dedicated appliance.

This article explains how to enable LDAP authentication in NSX-T 3.0. Read more »

How to manually add NSX-T Managers to a Cluster

After deploying the first NSX-T Manager, additional managers can be deployed using the NSX-T GUI. This is a crucial step in creating a redundant and reliable setup. To deploy an additional NSX-T Manager appliance, you first have to add the target vCenter as "Compute Manager". In some cases, e.g. when the NSX-T Managers are to run in a dedicated management vCenter, you don't want to add that vCenter as a Compute Manager.

A compute manager is required to deploy an appliance. To add a compute manager, visit the COMPUTE MANAGERS page.

This article explains how to manually add additional Managers to an NSX-T Cluster using the CLI, without configuring a compute manager.
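As a preview, the manual join roughly works like this on the NSX-T CLI: collect the cluster ID and API certificate thumbprint on an existing manager, then run the join command on the freshly deployed node. The IP address and the bracketed values below are placeholders:

```sh
# On an existing NSX-T Manager: collect the cluster ID and API thumbprint
get cluster config              # note the "Cluster Id" value
get certificate api thumbprint

# On the freshly deployed manager node: join it to the existing cluster
join 192.168.10.11 cluster-id <cluster-id> username admin password <password> thumbprint <api-thumbprint>
```

The article goes through the full procedure, including how to verify the cluster status afterwards.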

Read more »