Which Intel NUC should I buy for VMware ESXi? (August 2020)

A question that comes up quite often is which NUC I would recommend. The following article shows buying options based on price/performance value, pure performance, and special needs like Dual-NIC or vPro.

VMware vSphere with Kubernetes Guide Part 7 - Octant and Lens

This is the last part of my "VMware vSphere with Kubernetes" Guide. In this article, I'm going to introduce two tools that help you get a better understanding of Kubernetes features. Both tools, Octant and Lens, are free and open source. The main difference is that Octant is browser-based while Lens is a standalone application.
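
To give an idea of how lightweight the tools are, here is a minimal sketch for getting Octant running, assuming Homebrew and a working kubeconfig (install methods vary by platform):

brew install octant   # binaries are also available from the GitHub releases page
octant                # reads the current kubeconfig and opens the dashboard in your browser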

VMware vSphere with Kubernetes Guide Part 6 - Tanzu Kubernetes Cluster

This is Part 6 of my "VMware vSphere with Kubernetes" Guide. In this article, I'm going to deploy a Tanzu Kubernetes Cluster (TKC). A TKC is a fully-featured version of the open-source Kubernetes container platform. You can provision and operate Tanzu Kubernetes clusters on top of the Supervisor Cluster.
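
To illustrate what such a deployment looks like, here is a minimal sketch of a TanzuKubernetesCluster manifest applied with kubectl. The cluster name, namespace, VM class, and storage class are placeholders that depend on what is assigned to your Supervisor Namespace:

cat <<EOF | kubectl apply -f -
apiVersion: run.tanzu.vmware.com/v1alpha1
kind: TanzuKubernetesCluster
metadata:
  name: tkc-demo                # hypothetical cluster name
  namespace: demo-namespace     # Supervisor Namespace created in vCenter
spec:
  distribution:
    version: v1.16              # resolved against the available distributions
  topology:
    controlPlane:
      count: 1
      class: best-effort-small  # VM class assigned to the namespace
      storageClass: demo-storage-policy
    workers:
      count: 3
      class: best-effort-small
      storageClass: demo-storage-policy
EOF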

VMware vSphere with Kubernetes Guide Part 5 - Create and Deploy Private Images

This is Part 5 of my "VMware vSphere with Kubernetes" Guide. In this article, I'm going to create a custom Docker image, push it to the embedded Harbor Registry, and deploy it to the Supervisor Cluster.
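
As a rough sketch of that workflow (registry address, project, and image names below are placeholders; the embedded Harbor creates one project per Supervisor Namespace, and its CA certificate must be trusted by your Docker client):

docker login 192.168.10.11                   # authenticate with your vSphere SSO credentials
docker build -t 192.168.10.11/demo-namespace/myapp:v1 .
docker push 192.168.10.11/demo-namespace/myapp:v1
kubectl create deployment myapp --image=192.168.10.11/demo-namespace/myapp:v1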

This article is part of a series. If you do not have a Kubernetes-enabled vSphere cluster, refer to Part 1 to get started with the deployment.

VMware vSphere with Kubernetes Guide Part 4 - Working with kubectl

This is Part 4 of my "VMware vSphere with Kubernetes" Guide. In the last article, I explained how to install and configure the Kubernetes CLI tool kubectl and how to deploy a first pod. In this article, I'm taking a deeper look at kubectl.
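
As a taste of the commands covered, a few examples worth trying (namespace and pod names are placeholders):

kubectl config get-contexts                  # list the namespaces/contexts you have access to
kubectl config use-context demo-namespace    # switch to a Supervisor Namespace
kubectl get pods -o wide                     # pods including node placement and IP
kubectl describe pod myapp                   # configuration and events of a single pod
kubectl explain pod.spec                     # built-in API reference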

This article is part of a series. If you do not have a Kubernetes-enabled vSphere cluster, refer to Part 1 to get started with the deployment.

VMware vSphere with Kubernetes Guide Part 3 - kubectl Basics

This is Part 3 of my "VMware vSphere with Kubernetes" Guide. In the previous parts, I explained how to enable Kubernetes in vSphere, deploy the Harbor Registry, and create a namespace in the Supervisor Cluster. Now it's time to get familiar with the Kubernetes CLI tool kubectl and to deploy your first pod.
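
As a preview of the workflow, a minimal sketch, assuming the kubectl vSphere plugin is installed and 192.168.10.10 is the Supervisor Cluster API endpoint (all addresses and names are placeholders):

kubectl vsphere login --server=https://192.168.10.10 --vsphere-username administrator@vsphere.local --insecure-skip-tls-verify
kubectl config use-context demo-namespace    # hypothetical Supervisor Namespace
kubectl run nginx --image=nginx              # deploy a first test pod
kubectl get pods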

If you do not have a Kubernetes-enabled vSphere cluster, refer to Part 1 and Part 2 for instructions.

VMware vSphere with Kubernetes Guide Part 2 - Harbor, Namespaces and K8S Components

This is Part 2 of my "VMware vSphere with Kubernetes" Guide. In the last article, I explained how to enable "Workload Management" in a vSphere cluster. At this point, the cluster is successfully enabled to support Kubernetes, but what's next? Before deploying the first container, I'm going to enable additional services, create a Kubernetes Namespace in the Supervisor Cluster, and explore the deployed components in vCenter and NSX-T.

Getting Started Guide - VMware vSphere with Kubernetes

With the release of vSphere 7.0, VMware introduced the Kubernetes integration formerly known as Project Pacific. vSphere with Kubernetes enables you to run containers directly on your ESXi cluster. This article explains how to enable your cluster for the so-called "Workload Management".

The article covers evaluation options, licensing options, troubleshooting, and the initial configuration.

How to Configure LDAPS Authentication in vCenter 7.0

This article explains how to configure LDAPS authentication in vCenter 7.0.
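
One sanity check that helps before adding the identity source: verify that the domain controller actually serves a certificate on the LDAPS port (the hostname is a placeholder):

openssl s_client -connect dc01.example.local:636 -showcerts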

NSX-T and VMKUSB NIC Fling - MTU Size Considerations

When configuring an NSX-T overlay network, you have to increase the default MTU size of 1500 bytes. It is critical that the MTU is configured consistently across the whole platform (network adapters, Distributed Switches, physical switches, and NSX-T uplink profiles). Typical MTU sizes are 1600 or 9000. Be careful with high MTU sizes: most of the drivers used in the VMKUSB NIC Fling do not support an MTU of 9000, and the overlay communication will silently fail.

The critical step is changing the MTU size in the dvSwitch configuration.

When you change the MTU size, the configuration is pushed to all connected network interfaces. Make sure to verify that the NIC has actually been configured with the correct MTU:

[root@esx4:~] esxcfg-nics -l
Name    PCI          Driver      Link Speed      Duplex MAC Address       MTU    Description
vmnic0  0000:00:1f.6 ne1000      Up   1000Mbps   Full   00:1f:c6:9c:47:13 1500   Intel Corporation Ethernet Connection (2) I219-LM
vusb0   Pseudo       uether      Up   1000Mbps   Full   00:24:9b:1a:bd:18 1600   ASIX Elec. Corp. AX88179
vusb1   Pseudo       uether      Up   1000Mbps   Full   00:24:9b:1a:bd:19 1500   ASIX Elec. Corp. AX88179

In this case, the vusb0 adapter has been configured with an MTU of 1600, which is sufficient for NSX-T.

If you try to change the MTU size to 9000, you do not see any error messages on the dvSwitch, but the vmkernel.log reveals that the MTU could not be set:

2020-07-19T16:10:42.344Z cpu6:524356)WARNING: vmkusb: Set MTU 9000 is not supported: Failure
2020-07-19T16:10:42.344Z cpu6:524356)WARNING: Uplink: 16632: Failed to set MTU to 9000 on vusb0
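
To verify that the overlay MTU works end-to-end, a ping with the don't-fragment bit set through the TEP network stack is a useful check. A sketch, assuming vmk10 is the TEP vmkernel interface and 192.168.225.11 a remote TEP (both placeholders). For an MTU of 1600, a payload of 1572 bytes (1600 minus 20 bytes IP header and 8 bytes ICMP header) must pass:

vmkping ++netstack=vxlan -I vmk10 -d -s 1572 192.168.225.11

If the ping fails with this payload size but succeeds with a smaller one, the MTU is misconfigured somewhere along the path.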