Black Screen when connecting a Monitor to Intel NUC running ESXi

When you use an Intel NUC or other consumer hardware to run ESXi and connect a monitor to access the DCUI console, you might see only a black screen. If no monitor was connected during the boot process, you can't access the screen later: it remains black, making troubleshooting impossible.

In home labs you usually do not have a monitor connected to all of your servers, but in some situations (an ESXi crash, or the need to reconfigure network settings) you want to connect a monitor to your system. A simple trick can help in that situation. Read more »
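A workaround that is commonly used for headless consumer systems is the ignoreHeadless kernel option. The following is a rough sketch, not a guaranteed fix for every system; verify the option against your ESXi version before relying on it:

```shell
# One-time override: at the boot loader screen, press Shift+O and
# append this option to the boot command line:
#   ignoreHeadless=TRUE

# To make the setting persistent, run this in the ESXi shell
# (for example via SSH) after the host has booted:
esxcfg-advcfg --set-kernel "TRUE" ignoreHeadless

# Verify the current value:
esxcfg-advcfg --get-kernel ignoreHeadless
```

With the option set, the VMkernel does not treat the host as headless, so console output keeps going to a display that is attached later.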

Realtek NIC and ESXi 7.0 - Use Passthrough to make use of the Adapter

Realtek adapters are very common in consumer hardware and SFF systems. Using SFF systems to run ESXi is a good option for home labs as they are inexpensive and have low power consumption. Unfortunately, the Realtek RTL8168, which is used in the ASUS PN50 and ZOTAC ZBOX Edge for example, is not supported by ESXi. The problem can be solved with a community-created driver in ESXi 5.x and 6.x, but not in ESXi 7.0, due to the deprecation of the VMKlinux driver stack.

You can work around the problem by using a USB-based NIC to manage ESXi. USB NICs work fine and are stable, but the embedded NIC then sits unused. If you still want to use it, you can use passthrough to hand it to a virtual machine.
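The workflow from the ESXi shell looks roughly like this. This is a sketch: the PCI address below is a hypothetical example, and the pcipassthru CLI namespace is only available on recent 7.0 releases — on older builds, toggle passthrough in the Host Client under Manage > Hardware > PCI Devices instead:

```shell
# Find the PCI address of the Realtek adapter:
lspci | grep -i realtek
# e.g. 0000:02:00.0 Network controller: Realtek ... RTL8168

# On newer ESXi 7.0 builds, passthrough can be toggled from the CLI:
esxcli hardware pci pcipassthru set -d 0000:02:00.0 -e true

# Reboot if the host requires it, then add the adapter to a VM as a
# "PCI device" and install the Realtek driver inside the guest OS.
```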

Read more »

USB Devices as VMFS Datastore in vSphere ESXi 7.0

This article explains how to add USB devices as Datastores in VMware ESXi 7.0. Adding USB devices as datastores was also possible in previous versions, but in vSphere 7 it has become even easier.

Please be aware that USB datastores are not supported by VMware, so be careful when using this method with sensitive data.

In this example, I'm using a USB 3.0 to M.2 NGFF enclosure.
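The core steps look roughly like this — a sketch of the unsupported procedure, where the device name mpx.vmhba34:C0:T0:L0 is a placeholder for your own USB disk and the end sector must come from your disk's geometry:

```shell
# Stop the USB arbitrator so ESXi can claim the disk itself
# (note: this disables USB passthrough to VMs on this host):
/etc/init.d/usbarbitrator stop
chkconfig usbarbitrator off   # keep it disabled across reboots

# Identify the USB disk:
ls /vmfs/devices/disks/

# Create a GPT label and a VMFS partition spanning the disk.
# The long GUID is the standard VMFS partition type GUID; read the
# disk size from getptbl to calculate the last usable sector:
partedUtil mklabel /vmfs/devices/disks/mpx.vmhba34:C0:T0:L0 gpt
partedUtil getptbl /vmfs/devices/disks/mpx.vmhba34:C0:T0:L0
partedUtil setptbl /vmfs/devices/disks/mpx.vmhba34:C0:T0:L0 gpt \
  "1 2048 <last-usable-sector> AA31E02A400F11DB9590000C2911D1B8 0"

# Format the new partition with VMFS6:
vmkfstools -C vmfs6 -S USB-Datastore \
  /vmfs/devices/disks/mpx.vmhba34:C0:T0:L0:1
```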

Read more »

Mark USB Storage Devices as Flash fails with "The Disk is in use" Error in ESXi

When you try to mark USB-based Storage Devices as Flash in ESXi, the following error is displayed:

Cannot change the host configuration. Cannot mark disk mpx.vmhba33:C0:T0:L0 as "Flash". "Unable to configure the disk claim rules. The disk is in use."


The error message is misleading, as the issue is not that the disk is in use. You have to configure an advanced setting in ESXi to allow USB disks to be claimed as flash.

Read more »

ESXi on AMD Ryzen based ASUS PN50

The long-awaited AMD Ryzen based PN50 by ASUS is finally available. The ESXi home lab community is constantly growing. When you want to run ESXi in a home lab, you typically want a system that is small, silent, and portable. To keep costs at a minimum, power consumption is also a very important factor. The portfolio of Small Form Factor (SFF) systems, also known as Barebone, Nettop, SoC, or Mini-PC, is enormous. Intel's NUC series is currently the most widely used system in the home lab market, but I always keep an eye on its competitors.

Today I'm going to test the ASUS PN50, which is currently being rolled out. The PN50 is available with 4 different embedded CPUs:

  • ASUS PN50 Ryzen 7 4800U (8 Cores / 16 Threads, up to 4.2 GHz)
  • ASUS PN50 Ryzen 7 4700U (8 Cores / 8 Threads, up to 4.1 GHz)
  • ASUS PN50 Ryzen 5 4500U (6 Cores / 6 Threads, up to 4.0 GHz)
  • ASUS PN50 Ryzen 3 4300U (4 Cores / 4 Threads, up to 3.7 GHz)

Will ESXi run on the Asus PN50?
Yes. It is possible to install ESXi on the ASUS PN50. Unfortunately, ASUS uses a Realtek RTL8168-based Gigabit network adapter in the PN50, which does not work with ESXi 7.0. To install ESXi 6.x, you have to use a community-created driver. If you want to use ESXi 7.0, you have to use a USB-based network adapter.

Read more »

Tips for using USB Network Adapters with VMware ESXi

Running Intel NUCs and other SFF systems with ESXi is a proven standard for virtualization home labs. One major drawback is that most of the available SFF systems have only a single Gigabit network adapter. This might be sufficient for a standalone ESXi host with a few VMs, but when you want to use shared storage or VMware NSX, you definitely want additional NICs.

This article explains some basics to consider when running USB-based network adapters with ESXi.

Read more »

ESXi VMKUSB NIC Fling adds support for 2.5GBASE-T Adapters

The USB Native Driver Fling, a popular ESXi driver by Songtao Zheng and William Lam that adds support for USB-based Network Adapters, has been updated to version 1.6. The new version has added support for RTL8156 based 2.5GBASE-T network adapters.

Multi-Gigabit network adapters with 5GBASE-T have been available for a while, but those 5GbE adapters cost about $100 USD. The new driver allows the use of 2.5GbE adapters that are available for as little as $25 USD. The driver was released yesterday, and luckily I already own a bunch of 2.5GbE adapters, so I could give it a test drive immediately.

CableCreation USB 3.0 to 2.5 Gigabit LAN Adapter (CD0673)
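Installation follows the usual Fling driver workflow. This is a sketch: the bundle filename below is a placeholder that depends on the version you download, so substitute the actual name:

```shell
# Copy the downloaded offline bundle to the host, then install it.
# On ESXi 7.0 the Fling ships as a software component:
esxcli software component apply -d /tmp/ESXi70-VMKUSB-NIC-FLING.zip

# Reboot, then check whether the 2.5GbE adapter shows up:
esxcli network nic list
```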

Read more »

vSphere with Kubernetes - Which Supervisor Cluster Settings can be edited?

When you want to deploy Kubernetes on vSphere 7, it is crucial to plan the configuration thoroughly before enabling Workload Management. Many of the configuration parameters entered in the Workload Management wizard cannot be changed after the deployment.

The following table shows which settings can be changed after the initial deployment:

Read more »

High CPU Usage Issue in vCenter Server 7.0c solved in 7.0d

VMware has released vCenter Server 7.0d, which fixes the high CPU usage issue in vCenter Server 7.0c. The issue is caused by the Workload Control Plane service (part of vSphere with Kubernetes), even if you do not have Workload Management enabled in your environment.

VMware vCenter Server 7.0.0d [Release Notes] [Download]

vSphere with Kubernetes Supports Multiple Tier-0 Gateways

During my first vSphere with Kubernetes tests, I had an issue where I was not able to activate Workload Management (Kubernetes) because it discovered multiple Tier-0 gateways. The configuration I used was vSphere 7.0 GA and an NSX-T 3.0 backed N-VDS. I had a previously configured Edge Cluster / Tier-0 Gateway for existing workloads and configured a new Edge Cluster / Tier-0 for Kubernetes.

In the Workload Management wizard, no cluster was compatible, so I was forced to use the previously configured Tier-0 with some routing workarounds. The error message in wcpsvc.log stated "[...]has more than one tier0 gateway[...]".

Today I tried to find a solution and noticed that there was an update to the official Kubernetes Guide:

Read more »