
Introducing Simplified Deployment for VMware Cloud Director 10.2

With the release of Cloud Director 10.2, VMware aims to make the deployment easier and more robust with a new deployment UI that includes error checking. In previous versions, you had to provide the initial configuration with vApp options during the OVA deployment. When something went wrong, which was quite common, especially with the NFS share, you had to redeploy the system. Redeploying the appliance multiple times was very time-consuming.

In Cloud Director 10.2, the deployment has been split into two stages, as you may know it from the vCenter Server Appliance. You first deploy the OVA with a few basic settings that are not error-prone, and then log in to a web interface to do the actual Cloud Director configuration, like setting up the NFS share and creating the administrator account.
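Stage 1 can still be scripted, for example with ovftool. This is just a minimal sketch: the network names, datastore, and the --prop key below are illustrative placeholders, not the appliance's actual OVF property names.

ovftool --acceptAllEulas --powerOn \
  --name=vcd01 --datastore=datastore1 --diskMode=thin \
  --net:"eth0 Network"="VM Network" \
  --prop:"hostname"="vcd01.lab.local" \
  VMware_Cloud_Director-10.2.ova \
  'vi://administrator@vsphere.local@vcenter.lab.local/Datacenter/host/Cluster'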

This article takes a quick look at the installation using my OVF Helper Scripts and the new two-stage appliance setup.

Read More »Introducing Simplified Deployment for VMware Cloud Director 10.2

Will ESXi 7.0 Update 1 run on Intel NUC?

VMware vSphere ESXi 7.0 Update 1 is here. If you have Intel NUCs in your homelab, you should be very careful when updating to new ESXi releases, as there might be issues. Please keep in mind that this is not an officially supported platform.

Typically, you see problems with new major releases (e.g., the Realtek problem in ESXi 7.0), but this time there seems to be a problem with 8th Gen NUCs in the 7.0 U1 release. The Intel I219-V (6) network adapter fails to load after upgrading to ESXi 7.0 U1. A fresh installation fails with the well-known "No Network Adapters" error.

To be on the safe side, here is a quick overview of which NUCs are safe to update and where you have to implement a workaround.

In the meantime: stay calm, you can run ESXi 7.0 U1 on the 8th Gen NUC!

Read More »Will ESXi 7.0 Update 1 run on Intel NUC?

ESXi on Raspberry Pi - Scripted Installation with Kickstart

In the last article, I explained how to automate the preparation required to install ESXi on a Raspberry Pi using the ESXi on ARM Fling. This article covers the final step: the ESXi installation itself.

I'm covering two installation examples:

  • A basic installation, resulting in an ESXi host with DHCP enabled
  • An advanced installation with a static IP address, datastore, NTP, SSH key, and SSH enabled

I highly recommend using the second example, as the Raspberry Pi has no battery-backed real-time clock and needs an NTP server to keep the correct time across reboots. You can simply remove any configuration snippets that you don't need.
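For illustration, a kickstart file for the advanced case can look like this minimal sketch. The IP addresses, hostname, password, and NTP server are placeholders; see the full article for the complete files.

vmaccepteula
rootpw VMware1!
install --firstdisk=usb --overwritevmfs
network --bootproto=static --ip=192.168.1.50 --netmask=255.255.255.0 --gateway=192.168.1.1 --nameserver=192.168.1.1 --hostname=esxipi1.lab.local
reboot

%firstboot --interpreter=busybox
# enable SSH
vim-cmd hostsvc/enable_ssh
vim-cmd hostsvc/start_ssh
# configure NTP so the clock is correct after reboots
echo "server 192.168.1.1" >> /etc/ntp.conf
/sbin/chkconfig ntpd on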

Read More »ESXi on Raspberry Pi - Scripted Installation with Kickstart

ESXi on RPi - Create EEPROM and Firmware SD Card with PowerShell

The installation of ESXi on a Raspberry Pi using the ESXi on ARM Fling basically involves three steps:

  1. Patch the RPi EEPROM to the latest version using an SD card. You can reuse the SD card in Step 2.
  2. Write the RPi firmware and EFI to an SD card. This is the bootloader for ESXi and it needs to remain in your RPi's SD slot, even after the ESXi installation.
  3. Install ESXi on ARM using a USB flash drive (this step is identical to x86 hardware).

I've tried to make this process unattended, so I've created two PowerShell functions that automate Steps 1 and 2.
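To give an idea of what such a function does under the hood, here is a minimal sketch (not the actual code from the article) that wipes an SD card, creates a FAT32 partition, and copies previously extracted boot files onto it. Run it as Administrator and double-check the disk number:

function Write-SDCard {
    param(
        [Parameter(Mandatory)][int]$DiskNumber,    # SD card number from Get-Disk - verify before wiping!
        [Parameter(Mandatory)][string]$SourcePath  # folder with the extracted EEPROM/firmware files
    )
    # Wipe the card and re-initialize it with an MBR partition table (required by the RPi bootloader)
    Clear-Disk -Number $DiskNumber -RemoveData -Confirm:$false
    Initialize-Disk -Number $DiskNumber -PartitionStyle MBR
    # Create a single FAT32 partition (works for cards up to 32 GB)
    $part = New-Partition -DiskNumber $DiskNumber -UseMaximumSize -AssignDriveLetter
    Format-Volume -Partition $part -FileSystem FAT32 -NewFileSystemLabel 'RPIBOOT' | Out-Null
    # Copy the boot files to the card
    Copy-Item -Path (Join-Path $SourcePath '*') -Destination "$($part.DriveLetter):\" -Recurse
}

Usage example: Write-SDCard -DiskNumber 2 -SourcePath C:\rpi-firmware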

Read More »ESXi on RPi - Create EEPROM and Firmware SD Card with PowerShell

ESXi on Raspberry Pi - Quick way to update EEPROM

Step 3.2 of the official ESXi on ARM Fling guide describes the installation of Raspberry Pi OS just for the purpose of verifying that the EEPROM is up to date. If you do not have a preinstalled Raspberry Pi OS, there is a quick alternative that speeds up this step.

Instead of using Raspberry Pi OS to update the EEPROM, you can use the Raspberry Pi 4 EEPROM boot recovery tool to get the latest version installed in less than 10 seconds.

There are two options to work with the recovery tool:

Read More »ESXi on Raspberry Pi - Quick way to update EEPROM

Playing Doom with VMware ESXi on Arm Fling running on Raspberry Pi

Yesterday I posted a screenshot on Twitter showing Doom running on top of the ESXi on ARM Fling, installed on a Raspberry Pi. I got a few requests to share instructions, so here is a quick article. It's not that complicated, as there isn't much of a difference compared to running Doom on standard Linux. However, I think it's a good way to get started with your first ARM64 VM running on top of ESXi.
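For the Linux part, assuming a Debian-based arm64 guest with a desktop session, a source port like Chocolate Doom does the job (doom1.wad is the shareware episode; any IWAD you own works):

sudo apt update
sudo apt install -y chocolate-doom
chocolate-doom -iwad doom1.wad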

Read More »Playing Doom with VMware ESXi on Arm Fling running on Raspberry Pi

iPerf3 for ESXi on ARM Fling

When you try to run iPerf3 on ESXi Arm Edition, the following message is displayed:

[root@esxipi1:~] /usr/lib/vmware/vsan/bin/iperf3
/usr/lib/vmware/vsan/bin/iperf3: line 2: syntax error: unexpected "("

The version of iPerf3 that ships with the Fling is not compiled for arm64. The solution is simple: just use an iPerf3 binary and library that are compiled for arm64. If you don't want to compile it yourself, feel free to take this:
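Once you have a prebuilt arm64 binary and its matching libiperf on the host, running it is straightforward (the /tmp path below is just an example):

[root@esxipi1:~] chmod +x /tmp/iperf3
[root@esxipi1:~] esxcli network firewall set --enabled false
[root@esxipi1:~] LD_LIBRARY_PATH=/tmp /tmp/iperf3 -s

Disabling the firewall is for quick testing only; re-enable it afterwards with --enabled true.

Read More »iPerf3 for ESXi on ARM Fling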

ESXi with USB NIC for vSAN and Storage - A Good Idea?

About a month ago, the USB Native Driver Fling added support for 2.5 Gbit network adapters. Until then, it was clear that the embedded network interface was faster and more stable compared to a 1 Gbit USB network adapter. I've never used a USB NIC for storage or management traffic.

With support for 2.5GBASE-T network adapters, I wondered if it is a good idea to use them for accessing shared storage or for vSAN traffic.

It is obvious that a 2.5 Gbit adapter has higher bandwidth, but due to the USB overhead, there should be a latency penalty. But how bad is it? To figure out the impact, I did some testing.
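This excerpt doesn't spell out the methodology, but a simple way to compare the two paths on ESXi is to measure round-trip latency per VMkernel interface and throughput with iperf3. The interface names and target IP below are placeholders, with vmk1 assumed to be bound to the USB NIC and vmk0 to the onboard NIC:

vmkping -I vmk1 -c 100 192.168.100.2
vmkping -I vmk0 -c 100 192.168.100.2
iperf3 -c 192.168.100.2 -t 30

Comparing the two ping summaries shows the USB latency penalty directly.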

Read More »ESXi with USB NIC for vSAN and Storage - A Good Idea?

11th Gen NUC - First Details on Intel's Tiger Canyon NUC

Details on the 11th generation of Intel's NUC have been revealed recently. Intel's NUC series is currently the most-used system in the homelab market. The NUCs are small, silent, portable, and have very low power consumption, making them a perfect system for labs or as a home server.

The 11th Gen is just around the corner and, compared to its predecessor (Frost Canyon), which did not bring outstanding innovations, it has a lot of cool new features.

Read More »11th Gen NUC - First Details on Intel's Tiger Canyon NUC

Black Screen when connecting a Monitor to Intel NUC running ESXi

When you use an Intel NUC or other consumer hardware to run ESXi and connect a monitor to access the DCUI console, you only see a black screen. If there was no monitor connected during the boot process, you can't access the screen later. The screen remains black, making troubleshooting impossible.

In homelabs, you usually do not have a monitor connected to all of your servers, but in some cases (when ESXi crashes or you need to reconfigure network settings) you want to connect a monitor to your system. A simple trick can help in that situation.

Read More »Black Screen when connecting a Monitor to Intel NUC running ESXi