Additional USB NIC for Intel NUCs running ESXi

Intel NUCs with ESXi are a proven standard for virtualization home labs. I’m currently running a homelab consisting of 3 Intel NUCs with FreeNAS-based all-flash storage. If you are generally interested in running ESXi on Intel NUCs, read this post first. One major drawback is that they only have a single Gigabit network adapter. This might be sufficient for a standalone ESXi host with a few VMs, but when you want to use shared storage or VMware NSX, you really want additional NICs.


A few months ago, this problem was solved by an unofficial driver made available by VMware engineer William Lam.


These drivers are intended for systems like the Intel NUC that do not have PCIe slots for additional network adapters. They are not officially supported by VMware. Do not install them in production.

The drivers are made for USB NICs with the AX88179 chipset, which are available for about $25. The following adapters have been verified to work:

  • Anker Uspeed USB 3.0 to 10/100/1000 Gigabit Ethernet LAN Network Adapter
  • StarTech USB 3.0 to Gigabit Ethernet NIC Adapter
  • j5create USB 3.0 to Gigabit Ethernet NIC Adapter
  • Vantec CB-U300GNA USB 3.0 Ethernet Adapter

Make sure that the system supports USB 3.0 and that the network adapters are mapped to the USB 3.0 hub. Legacy BIOS settings might prevent ESXi from correctly mapping devices, as explained here.
Verify the USB configuration with lsusb -tv:

# lsusb -tv
Bus# 2
`-Dev# 1 Vendor 0x1d6b Product 0x0003 Linux Foundation 3.0 root hub
  `-Dev# 2 Vendor 0x0b95 Product 0x1790 ASIX Electronics Corp. AX88179 Gigabit Ethernet



  1. Download the driver VIB from here.
  2. Upload the VIB to a datastore.
  3. Install the driver:
    # esxcli software vib install -v /vmfs/volumes/datastore/vghetto-ax88179-esxi60u2.vib -f
  4. Verify that the drivers have been loaded successfully:
    # esxcli network nic list
    Name    PCI Device    Driver        Admin Status  Link Status  Speed  Duplex  MAC Address         MTU  Description
    ------  ------------  ------------  ------------  -----------  -----  ------  -----------------  ----  -------------------------------------------------
    vmnic0  0000:00:19.0  e1000e        Up            Up            1000  Full    b8:ae:ed:75:08:68  1500  Intel Corporation Ethernet Connection (3) I218-LM
    vusb0   Pseudo        ax88179_178a  Up            Up            1000  Full    00:23:54:8c:43:45  1600  Unknown Unknown
    # esxcfg-nics -l
    Name    PCI          Driver      Link Speed     Duplex MAC Address       MTU    Description
    vmnic0  0000:00:19.0 e1000e      Up   1000Mbps  Full   b8:ae:ed:75:08:68 1500   Intel Corporation Ethernet Connection (3) I218-LM
    vusb0   Pseudo       ax88179_178aUp   1000Mbps  Full   00:23:54:8c:43:45 1600   Unknown Unknown
  5. Add the USB uplink to a Standard Switch or dvSwitch. You can do that with:
    – vSphere Web Client
    – VMware Host Client
    – Command Line

    # esxcli network vswitch standard uplink add -u vusb0 -v vSwitch0

    Please note that this does not work with the legacy vSphere Client, as USB network adapters are not visible when adding adapters to vSwitches with the C# Client. A more complete command-line example follows below.
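If the USB NIC is supposed to carry storage or vMotion traffic on a dedicated switch, the whole configuration can be done from the command line as well. The following is only a rough sketch: vSwitch1, the port group name “Storage”, vmk1 and the IP address are placeholders for this example, not values from my setup.

Create the vSwitch and attach the USB uplink:

# esxcli network vswitch standard add -v vSwitch1
# esxcli network vswitch standard uplink add -u vusb0 -v vSwitch1

Add a port group and a VMkernel interface with a static IP address:

# esxcli network vswitch standard portgroup add -p Storage -v vSwitch1
# esxcli network ip interface add -i vmk1 -p Storage
# esxcli network ip interface ipv4 set -i vmk1 -t static -I 192.168.100.10 -N 255.255.255.0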


ESXi Installation with USB NIC

It is possible to create a customized ESXi Image including the AX88179 driver. This might be useful if you want to install ESXi on a system without any compatible network adapter.

Creating a custom ESXi Image that includes the driver is very easy with ESXi-Customizer by Andreas Peetz.


I have tested the Anker and StarTech adapters on two different NUCs, a NUC5i5MYHE and the new NUC6i7KYK. The receiving end was my shared storage, which has an Intel quad-port Gigabit adapter connected to a Cisco C2960G switch. I’ve compared the performance with the NUC’s onboard NIC.

I’ve measured the latency in both directions. The performance of both adapters is nearly identical, and both are slightly slower than the onboard NIC, probably due to USB overhead. The results are not bad at all:
Onboard NIC min/avg/max: 0.168/0.222/0.289 ms
AX88179 NIC min/avg/max: 0.193/0.310/0.483 ms

To measure the bandwidth I’ve used iPerf, which is available on ESXi by default.
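In case you want to reproduce the test, the commands below are a sketch of how iPerf can be run between two ESXi hosts. I’m assuming the bundled binary lives under /usr/lib/vmware/vsan/bin/; the exact path and name (iperf vs. iperf3) can differ between ESXi releases, and the IP address refers to the placeholder vmkernel interface from the sketch above. Disabling the firewall is a lab-only shortcut to allow the iPerf traffic, so re-enable it afterwards.

On the receiving host:

# esxcli network firewall set --enabled false
# /usr/lib/vmware/vsan/bin/iperf -s

On the sending host:

# /usr/lib/vmware/vsan/bin/iperf -c 192.168.100.10 -t 30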

RX Performance Onboard NIC: 938 Mbits/sec
RX Performance Startech AX88179: 829 Mbits/sec
RX Performance Anker AX88179: 839 Mbits/sec

TX Performance Onboard NIC: 927 Mbits/sec
TX Performance Startech AX88179: 511 Mbits/sec
TX Performance Anker AX88179: 527 Mbits/sec

You can use multiple adapters to scale the performance. The NUC has 4 USB 3.0 ports, the first of which is used for the flash drive that ESXi boots from. I’ve tested the performance of 3 StarTech adapters:

RX Performance 3x Startech AX88179: 2565 Mbits/sec
TX Performance 3x Startech AX88179: 1522 Mbits/sec
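One way to use multiple adapters for the same traffic type is to add them all as active uplinks to the same vSwitch. As a sketch, again with vSwitch1 as a placeholder:

# esxcli network vswitch standard uplink add -u vusb1 -v vSwitch1
# esxcli network vswitch standard uplink add -u vusb2 -v vSwitch1
# esxcli network vswitch standard policy failover set -v vSwitch1 -a vusb0,vusb1,vusb2

Keep in mind that a single TCP stream still only uses one uplink, so aggregate figures like the above require multiple parallel streams or multiple VMkernel ports. Selecting the IP hash load balancing policy (-l iphash) additionally requires a matching EtherChannel on the physical switch.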

When 3 network adapters are not enough, where is the limit? USB 3.0 supports up to 5000 Mbit/s. I’ve connected all 3 network adapters to the same port with a USB 3.0 hub. Here are the results:

RX Performance 3x Startech AX88179: 1951 Mbits/sec
TX Performance 3x Startech AX88179: 1555 Mbits/sec


As you can see, the overall performance, especially TX performance (that is, sending data out of the NUC), cannot saturate the full bandwidth, but for a homelab the performance is sufficient and can be extended with multiple adapters. USB adapters can also be used with jumbo frames, so building an NSX lab is possible.
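For reference, enabling jumbo frames is just a matter of raising the MTU on the vSwitch and on the VMkernel port. This is a minimal sketch, again assuming the placeholder vSwitch1 and vmk1; whether the USB driver accepts an MTU of 9000 should be verified afterwards with esxcfg-nics -l:

# esxcli network vswitch standard set -v vSwitch1 -m 9000
# esxcli network ip interface set -i vmk1 -m 9000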

Comments

  1. Very nice, except that I got the driver running before William, and at full speed… But, I am not a VMware Engineer!

    Anyway, if you want a full driver for ASIX and Realtek based USB adapters, they are here
    Otherwise, just keep the slower speed drivers….

    Yep, I am bitter…

    • Thanks for your comment. Can you clarify what you mean by “just keep the slower speed drivers”?
      I’ve replaced William’s driver with your ASIX driver and the performance doesn’t change at all:

      RX 775 Mbits/sec
      TX 333 Mbits/sec

      Any ideas?

      • Hi Florian,

        First of all, let me apologise for my comment — I had one too many glasses of wine last night…

        As for the throughput not changing for you I am not sure. The one thing we noticed whilst developing the driver was that specific BIOS settings impacted the USB 3.0 throughput. On some NUCs (and other servers) there are 4 options for the USB 3.0 ports: “Smart”, “Smart Auto”, “Enabled” and “Disabled”. The setting must be explicitly set to “Enabled” for the port to operate properly.

        Other than the above, the only other time I noticed such bad speeds, it was related to an issue with changes to MAC addresses not being updated. This I described in my post about the driver…

        In my own tests (and those of other people who are using the driver) the figures achieved are, consistently:

        TX ~940 Mbits/sec
        RX ~894 Mbits/sec

        The other driver and adapters I use (Realtek) have just a tad faster RX figures; both TX and RX are the same at ~940 Mbits/sec

        • The USB 3.0 Smart Auto problem was with the 5th gen NUC. Devices are then connected to the USB 2.0 hub and are slow. See

          The Skull Canyon NUC comes with a “USB Legacy” setting which is disabled by default, so this should not be the problem. All devices are connected to the USB 3.0 hub.

          I’ve also checked the ARP problem you are describing but I can’t reproduce it. The vmk ports should get a random 00:50:56 MAC address, except the first one created during installation, and so they do.

          I’m trying to figure out what’s wrong…

          • After a “don’t-know-where-else-to-troubleshoot” reboot, performance is better:

            RX 785 Mbits/sec
            TX 892 Mbits/sec

          • Yes, the issue I mentioned is related to the 5th Gen NUC — I just didn’t know what settings are available on the Skull Canyon (don’t have one).

            Anyway, I can see there is a bit of an improvement after the reboot, but still much lower than I have seen. The main difference between William’s and my driver is that I have enabled TSO and SG, which improves TX tremendously. Your figures are still lower than I would have expected…

            The only other thing I can say is to check if it is something related to VLANs. Just saying that because both Glen Kemp and I were seeing odd results when sending traffic from one VLAN to another.

            Also, can you check if SG and TSO are indeed enabled? The command would be “ethtool -k vusb0” and the result should be something like:

            Offload parameters for vusb0:
            Cannot get device udp large send offload settings: Function not implemented
            Cannot get device generic segmentation offload settings: Function not implemented
            rx-checksumming: on
            tx-checksumming: on
            scatter-gather: on
            tcp segmentation offload: on
            udp fragmentation offload: off
            generic segmentation offload: off

  2. Could you expand on whether you’re using ESXi as a type-1 or type-2 hypervisor? I’m planning to use ESXi on my Intel NUC as a type-1 hypervisor, so I was wondering if the instructions on how to install the driver change slightly.

    It would be nice if you could write something up on doing this for a hypervisor type-1 (ESXi installed bare-metal).

    Thank you, great write up.

    • It’s for the bare-metal hypervisor (aka type 1), but it shouldn’t make a difference. You can also use the driver when you have a virtual ESXi and use USB passthrough. (Not sure why anyone would want to use a physical USB NIC in a virtual ESXi, but it should work.)
