Virtlet networking principles
Virtlet currently uses CNI-based networking, which means you must use --network-plugin=cni for kubelet on Virtlet nodes. Virtlet should work with any CNI plugin that returns a correct result according to the CNI protocol specification. The CNI spec version in use must be at least 0.3.0.
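As a minimal sketch, kubelet on a Virtlet node might be started with flags like the following; the CNI directory paths are the conventional defaults, not Virtlet-specific requirements:

```bash
# Hypothetical kubelet invocation for a Virtlet node; the CNI
# directory paths below are the common defaults and may differ
# in your deployment.
kubelet \
  --network-plugin=cni \
  --cni-conf-dir=/etc/cni/net.d \
  --cni-bin-dir=/opt/cni/bin
```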
For each veth inside the network namespace, Virtlet sets up a TAP interface that's passed to the hypervisor process (QEMU) and a bridge that connects the veth with the TAP interface. The bridge is also used by Virtlet's internal DHCP server, which passes the IP configuration to the VM.
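As an illustrative way to see this layout from the host (assuming the pod's network namespace is visible to ip netns; the namespace name is a placeholder):

```bash
# Inspect the links inside a pod's network namespace; <pod-netns>
# is a placeholder. Expect to see the CNI-created veth, the bridge,
# and the TAP interface handed to QEMU.
ip netns exec <pod-netns> ip link show
```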
Each SR-IOV device available in the network namespace is removed from host visibility and passed to the hypervisor via the PCI host passthrough mechanism. Any VLAN IDs set up by the plugin still apply.
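As a rough illustration of the mechanism (not the exact definition Virtlet generates), PCI host passthrough in a libvirt domain is expressed with a hostdev element; the PCI address below is a placeholder:

```xml
<!-- Illustrative libvirt hostdev fragment; the address is a
     placeholder for the VF's actual PCI address on the host. -->
<hostdev mode='subsystem' type='pci' managed='yes'>
  <source>
    <address domain='0x0000' bus='0x05' slot='0x10' function='0x1'/>
  </source>
</hostdev>
```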
Supported CNI implementations
Virtlet is verified to work correctly with the following CNI implementations:
Virtlet may or may not work with CNI implementations that aren't listed here.
DHCP-based VM network configuration
The network namespace configuration provided by the CNI plugin(s) in use is passed to the VM using Virtlet's internal DHCP server. The DHCP server is started for each CNI-configured network interface, except for SR-IOV interfaces. The DHCP server can only talk to the VM and is not accessible from the cluster network.
The DHCP server isn't used for SR-IOV devices, as they're passed directly to the VM while their links are removed from the host.
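Inside the guest, nothing Virtlet-specific is needed; the interface only has to be configured for DHCP. For example, a netplan-based guest image might ship a configuration like the following (an illustrative guest-side config, not something Virtlet injects; the interface name is a placeholder):

```yaml
# Illustrative guest netplan config; "eth0" is a placeholder.
# Virtlet's internal DHCP server answers the guest's requests.
network:
  version: 2
  ethernets:
    eth0:
      dhcp4: true
```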
Configuring using Cloud-Init
Besides using the DHCP server, Virtlet can also pass the network configuration as part of the cloud-init data. This is the preferred way of setting up SR-IOV devices. The network configuration is added to the cloud-init user-data using the Network Config Version 1 format.
Note: cloud-init network configuration is currently not supported for persistent rootfs.
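For reference, a minimal Network Config Version 1 document looks like the following; the interface name, MAC address, and addresses are placeholders (Virtlet generates the real values from the CNI result):

```yaml
# Illustrative cloud-init Network Config Version 1 snippet;
# all values are placeholders.
version: 1
config:
  - type: physical
    name: eth0
    mac_address: "52:54:00:12:34:56"
    subnets:
      - type: static
        address: 10.244.1.5/24
        gateway: 10.244.1.1
  - type: nameserver
    address: [10.96.0.10]
```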
Multiple network interfaces
Virtlet allows configuring multiple network interfaces for a VM when all of them are properly described in the CNI result. The supported way to achieve this with CNI plugins is to combine the outputs of a plugin chain using CNI-Genie.
Before you proceed, please read the CNI Genie documentation. There are two ways to tell CNI Genie which CNI networks to use (a pod annotation example follows the list):
- by setting cni: <plugins list> in the pod annotation
- by setting the default_plugin option in the CNI Genie configuration file
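As a minimal sketch of the first option, the annotation might look like this; the pod name, image, and plugin list are placeholders for your deployment:

```yaml
# Illustrative pod requesting two networks via the CNI Genie
# "cni" annotation; the plugin names are placeholders.
apiVersion: v1
kind: Pod
metadata:
  name: multi-net-vm
  annotations:
    cni: "calico,flannel"
spec:
  containers:
    - name: vm
      image: busybox
```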
Note: when using the Calico plugin, you must specify it as the first one in the plugin chain or, alternatively, disable the default route setup for other plugins that precede Calico. This is a Calico-specific limitation.
CNI plugins are expected to use version 0.3.0 of the CNI result spec or later. Each CNI config must include the cniVersion field, with the minimum version being 0.3.0.
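As an illustrative example (the network name, bridge name, and subnet are placeholders, and the bridge plugin is just one possible choice), a CNI config carrying the required field could look like:

```json
{
  "cniVersion": "0.3.0",
  "name": "mynet",
  "type": "bridge",
  "bridge": "cni0",
  "isGateway": true,
  "ipMasq": true,
  "ipam": {
    "type": "host-local",
    "subnet": "10.244.0.0/16"
  }
}
```

Every value above is illustrative; the essential part is the cniVersion field.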
Please refer to the detailed documentation, which contains an example of configuration files for CNI Genie with Calico used for the primary interface and Flannel for the secondary one.
Any SR-IOV devices contained in the CNI result are passed to the VM using PCI host passthrough. The hardware configuration set up by the CNI plugin (MAC address, VLAN tag) is preserved by Virtlet. If a VLAN ID is set, it's configured on the host side, so it can't be changed from within the VM to gain unauthorised network access.
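For illustration, this host-side enforcement is the kind of configuration an SR-IOV plugin performs with iproute2; the device name, VF index, VLAN ID, and MAC address are placeholders:

```bash
# Illustrative host-side SR-IOV VF setup; all names and IDs are
# placeholders. Because the VLAN is programmed on the PF, the guest
# cannot override it from inside the VM.
ip link set enp5s0f0 vf 1 vlan 100
ip link set enp5s0f0 vf 1 mac 52:54:00:ab:cd:ef
```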