2. Getting started with VirtIO on host

2.1. OCTEON vDPA driver

The OCTEON vDPA driver (octep_vdpa.ko) manages the virtio control plane over the vDPA bus for OCTEON devices.
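
After the driver is loaded (see the steps below), the vdpa tool from iproute2 can be used to confirm that it has registered management devices on the vDPA bus:

vdpa mgmtdev show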

2.2. Setting up Host environment

2.2.1. Host requirements

The host needs a Linux kernel version >= 6.5 (for example, the latest Ubuntu release ships a 6.5 kernel). The IOMMU must be enabled if VFs are to be used with a guest (on x86, boot with intel_iommu=on).
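
A quick sanity check of both requirements (the exact dmesg strings vary by platform and IOMMU type):

uname -r                                 # expect 6.5 or newer
grep -o 'intel_iommu=on' /proc/cmdline   # x86 hosts only
dmesg | grep -i -e DMAR -e IOMMU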

2.2.2. Host kernel patches to enable DAO on v6.1 kernel

Data Accelerator Offload (DAO) requires additional patches for compatibility with the v6.1 kernel. These patches have been back-ported and can be found in the DAO source directory under patches/kernel/v6.1/vdpa/. They are also part of the Marvell SDK kernel.

2.2.2.1. Steps to apply patches and cross-compile kernel

  • Check out the v6.1 vanilla kernel and apply the patches.

# git am DAO_SRC_DIR/patches/kernel/v6.1/vdpa/000*
  • Prepare build

# make ARCH=arm64 CROSS_COMPILE=aarch64-marvell-linux-gnu- O=build olddefconfig

Make sure the VDPA config options below are enabled in the build/.config file; if any are missing, see the sketch after the build step.

CONFIG_VIRTIO_VDPA=m
CONFIG_VDPA=m
CONFIG_VP_VDPA=m
CONFIG_VHOST_VDPA=m
  • Build kernel

# make ARCH=arm64 CROSS_COMPILE=aarch64-marvell-linux-gnu- O=build
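
If any of the options above are missing, they can be enabled non-interactively with the kernel's scripts/config helper before re-running olddefconfig; a minimal sketch, run from the kernel source tree:

./scripts/config --file build/.config -m VIRTIO_VDPA -m VDPA -m VP_VDPA -m VHOST_VDPA
make ARCH=arm64 CROSS_COMPILE=aarch64-marvell-linux-gnu- O=build olddefconfig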

2.2.3. Build KMOD specifically for Host with native compilation (for example, x86)

Make sure the following configs are enabled in the $kernel_dir/.config file for the vDPA framework.

CONFIG_VDPA=y
CONFIG_VHOST_IOTLB=y
CONFIG_VHOST=y
CONFIG_VHOST_VDPA=y
CONFIG_VIRTIO_VDPA=y

If the ‘kernel_dir’ option is not provided, /lib/modules/$(uname -r)/source is used as the kernel source.

git clone https://github.com/MarvellEmbeddedProcessors/dao.git
cd dao
git checkout dao-devel

meson build
ninja -C build

To compile the modules against specific kernel sources, the kernel_dir option should be set:

meson build -Dkernel_dir=KERNEL_BUILD_DIR
ninja -C build

2.2.4. Bind PEM PF and VF to Host Octeon VDPA driver

On the host, bind the PF and VF devices exposed by CN10K to the octep_vdpa driver, and then bind the resulting vDPA devices to vhost_vdpa so they are available to DPDK or a guest.

modprobe vfio-pci
modprobe vdpa
modprobe vhost-vdpa

insmod octep_vdpa.ko

# Find the PEM PF (device ID b900) and cap the requested VF count at sriov_totalvfs
HOST_PF=$(lspci -Dn -d :b900 | head -1 | cut -f 1 -d " ")
VF_CNT=1
VF_CNT_MAX=$(cat /sys/bus/pci/devices/$HOST_PF/sriov_totalvfs)
VF_CNT=$((VF_CNT > VF_CNT_MAX ? VF_CNT_MAX : VF_CNT))

# Rebind the PF to octep_vdpa and create the VFs
echo $HOST_PF > /sys/bus/pci/devices/$HOST_PF/driver/unbind
echo octep_vdpa > /sys/bus/pci/devices/$HOST_PF/driver_override
echo $HOST_PF > /sys/bus/pci/drivers_probe
echo $VF_CNT > /sys/bus/pci/devices/$HOST_PF/sriov_numvfs

sleep 1
# Get the list of management devices
mgmt_devices=$(vdpa mgmtdev show | awk '/pci\/0000:/{print $1}' | sed 's/:$//')
for mgmtdev in $mgmt_devices
do
    vdpa_name="vdpa${mgmtdev##*/}"
    vdpa dev add name "$vdpa_name" mgmtdev "$mgmtdev"
    sleep 1
done

SDP_VFS=$(lspci -Dn -d :b903 | cut -f 1 -d " ")
for dev in $SDP_VFS
do
    vdev=$(ls /sys/bus/pci/devices/$dev | grep vdpa)
    while [[ "$vdev" == "" ]]
    do
        echo "Waiting for vdpa device for $dev"
        sleep 1
        vdev=$(ls /sys/bus/pci/devices/$dev | grep vdpa)
    done
    # Rebind the vdpa device from virtio_vdpa (if bound) to vhost_vdpa
    echo $vdev > /sys/bus/vdpa/drivers/virtio_vdpa/unbind
    echo $vdev > /sys/bus/vdpa/drivers/vhost_vdpa/bind
done
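
At this point each VF should be exposed as a /dev/vhost-vdpa-* character device; a quick check with the vdpa tool from iproute2:

vdpa dev show
ls /dev/vhost-vdpa-*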

2.2.5. Tune MRRS and MPS of PEM PF/VF on Host for performance

Tune the Max Read Request Size (MRRS) and Max Payload Size (MPS) of the PEM PF/VF on the host to increase virtio performance. The example below assumes the PEM PF and its bridge device are seen on the host as 0003:01:00.0 and 0003:00:00.0 respectively.

setpci -s 0003:00:00.0 78.w=$(printf %x $((0x$(setpci -s 0003:00:00.0 78.w)|0x20)))
setpci -s 0003:01:00.0 78.w=$(printf %x $((0x$(setpci -s 0003:01:00.0 78.w)|0x20)))
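
The offset 0x78 used above is assumed to be the PCIe Device Control register of these devices; the effective values can be confirmed with lspci:

lspci -s 0003:00:00.0 -vv | grep -E 'MaxPayload|MaxReadReq'
lspci -s 0003:01:00.0 -vv | grep -E 'MaxPayload|MaxReadReq'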

2.3. Running DPDK testpmd on Host virtio device

2.3.1. Setup huge pages for DPDK application

Sufficient hugepages must be enabled for the DPDK application to run.
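
A minimal sketch, assuming 2 MB hugepages and a hugetlbfs mount at /dev/hugepages (adjust the count to the application's needs):

echo 1024 > /proc/sys/vm/nr_hugepages
mkdir -p /dev/hugepages
mountpoint -q /dev/hugepages || mount -t hugetlbfs nodev /dev/hugepages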

2.3.2. Increase ulimit for ‘max locked memory’ to unlimited

The DPDK application needs to be able to lock the memory that is DMA-mapped on the host, so raise the locked-memory ulimit to unlimited.

ulimit -l unlimited

2.3.3. Example command for DPDK testpmd on host with vhost-vdpa device

Below is an example of launching the dpdk-testpmd application on the host using a vhost-vdpa device.

./dpdk-testpmd -c 0xfff000 --socket-mem 1024 --proc-type auto --file-prefix=virtio-user0 --no-pci --vdev=net_virtio_user0,path=/dev/vhost-vdpa-0,mrg_rxbuf=1,packed_vq=1,in_order=1,queue_size=4096 -- -i --txq=1 --rxq=1 --nb-cores=1 --portmask 0x1 --port-topology=loop
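
Once the interactive prompt appears, traffic can be started and inspected with the usual testpmd commands, for example:

testpmd> start tx_first
testpmd> show port stats 0
testpmd> stop
testpmd> quit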

2.4. Running DPDK testpmd on virtio-net device on guest

2.4.1. Host requirements for running Guest

Follow the Setting up Host environment section as a first step.

2.4.1.1. Build Qemu

wget https://download.qemu.org/qemu-8.1.1.tar.xz
tar xvJf qemu-8.1.1.tar.xz
cd qemu-8.1.1
# Apply the above mentioned patches
./configure
make
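
A quick check that the build produced a working binary:

./build/qemu-system-aarch64 --version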

2.4.1.2. Prepare the Ubuntu cloud image for guest

Below is an example of preparing an Ubuntu cloud image for an ARM guest.

wget https://cloud-images.ubuntu.com/mantic/current/mantic-server-cloudimg-arm64.img
virt-customize -a mantic-server-cloudimg-arm64.img --root-password password:a
mkdir mnt_img

cat mount_img.sh
#!/bin/bash
modprobe nbd max_part=8
qemu-nbd --connect=/dev/nbd0 $1
sleep 2
fdisk /dev/nbd0 -l
mount /dev/nbd0p1 mnt_img

# Copy required files to mnt_img/root for example dpdk-testpmd and user tools from dpdk
cat unmount_img.sh
#!/bin/bash
umount mnt_img
qemu-nbd --disconnect /dev/nbd0
#rmmod nbd
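
Example usage of the two helper scripts (the files copied into the image are placeholders for whatever the guest needs, e.g. dpdk-testpmd and the DPDK usertools):

./mount_img.sh mantic-server-cloudimg-arm64.img
cp dpdk-testpmd mnt_img/root/
./unmount_img.sh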

2.4.2. Launch guest using Qemu

ulimit -l unlimited
cd qemu-8.1.1
./build/qemu-system-aarch64  -hda /home/cavium/ws/mantic-server-cloudimg-arm64_vm1.img -name vm1 \
-netdev type=vhost-vdpa,vhostdev=/dev/vhost-vdpa-0,id=vhost-vdpa1 -device \
virtio-net-pci,netdev=vhost-vdpa1,disable-modern=off,page-per-vq=on,packed=on,mrg_rxbuf=on,mq=on,rss=on,rx_queue_size=1024,tx_queue_size=1024,disable-legacy=on -enable-kvm -nographic -m 2G -cpu host -smp 3 -machine virt,gic_version=3 -bios /usr/share/qemu-efi-aarch64/QEMU_EFI.fd

2.4.3. Launch dpdk-testpmd on guest

The code block below shows how to bind the device to vfio-pci for use with DPDK testpmd in the guest.

modprobe vfio-pci
echo 1 > /sys/module/vfio/parameters/enable_unsafe_noiommu_mode
# On 106xx, $VIRTIO_NETDEV_BDF typically appears as 0000:00:01.0
./usertools/dpdk-devbind.py -b vfio-pci $VIRTIO_NETDEV_BDF
echo 256 > /proc/sys/vm/nr_hugepages
./dpdk-testpmd -c 0x3 -a $VIRTIO_NETDEV_BDF -- -i --nb-cores=1 --txq=1 --rxq=1

2.5. Using VDPA device as Kernel virtio-net device on guest
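
When the guest is launched as described in Launch guest using Qemu and the device is left unbound from vfio-pci, the guest kernel's virtio_net driver claims it automatically. A minimal sketch for bringing the interface up inside the guest (the interface name enp0s1 and the address are assumptions; adjust to the guest environment):

ip link show
ip addr add 192.168.1.2/24 dev enp0s1
ip link set enp0s1 up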

2.6. Using VDPA device as Kernel virtio-net device on host

Run the code block below to create a virtio device on the host for each VF using virtio_vdpa.

modprobe vfio-pci
modprobe vdpa
insmod octep_vdpa.ko
HOST_PF=$(lspci -Dn -d :b900 | head -1 | cut -f 1 -d " ")
VF_CNT=1
VF_CNT_MAX=$(cat /sys/bus/pci/devices/$HOST_PF/sriov_totalvfs)
VF_CNT=$((VF_CNT > VF_CNT_MAX ? VF_CNT_MAX : VF_CNT))

echo $HOST_PF > /sys/bus/pci/devices/$HOST_PF/driver/unbind
echo octep_vdpa > /sys/bus/pci/devices/$HOST_PF/driver_override
echo $HOST_PF > /sys/bus/pci/drivers_probe
echo $VF_CNT > /sys/bus/pci/devices/$HOST_PF/sriov_numvfs

modprobe virtio_vdpa
modprobe virtio_net
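
Each VF's vdpa device should then be bound to virtio_vdpa and appear as a virtio network interface on the host. A quick check (if no vdpa devices appear, create them with 'vdpa dev add' as shown in the binding section above):

ls /sys/bus/vdpa/drivers/virtio_vdpa/
dmesg | grep -i virtio_net
ip -br link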