DPDK usage discussions
From: Deepak Mohanty <dmohanty@gmail.com>
To: users@dpdk.org
Subject: [dpdk-users] IPv4 Checksum Offload with igb_uio/uio on guest VM and KVM/Qemu virtio-net-pci on host
Date: Tue, 13 Feb 2018 22:08:39 +0000
Message-ID: <CAOeA1K=wmZ-OTjteRNQQRKpNQBqeoEv0WZO-pqdW74uHBd==WQ@mail.gmail.com>

Hi All,

I am trying out the L2FWD and L3FWD sample applications on KVM guests.

This is my setup:

Host:
# uname -a
Linux scale01 4.13.0-32-generic #35-Ubuntu SMP Thu Jan 25 09:13:46 UTC 2018
x86_64 x86_64 x86_64 GNU/Linux

# lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description:    Ubuntu 17.10
Release:        17.10
Codename:       artful

VM deployment (u for ubuntu):

u01-- u02 -- u03

u02 is the L2/L3 forwarder. Here is the VM instantiation command line for
u02:
qemu-system-x86_64 -enable-kvm -name guest=u02,debug-threads=on -S \
  -object secret,id=masterKey0,format=raw,file=/var/lib/libvirt/qemu/domain-26-u02/master-key.aes \
  -machine pc-i440fx-artful,accel=kvm,usb=off,dump-guest-core=off \
  -cpu host -m 16384 -realtime mlock=off \
  -smp 8,sockets=1,cores=8,threads=1 \
  -object memory-backend-file,id=ram-node0,prealloc=yes,mem-path=/dev/hugepages/libvirt/qemu/26-u02,share=yes,size=17179869184,host-nodes=1,policy=bind \
  -numa node,nodeid=0,cpus=0-7,memdev=ram-node0 \
  -uuid de14a442-a595-4e96-b150-f284a90fb84a \
  -no-user-config -nodefaults \
  -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/domain-26-u02/monitor.sock,server,nowait \
  -mon chardev=charmonitor,id=monitor,mode=control \
  -rtc base=utc,driftfix=slew \
  -global kvm-pit.lost_tick_policy=delay -no-hpet -no-shutdown \
  -global PIIX4_PM.disable_s3=1 -global PIIX4_PM.disable_s4=1 \
  -boot strict=on \
  -device ich9-usb-ehci1,id=usb,bus=pci.0,addr=0x4.0x7 \
  -device ich9-usb-uhci1,masterbus=usb.0,firstport=0,bus=pci.0,multifunction=on,addr=0x4 \
  -device ich9-usb-uhci2,masterbus=usb.0,firstport=2,bus=pci.0,addr=0x4.0x1 \
  -device ich9-usb-uhci3,masterbus=usb.0,firstport=4,bus=pci.0,addr=0x4.0x2 \
  -drive file=/data/ssd0/u02.qcow2,format=qcow2,if=none,id=drive-virtio-disk0 \
  -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x5,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1 \
  -drive if=none,id=drive-ide0-0-0,readonly=on \
  -device ide-cd,bus=ide.0,unit=0,drive=drive-ide0-0-0,id=ide0-0-0 \
  -netdev tap,fds=25:29:30:31,id=hostnet0,vhost=on,vhostfds=32:33:34:35 \
  -device virtio-net-pci,mrg_rxbuf=on,mq=on,vectors=10,netdev=hostnet0,id=net0,mac=52:54:00:7e:b7:d6,bus=pci.0,addr=0x7 \
  -netdev tap,fds=36:37:38:39,id=hostnet1,vhost=on,vhostfds=40:41:42:43 \
  -device virtio-net-pci,mrg_rxbuf=on,mq=on,vectors=10,netdev=hostnet1,id=net1,mac=52:54:00:7e:b7:d7,bus=pci.0,addr=0x8 \
  -netdev tap,fds=44:45:46:47,id=hostnet2,vhost=on,vhostfds=48:49:50:51 \
  -device virtio-net-pci,mrg_rxbuf=on,mq=on,vectors=10,netdev=hostnet2,id=net2,mac=52:54:00:7e:b7:d5,bus=pci.0,addr=0x9 \
  -chardev pty,id=charserial0 \
  -device isa-serial,chardev=charserial0,id=serial0 \
  -device usb-tablet,id=input0,bus=usb.0,port=1 \
  -vnc 0.0.0.0:0 \
  -device cirrus-vga,id=video0,bus=pci.0,addr=0x2 \
  -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x6 \
  -msg timestamp=on

Linux bridges are used for the networks. Here is the output of brctl show on the
host:

# brctl show
bridge name     bridge id               STP enabled     interfaces
br-cli          8000.52540072392f       no              br-cli-nic
                                                        vnet0
                                                        vnet4
br-srv          8000.5254006d0037       no              br-srv-nic
                                                        vnet1
                                                        vnet6
virbr0          8000.525400192db5       yes             virbr0-nic
                                                        vnet2
                                                        vnet3
                                                        vnet5

When I use Linux bridging / routing on u02 instead of DPDK (with u02 acting as
the L2 or L3 forwarder), I get about 20 Gbps of unidirectional iperf throughput
in both L2 and L3 modes. With DPDK I get only about 3 Gbps, and to get DPDK to
work at all I had to turn off checksum offloading.
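
For reference, this is roughly the port-configuration logic I have in mind: only
enable the IPv4 Rx checksum offload when the PMD bound through igb_uio actually
advertises it. This is just a sketch on my part, not code from the samples; the
field and flag names follow the DPDK 17.x API, and configure_port_with_cksum is
a hypothetical helper name:

#include <rte_ethdev.h>

/* Hypothetical helper (not from the samples): enable IPv4 Rx checksum
 * offload only when the PMD advertises it. Names follow the DPDK 17.x
 * API and may differ in other releases. */
static int
configure_port_with_cksum(uint16_t port_id, uint16_t nb_rxq, uint16_t nb_txq,
                          struct rte_eth_conf *port_conf)
{
        struct rte_eth_dev_info dev_info;

        rte_eth_dev_info_get(port_id, &dev_info);

        if (dev_info.rx_offload_capa & DEV_RX_OFFLOAD_IPV4_CKSUM)
                port_conf->rxmode.hw_ip_checksum = 1;
        else
                port_conf->rxmode.hw_ip_checksum = 0;  /* virtio PMD may not report it */

        /* The Tx side is per-mbuf rather than per-port: packets are marked
         * with PKT_TX_IP_CKSUM | PKT_TX_IPV4 only when
         * DEV_TX_OFFLOAD_IPV4_CKSUM appears in dev_info.tx_offload_capa. */

        return rte_eth_dev_configure(port_id, nb_rxq, nb_txq, port_conf);
}

With this kind of capability check the application at least degrades gracefully
when the virtio PMD does not expose the offload, but what I really want is for
the offload to be available in the first place.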

I have the following questions:

1. How can I use IPv4 checksum offload with igb_uio? Since the stock guest
Linux driver (virtio_net) can do this, it seems I only need to make changes on
the guest (see the Tx sketch after these questions for what I have in mind).
2. I am unable to use more than one CPU with igb_uio. Do I need to make
some configuration change?
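
Regarding question 1, this is the Tx-side handling I am assuming: use the
hardware offload when the port advertises it, otherwise compute the checksum in
software with rte_ipv4_cksum(). Again only a sketch using the DPDK 17.x names
(struct ipv4_hdr, PKT_TX_* flags); prepare_ipv4_tx is a made-up helper:

#include <rte_ether.h>
#include <rte_ethdev.h>
#include <rte_ip.h>
#include <rte_mbuf.h>

/* Hypothetical per-packet Tx helper: request hardware IPv4 checksum offload
 * when available, otherwise fall back to a software checksum. */
static void
prepare_ipv4_tx(struct rte_mbuf *m, int hw_ipv4_cksum)
{
        struct ipv4_hdr *ip = rte_pktmbuf_mtod_offset(m, struct ipv4_hdr *,
                                                      sizeof(struct ether_hdr));

        ip->hdr_checksum = 0;

        if (hw_ipv4_cksum) {
                /* Hardware path: set header offsets and offload flags. */
                m->l2_len = sizeof(struct ether_hdr);
                m->l3_len = (ip->version_ihl & 0x0f) * 4;
                m->ol_flags |= PKT_TX_IPV4 | PKT_TX_IP_CKSUM;
        } else {
                /* Software fallback when the virtio PMD offers no offload. */
                ip->hdr_checksum = rte_ipv4_cksum(ip);
        }
}

At init time, hw_ipv4_cksum would be derived from dev_info.tx_offload_capa as
in the earlier sketch.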

Please note that I do not see these issues with ESXi / VMXNet3 virtual NICs;
the problem occurs only with virtio on KVM. It is not possible for me to use
PCI pass-through / SR-IOV at this stage of our development, so I need this to
work on emulated NICs.

Regards,
Deepak Mohanty
