DPDK patches and discussions
From: "Xu, Qian Q" <qian.q.xu@intel.com>
To: "Liu, Jijiang" <jijiang.liu@intel.com>, "dev@dpdk.org" <dev@dpdk.org>
Subject: Re: [dpdk-dev] [PATCH v5 0/4] add virtio offload support in us-vhost
Date: Fri, 13 Nov 2015 07:35:35 +0000	[thread overview]
Message-ID: <82F45D86ADE5454A95A89742C8D1410E0317BC80@shsmsx102.ccr.corp.intel.com>
In-Reply-To: <1447330026-16685-1-git-send-email-jijiang.liu@intel.com>

Tested-by: Qian Xu <qian.q.xu@intel.com>

- Test Commit: 6b6a94ee17d246a0078cc83257f522d0a6db5409
- OS/Kernel: Fedora 21/4.1.8
- GCC: gcc (GCC) 4.9.2 20141101 (Red Hat 4.9.2-1)
- CPU: Intel(R) Xeon(R) CPU E5-2699 v3 @ 2.30GHz
- NIC: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection (rev 01)
- Target: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection (rev 01)
- Total 2 cases, 2 passed. DPDK vhost + legacy virtio work well with NIC TSO offload and VM-to-VM iperf forwarding.

Test Case1: DPDK vhost user + virtio-net one VM fwd tso
=======================================================

HW preparation: Connect two ports directly. In our case, 81:00.0 (port1) and 81:00.1 (port2) are connected back to back. Port1 is bound to igb_uio for the vhost sample to use (a binding sketch follows), while port2 stays on its kernel driver.
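
For reference, a minimal binding sketch, assuming the DPDK 2.x tools layout (the script was later renamed to usertools/dpdk-devbind.py)::

    modprobe uio
    insmod ./x86_64-native-linuxapp-gcc/kmod/igb_uio.ko
    ./tools/dpdk_nic_bind.py --status                 # list NICs and their current drivers
    ./tools/dpdk_nic_bind.py --bind=igb_uio 81:00.0   # bind port1 to igb_uio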

SW preparation: Change one line of the vhost sample and rebuild (a rebuild sketch follows the snippet)::

    /* in function virtio_tx_route() */
    m->vlan_tci = vlan_tag;
    /* changed to */
    m->vlan_tci = 1000;
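
A minimal rebuild sketch, assuming the usual DPDK example build flow of that era (the RTE_SDK and RTE_TARGET values are placeholders)::

    export RTE_SDK=/path/to/dpdk
    export RTE_TARGET=x86_64-native-linuxapp-gcc
    make -C $RTE_SDK/examples/vhost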

1. Launch the vhost sample with the command below. --socket-mem must reserve memory on the socket where the PCI port is located; in our case the PCI BDF is 81:00.0, so we assign memory to socket 1. For the TSO/CSUM test, we need to set "--mergeable 1 --tso 1 --csum 1" (a hugepage setup sketch follows the command).::

    taskset -c 18-20 ./examples/vhost/build/vhost-switch -c 0x1c0000 -n 4 --huge-dir /mnt/huge --socket-mem 0,2048 -- -p 1 --mergeable 1 --zero-copy 0 --vm2vm 0 --tso 1 --csum 1
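
If socket 1 has no hugepages reserved yet, a setup sketch (assumes 2 MB hugepages; the sysfs path is the standard Linux per-NUMA-node hugepage control)::

    echo 2048 > /sys/devices/system/node/node1/hugepages/hugepages-2048kB/nr_hugepages
    mkdir -p /mnt/huge
    mount -t hugetlbfs nodev /mnt/huge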

2. Launch VM1::

    taskset -c 21-22 \
    qemu-system-x86_64 -name us-vhost-vm1 \
     -cpu host -enable-kvm -m 1024 -object memory-backend-file,id=mem,size=1024M,mem-path=/mnt/huge,share=on -numa node,memdev=mem -mem-prealloc \
     -smp cores=2,sockets=1 -drive file=/home/img/dpdk-vm1.img  \
     -chardev socket,id=char0,path=/home/qxu10/vhost-tso-test/dpdk/vhost-net -netdev type=vhost-user,id=mynet1,chardev=char0,vhostforce \
     -device virtio-net-pci,mac=52:54:00:00:00:01,netdev=mynet1,csum=on,gso=on,guest_csum=on,guest_tso4=on,guest_tso6=on,guest_ecn=on  \
     -netdev tap,id=ipvm1,ifname=tap3,script=/etc/qemu-ifup -device rtl8139,netdev=ipvm1,id=net0,mac=00:10:00:00:11:01 -nographic

3. On the host, configure port2; you will then see an interface called ens260f1.1000 (an iproute2 alternative follows the commands).::
   
    ifconfig ens260f1
    vconfig add ens260f1 1000
    ifconfig ens260f1.1000 1.1.1.8
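
Where vconfig is not available (it is deprecated on modern distributions), an equivalent iproute2 sketch::

    ip link add link ens260f1 name ens260f1.1000 type vlan id 1000
    ip addr add 1.1.1.8/24 dev ens260f1.1000
    ip link set ens260f1.1000 up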

4. On VM1, set the virtio IP and verify connectivity (an offload check sketch follows)::

    ifconfig ethX 1.1.1.2
    ping 1.1.1.8   # once virtio and port2 can ping each other, the ARP table is set up automatically
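
Optionally, confirm inside the guest that the offloads were negotiated; a sketch, assuming the virtio interface is ethX (ethtool output labels vary by version)::

    ethtool -k ethX | grep -i segmentation   # expect tcp-segmentation-offload: on
    ethtool -k ethX | grep -i checksum       # expect tx-checksumming: on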
    
5. On the host, run `iperf -s -i 1`; in the guest, run `iperf -c 1.1.1.8 -i 1 -t 60`, then check with tcpdump that the large TCP packets are 65160 bytes long (a capture sketch follows).
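
A capture sketch for the guest side; the interface name ethX and iperf's default port 5001 are assumptions::

    tcpdump -i ethX -nn -c 20 tcp port 5001   # with TSO on, expect ~64KB frames of length 65160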

6. The iperf throughput should be relatively stable at ~9.4 Gbits/s.

Test Case2: DPDK vhost user + virtio-net VM2VM=1 fwd tso
========================================================

HW preparation: No special setup needed. 

1. Launch the vhost sample with the command below. --socket-mem must reserve memory on the socket where the PCI port is located; in our case the PCI BDF is 81:00.0, so we assign memory to socket 1. For the TSO/CSUM test with VM-to-VM forwarding, we need to set "--mergeable 1 --tso 1 --csum 1 --vm2vm 1".::

    taskset -c 18-20 ./examples/vhost/build/vhost-switch -c 0x1c0000 -n 4 --huge-dir /mnt/huge --socket-mem 0,2048 -- -p 1 --mergeable 1 --zero-copy 0 --vm2vm 1 --tso 1 --csum 1

2. Launch VM1 and VM2. ::

    taskset -c 21-22 \
    qemu-system-x86_64 -name us-vhost-vm1 \
     -cpu host -enable-kvm -m 1024 -object memory-backend-file,id=mem,size=1024M,mem-path=/mnt/huge,share=on -numa node,memdev=mem -mem-prealloc \
     -smp cores=2,sockets=1 -drive file=/home/img/dpdk-vm1.img  \
     -chardev socket,id=char0,path=/home/qxu10/vhost-tso-test/dpdk/vhost-net -netdev type=vhost-user,id=mynet1,chardev=char0,vhostforce \
     -device virtio-net-pci,mac=52:54:00:00:00:01,netdev=mynet1,csum=on,gso=on,guest_csum=on,guest_tso4=on,guest_tso6=on,guest_ecn=on  \
     -netdev tap,id=ipvm1,ifname=tap3,script=/etc/qemu-ifup -device rtl8139,netdev=ipvm1,id=net0,mac=00:10:00:00:11:01 -nographic

    taskset -c 23-24 \
    qemu-system-x86_64 -name us-vhost-vm1 \
     -cpu host -enable-kvm -m 1024 -object memory-backend-file,id=mem,size=1024M,mem-path=/mnt/huge,share=on -numa node,memdev=mem -mem-prealloc \
     -smp cores=2,sockets=1 -drive file=/home/img/vm1.img  \
     -chardev socket,id=char1,path=/home/qxu10/vhost-tso-test/dpdk/vhost-net -netdev type=vhost-user,id=mynet2,chardev=char1,vhostforce \
     -device virtio-net-pci,mac=52:54:00:00:00:02,netdev=mynet2  \
     -netdev tap,id=ipvm1,ifname=tap4,script=/etc/qemu-ifup -device rtl8139,netdev=ipvm1,id=net0,mac=00:10:00:00:11:02 -nographic

3. On VM1, set the virtio IP and a static ARP entry::

    ifconfig ethX 1.1.1.2
    arp -s 1.1.1.8 52:54:00:00:00:02
    arp   # check that the ARP table is complete and correct

4. On VM2, set the virtio IP and a static ARP entry::

    ifconfig ethX 1.1.1.8
    arp -s 1.1.1.2 52:54:00:00:00:01
    arp   # check that the ARP table is complete and correct
 
5. Ensure virtio1 can ping virtio2. Then on VM1 run `iperf -s -i 1`; on VM2 run `iperf -c 1.1.1.2 -i 1 -t 60`, and check with tcpdump that the large TCP packets are 65160 bytes long.

Thanks
Qian


-----Original Message-----
From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Jijiang Liu
Sent: Thursday, November 12, 2015 8:07 PM
To: dev@dpdk.org
Subject: [dpdk-dev] [PATCH v5 0/4] add virtio offload support in us-vhost

Adds virtio offload support in us-vhost.
 
The patch set adds the feature negotiation of checksum and TSO between us-vhost and the vanilla Linux virtio guest, adds support for these offload features in the vhost lib, and changes the vhost sample to test them.
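
As a quick cross-check from a guest, the negotiated virtio feature bits can be read from sysfs; a sketch, assuming the device is virtio0 (the string is positional, e.g. bit 0 = VIRTIO_NET_F_CSUM, bits 11/12 = HOST_TSO4/6)::

    cat /sys/bus/virtio/devices/virtio0/features   # one character per feature bit, '1' = negotiated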

v5 changes:
  add clearer descriptions to explain these changes.
  reset the 'virtio_net_hdr' value in the virtio_enqueue_offload() function.
  reorganize patches.

v4 changes:
  remove the virtio-net change, keep only the vhost changes.
  add guest TX offload capabilities to support the VM-to-VM case.
  split the cleanup code into a separate patch.

v3 changes:
  rebase onto the latest code.

v2 changes:
  fill in virtio device information for TX offloads.

Jijiang Liu (4):
  add vhost offload capabilities
  remove ipv4_hdr structure from vhost sample.
  add guest offload setting in the vhost lib.
  change vhost application to test checksum and TSO for VM to NIC case

 examples/vhost/main.c         |  120 ++++++++++++++++++++++++++++-----
 lib/librte_vhost/vhost_rxtx.c |  150 ++++++++++++++++++++++++++++++++++++++++-
 lib/librte_vhost/virtio-net.c |    9 ++-
 3 files changed, 259 insertions(+), 20 deletions(-)

-- 
1.7.7.6
