From mboxrd@z Thu Jan 1 00:00:00 1970
From: "Tu, Lijuan"
To: "Wang, Yinan", "dts@dpdk.org"
CC: "Wang, Yinan"
Date: Wed, 22 May 2019 08:41:16 +0000
Message-ID: <8CE3E05A3F976642AAB0F4675D0AD20E0BA841E3@SHSMSX101.ccr.corp.intel.com>
References: <20190515014330.52665-1-yinan.wang@intel.com>
In-Reply-To: <20190515014330.52665-1-yinan.wang@intel.com>
Subject: Re: [dts] [PATCH v1] test_plans/vhost_dequeue_zero_copy: add test
 plan for vhost dequeue zero copy test
List-Id: test suite reviews and discussions
Errors-To: dts-bounces@dpdk.org
Sender: "dts"

Applied, thanks

> -----Original Message-----
> From: dts [mailto:dts-bounces@dpdk.org] On Behalf Of Yinan
> Sent: Wednesday, May 15, 2019 9:44 AM
> To: dts@dpdk.org
> Cc: Wang, Yinan
> Subject: [dts] [PATCH v1] test_plans/vhost_dequeue_zero_copy: add test
> plan for vhost dequeue zero copy test
>
> From: Wang Yinan
>
> Signed-off-by: Wang Yinan
> ---
>  .../vhost_dequeue_zero_copy_test_plan.rst | 352 ++++++++++++++++++
>  1 file changed, 352 insertions(+)
>  create mode 100644 test_plans/vhost_dequeue_zero_copy_test_plan.rst
>
> diff --git a/test_plans/vhost_dequeue_zero_copy_test_plan.rst
> b/test_plans/vhost_dequeue_zero_copy_test_plan.rst
> new file mode 100644
> index 0000000..3ec5e53
> --- /dev/null
> +++ b/test_plans/vhost_dequeue_zero_copy_test_plan.rst
> @@ -0,0 +1,352 @@
> +.. Copyright (c) <2019>, Intel Corporation
> +   All rights reserved.
> +
> +   Redistribution and use in source and binary forms, with or without
> +   modification, are permitted provided that the following conditions
> +   are met:
> +
> +   - Redistributions of source code must retain the above copyright
> +     notice, this list of conditions and the following disclaimer.
> +
> +   - Redistributions in binary form must reproduce the above copyright
> +     notice, this list of conditions and the following disclaimer in
> +     the documentation and/or other materials provided with the
> +     distribution.
> +
> +   - Neither the name of Intel Corporation nor the names of its
> +     contributors may be used to endorse or promote products derived
> +     from this software without specific prior written permission.
> +
> +   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
> +   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
> +   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
> +   FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE
> +   COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
> +   INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
> +   (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
> +   SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
> +   HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
> +   STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
> +   ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED
> +   OF THE POSSIBILITY OF SUCH DAMAGE.
> +
> +=================================
> +vhost dequeue zero-copy test plan
> +=================================
> +
> +Description
> +===========
> +
> +Vhost dequeue zero-copy is a performance optimization for vhost: the copy
> +in the dequeue path is avoided in order to improve performance. There are
> +three topology tests (PVP/VM2VM/VM2NIC) for this feature; the automated
> +cases for the different topologies are in three different test suites.
> +
> +1. In the PVP case, when the packet size is 1518B, a 10G NIC could be the
> +   performance bottleneck, so we use a 40G traffic generator and a 40G NIC.
> +   Also, since vhost zero-copy mbufs should be consumed as soon as possible,
> +   do not start sending packets at the vhost side before the VM and
> +   virtio-pmd are launched.
> +2. In the VM2VM case, the boost is quite impressive. The bigger the packet
> +   size, the bigger the performance boost you may get.
> +3. In the VM2NIC case, there are some limitations, so the boost is not as
> +   impressive as in the VM2VM case; it may even drop quite a bit for small
> +   packets. For that reason, this feature is disabled by default; it can be
> +   enabled by setting the RTE_VHOST_USER_DEQUEUE_ZERO_COPY flag (see the
> +   sketch after this list).
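> +
> +For background, the ``dequeue-zero-copy=1`` vdev argument used throughout
> +this plan maps to the vhost library flag above inside the vhost PMD. Below
> +is a minimal sketch of enabling it directly through the vhost library API
> +(illustration only, not one of the test steps; the socket path is a
> +placeholder)::
> +
> +    #include <rte_vhost.h>
> +
> +    static int
> +    register_zero_copy_vhost(void)
> +    {
> +        const char *path = "/tmp/vhost-net";   /* placeholder socket path */
> +
> +        /* create the vhost-user socket with dequeue zero-copy enabled */
> +        if (rte_vhost_driver_register(path,
> +                RTE_VHOST_USER_DEQUEUE_ZERO_COPY) < 0)
> +            return -1;
> +
> +        /* start listening for a frontend (e.g. QEMU) connection */
> +        return rte_vhost_driver_start(path);
> +    }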
> +
> +Test Case 1: pvp dequeue zero-copy test with different packet sizes
> +===================================================================
> +Test topology: TG --> NIC --> Vhost --> Virtio --> Vhost --> NIC --> TG
> +
> +1. Bind one 40G port to igb_uio, then launch testpmd by below command::
> +
> +    rm -rf vhost-net*
> +    ./testpmd -c 0xe -n 4 --socket-mem 1024,1024 \
> +    --vdev 'eth_vhost0,iface=vhost-net,queues=1,dequeue-zero-copy=1' -- \
> +    -i --nb-cores=1 --txd=1024 --rxd=1024 --txfreet=992
> +    testpmd>set fwd mac
> +
> +2. Launch VM with the mrg_rxbuf feature on. Note that a qemu version newer
> +   than qemu 2.10 is needed to support adjusting the rx_queue_size parameter::
> +
> +    qemu-system-x86_64 -name vm1 \
> +     -cpu host -enable-kvm -m 4096 -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on -numa node,memdev=mem -mem-prealloc \
> +     -smp cores=5,sockets=1 -drive file=/home/osimg/ubuntu16.img \
> +     -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,vlan=2,macaddr=00:00:00:08:e8:aa,addr=1f \
> +     -net user,vlan=2,hostfwd=tcp:127.0.0.1:6002-:22 \
> +     -chardev socket,id=char0,path=./vhost-net \
> +     -netdev type=vhost-user,id=mynet1,chardev=char0,vhostforce \
> +     -device virtio-net-pci,mac=52:54:00:00:00:01,netdev=mynet1,mrg_rxbuf=on,rx_queue_size=1024,tx_queue_size=1024 \
> +     -vnc :10
> +
> +3. On VM, bind virtio net to igb_uio and run testpmd::
> +
> +    ./dpdk-devbind.py --bind=igb_uio xx:xx.x
> +    ./testpmd -c 0x3 -n 4 -- -i --nb-cores=1 --txd=1024 --rxd=1024
> +    testpmd>set fwd mac
> +    testpmd>start
> +
> +4. Start testpmd at host side after VM and virtio-pmd are launched::
> +
> +    testpmd>start
> +
> +5. Send packets by packet generator with different packet sizes
> +   (64,128,256,512,1024,1518), show throughput with below command::
> +
> +    testpmd>show port stats all
> +
> +6. Repeat the test with dequeue-zero-copy=0 and compare the performance
> +   gain or degradation. For small packets we may expect a ~20% performance
> +   drop, but for big packets we expect a ~20% performance gain.
> +
> +Test Case 2: pvp dequeue zero-copy test with 2 queues
> +=====================================================
> +Test topology: TG --> NIC --> Vhost --> Virtio --> Vhost --> NIC --> TG
> +
> +1. Bind one 40G port to igb_uio, then launch testpmd by below command::
> +
> +    rm -rf vhost-net*
> +    ./testpmd -l 2-4 -n 4 --socket-mem 1024,1024 \
> +    --vdev 'eth_vhost0,iface=vhost-net,queues=2,dequeue-zero-copy=1' -- \
> +    -i --nb-cores=2 --rxq=2 --txq=2 --txd=1024 --rxd=1024 --txfreet=992
> +    testpmd>set fwd mac
> +
> +2. Launch VM with vectors=2*queue_num+2 and the mrg_rxbuf/mq features on.
> +   Note that a qemu version newer than qemu 2.10 is needed to support
> +   adjusting the rx_queue_size parameter::
> +
> +    qemu-system-x86_64 -name vm1 \
> +     -cpu host -enable-kvm -m 4096 -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on -numa node,memdev=mem -mem-prealloc \
> +     -smp cores=5,sockets=1 -drive file=/home/osimg/ubuntu16.img \
> +     -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,vlan=2,macaddr=00:00:00:08:e8:aa,addr=1f \
> +     -net user,vlan=2,hostfwd=tcp:127.0.0.1:6002-:22 \
> +     -chardev socket,id=char0,path=./vhost-net \
> +     -netdev type=vhost-user,id=mynet1,chardev=char0,vhostforce,queues=2 \
> +     -device virtio-net-pci,mac=52:54:00:00:00:01,netdev=mynet1,mrg_rxbuf=on,mq=on,vectors=8,rx_queue_size=1024,tx_queue_size=1024 \
> +     -vnc :10
> +
> +3. On VM, bind vdev to igb_uio and run testpmd::
> +
> +    ./usertools/dpdk-devbind.py --bind=igb_uio xx:xx.x
> +    ./testpmd -c 0x07 -n 4 -- -i \
> +    --rxq=2 --txq=2 --txd=1024 --rxd=1024 --nb-cores=2
> +    testpmd>set fwd mac
> +    testpmd>start
> +
> +4. Start testpmd at host side after VM and virtio-pmd are launched::
> +
> +    testpmd>start
> +
> +5. Send packets by packet generator with different packet sizes
> +   (64,128,256,512,1024,1518), show throughput with below command::
> +
> +    testpmd>show port stats all
> +
> +6. Check each queue's rx/tx packet numbers at vhost side::
> +
> +    testpmd>stop
> +
> +Test Case 3: pvp dequeue zero-copy test with driver unload test
> +===============================================================
> +Test topology: TG --> NIC --> Vhost --> Virtio --> Vhost --> NIC --> TG
> +
> +1. Bind one 40G port to igb_uio, then launch testpmd by below command::
> +
> +    rm -rf vhost-net*
> +    ./testpmd -l 1-5 -n 4 --socket-mem 1024,1024 \
> +    --vdev 'eth_vhost0,iface=vhost-net,queues=16,dequeue-zero-copy=1,client=1' -- \
> +    -i --nb-cores=4 --rxq=16 --txq=16 --txd=1024 --rxd=1024 --txfreet=992
> +    testpmd>set fwd mac
> +
> +2. Launch VM with vectors=2*queue_num+2 and the mrg_rxbuf/mq features on.
> +   Note that a qemu version newer than qemu 2.10 is needed to support
> +   adjusting the rx_queue_size parameter::
> +
> +    qemu-system-x86_64 -name vm1 \
> +     -cpu host -enable-kvm -m 4096 -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on -numa node,memdev=mem -mem-prealloc \
> +     -smp cores=5,sockets=1 -drive file=/home/osimg/ubuntu16.img \
> +     -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,vlan=2,macaddr=00:00:00:08:e8:aa,addr=1f \
> +     -net user,vlan=2,hostfwd=tcp:127.0.0.1:6002-:22 \
> +     -chardev socket,id=char0,path=./vhost-net,server \
> +     -netdev type=vhost-user,id=mynet1,chardev=char0,vhostforce,queues=16 \
> +     -device virtio-net-pci,mac=52:54:00:00:00:01,netdev=mynet1,mrg_rxbuf=on,mq=on,vectors=40,rx_queue_size=1024,tx_queue_size=1024 \
> +     -vnc :10
> +
> +3. On VM, bind virtio net to igb_uio and run testpmd::
> +
> +    ./usertools/dpdk-devbind.py --bind=igb_uio xx:xx.x
> +    ./testpmd -l 0-4 -n 4 --socket-mem 1024,0 -- -i --nb-cores=4 --rxq=16 --txq=16 --txd=1024 --rxd=1024
> +    testpmd>set fwd rxonly
> +    testpmd>start
> +
> +4. Start testpmd at host side after VM launched::
> +
> +    testpmd>start
> +
> +5. Send packets by packet generator with different packet sizes
> +   (64,128,256,512,1024,1518), show throughput with below command::
> +
> +    testpmd>show port stats all
> +
> +6. Relaunch testpmd at virtio side in VM for driver reloading::
> +
> +    testpmd>quit
> +    ./testpmd -l 0-4 -n 4 --socket-mem 1024,0 -- -i --nb-cores=4 --rxq=16 --txq=16 --txd=1024 --rxd=1024
> +    testpmd>set fwd mac
> +    testpmd>start
> +
> +7. Send packets by packet generator with different packet sizes
> +   (64,128,256,512,1024,1518), show throughput with below command::
> +
> +    testpmd>show port stats all
> +
> +8. Check each queue's rx/tx packet numbers at vhost side::
> +
> +    testpmd>stop
> +
> +Test Case 4: pvp dequeue zero-copy test with maximum txfreet
> +============================================================
> +Test topology: TG --> NIC --> Vhost --> Virtio --> Vhost --> NIC --> TG
> +
> +1. Bind one 40G port to igb_uio, then launch testpmd by below command::
> +
> +    rm -rf vhost-net*
> +    ./testpmd -l 1-5 -n 4 --socket-mem 1024,1024 \
> +    --vdev 'eth_vhost0,iface=vhost-net,queues=16,dequeue-zero-copy=1,client=1' -- \
> +    -i --nb-cores=4 --rxq=16 --txq=16 --txd=1024 --rxd=1024 --txfreet=1020 --txrst=4
> +    testpmd>set fwd mac
> +
> +2. Launch VM with vectors=2*queue_num+2 and the mrg_rxbuf/mq features on.
> +   Note that a qemu version newer than qemu 2.10 is needed to support
> +   adjusting the rx_queue_size parameter::
> +
> +    qemu-system-x86_64 -name vm1 \
> +     -cpu host -enable-kvm -m 4096 -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on -numa node,memdev=mem -mem-prealloc \
> +     -smp cores=5,sockets=1 -drive file=/home/osimg/ubuntu16.img \
> +     -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,vlan=2,macaddr=00:00:00:08:e8:aa,addr=1f \
> +     -net user,vlan=2,hostfwd=tcp:127.0.0.1:6002-:22 \
> +     -chardev socket,id=char0,path=./vhost-net,server \
> +     -netdev type=vhost-user,id=mynet1,chardev=char0,vhostforce,queues=16 \
> +     -device virtio-net-pci,mac=52:54:00:00:00:01,netdev=mynet1,mrg_rxbuf=on,mq=on,vectors=40,rx_queue_size=1024,tx_queue_size=1024 \
> +     -vnc :10
> +
> +3. On VM, bind virtio net to igb_uio and run testpmd::
> +
> +    ./usertools/dpdk-devbind.py --bind=igb_uio xx:xx.x
> +    ./testpmd -l 0-4 -n 4 --socket-mem 1024,0 -- -i --nb-cores=4 --rxq=16 --txq=16 --txd=1024 --rxd=1024
> +    testpmd>set fwd mac
> +    testpmd>start
> +
> +4. Start testpmd at host side after VM launched::
> +
> +    testpmd>start
> +
> +5. Send packets by packet generator with different packet sizes
> +   (64,128,256,512,1024,1518), show throughput with below command::
> +
> +    testpmd>show port stats all
> +
> +6. Check each queue's rx/tx packet numbers at vhost side::
> +
> +    testpmd>stop
> +
> +Test Case 5: vhost-user + virtio-net VM2VM dequeue zero-copy test
> +=================================================================
> +Test topology: Virtio-net <-> Vhost <-> Testpmd <-> Vhost <-> Virtio-net
> +
> +1. Launch testpmd with two vhost ports by below commands::
> +
> +    rm -rf vhost-net*
> +    ./testpmd -c 0xF0000000 -n 4 --socket-mem 2048,2048 --legacy-mem --no-pci \
> +    --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,queues=1,dequeue-zero-copy=1' \
> +    --vdev 'net_vhost1,iface=vhost-net1,queues=1,dequeue-zero-copy=1' -- \
> +    -i --nb-cores=1 --txd=1024 --rxd=1024 --txfreet=992
> +    testpmd>start
> +
> +2. Launch VM1 and VM2::
> +
> +    taskset -c 32-33 \
> +    qemu-system-x86_64 -name us-vhost-vm1 \
> +     -cpu host -enable-kvm -m 2048 -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on -numa node,memdev=mem -mem-prealloc \
> +     -smp cores=2,sockets=1 -drive file=/home/osimg/ubuntu16-1.img \
> +     -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,vlan=2,macaddr=00:00:00:08:e8:aa,addr=1f -net user,vlan=2,hostfwd=tcp:127.0.0.1:6004-:22 \
> +     -chardev socket,id=char0,path=./vhost-net0 -netdev type=vhost-user,id=mynet1,chardev=char0,vhostforce \
> +     -device virtio-net-pci,mac=52:54:00:00:00:01,netdev=mynet1,mrg_rxbuf=on,csum=on,gso=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on \
> +     -vnc :10 -daemonize
> +
> +    taskset -c 34-35 \
> +    qemu-system-x86_64 -name us-vhost-vm2 \
> +     -cpu host -enable-kvm -m 2048 -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on -numa node,memdev=mem -mem-prealloc \
> +     -smp cores=2,sockets=1 -drive file=/home/osimg/ubuntu16-2.img \
> +     -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,vlan=2,macaddr=00:00:00:08:e8:aa,addr=1f -net user,vlan=2,hostfwd=tcp:127.0.0.1:6005-:22 \
> +     -chardev socket,id=char1,path=./vhost-net1 -netdev type=vhost-user,id=mynet2,chardev=char1,vhostforce \
> +     -device virtio-net-pci,mac=52:54:00:00:00:02,netdev=mynet2,mrg_rxbuf=on,csum=on,gso=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on \
> +     -vnc :11 -daemonize
> +
> +3. On VM1, set the virtio device IP and add a static ARP entry::
> +
> +    ifconfig ens3 1.1.1.2
> +    arp -s 1.1.1.8 52:54:00:00:00:02
> +
> +4. On VM2, set the virtio device IP and add a static ARP entry::
> +
> +    ifconfig ens3 1.1.1.8
> +    arp -s 1.1.1.2 52:54:00:00:00:01
> +
> +5. Check the iperf performance between the two VMs by below commands::
> +
> +    Under VM1, run: `iperf -s -i 1`
> +    Under VM2, run: `iperf -c 1.1.1.2 -i 1 -t 30`
> +
> +6. Check that both VMs can receive and send big packets to each other::
> +
> +    testpmd>show port xstats all
> +    Port 0 should have tx packets above 1522
> +    Port 1 should have rx packets above 1522
> +
> +Prerequisites
> +=============
> +
> +Modify the testpmd code as following, so that the csum forwarding engine
> +does not overwrite the packets' MAC addresses::
> +
> +    --- a/app/test-pmd/csumonly.c
> +    +++ b/app/test-pmd/csumonly.c
> +    @@ -693,10 +693,12 @@ pkt_burst_checksum_forward(struct fwd_stream *fs)
> +                     * and inner headers */
> +
> +                    eth_hdr = rte_pktmbuf_mtod(m, struct ether_hdr *);
> +    +#if 0
> +                    ether_addr_copy(&peer_eth_addrs[fs->peer_addr],
> +                                    &eth_hdr->d_addr);
> +                    ether_addr_copy(&ports[fs->tx_port].eth_addr,
> +                                    &eth_hdr->s_addr);
> +    +#endif
> +                    parse_ethernet(eth_hdr, &info);
> +                    l3_hdr = (char *)eth_hdr + info.l2_len;
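> +
> +For reference, a sketch of what the relevant part of
> +``pkt_burst_checksum_forward()`` looks like after this modification
> +(assuming the surrounding csumonly.c code of this DPDK version; the
> +comments are added here only to explain the intent)::
> +
> +    /* keep the packet's original source/destination MAC addresses:
> +     * the MAC rewrite normally done by the csum engine is compiled out */
> +    eth_hdr = rte_pktmbuf_mtod(m, struct ether_hdr *);
> +    #if 0
> +    ether_addr_copy(&peer_eth_addrs[fs->peer_addr], &eth_hdr->d_addr);
> +    ether_addr_copy(&ports[fs->tx_port].eth_addr, &eth_hdr->s_addr);
> +    #endif
> +    parse_ethernet(eth_hdr, &info);
> +    l3_hdr = (char *)eth_hdr + info.l2_len;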
> +
> +Test Case 6: VM2NIC dequeue zero-copy test with tso offload enabled
> +===================================================================
> +Test topology: NIC2(In kernel) <- NIC1(DPDK) <- testpmd(csum fwd) <- Vhost <- Virtio-net
> +
> +1. Connect two nic ports directly, put nic2 into another namespace and turn
> +   on gro for this nic port by below commands::
> +
> +    ip netns del ns1
> +    ip netns add ns1
> +    ip link set [enp216s0f0] netns ns1                  # [enp216s0f0] is the name of nic2
> +    ip netns exec ns1 ifconfig [enp216s0f0] 1.1.1.8 up
> +    ip netns exec ns1 ethtool -K [enp216s0f0] gro on
> +
> +2. Bind nic1 to igb_uio, launch vhost-user with testpmd::
> +
> +    ./dpdk-devbind.py -b igb_uio xx:xx.x                # xx:xx.x is the pci addr of nic1
> +    ./testpmd -l 2-4 -n 4 --socket-mem 1024,1024 --legacy-mem \
> +    --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,queues=1,client=0' -- -i --txd=1024 --rxd=1024
> +    testpmd>set fwd csum
> +    testpmd>port stop 0
> +    testpmd>csum set tcp hw 0
> +    testpmd>csum set ip hw 0
> +    testpmd>set port 0 gso off
> +    testpmd>tso set 1460 0
> +    testpmd>port start 0
> +    testpmd>start
> +
> +3. Set up VM with a virtio device, using the kernel virtio-net driver::
> +
> +    taskset -c 13 \
> +    qemu-system-x86_64 -name us-vhost-vm1 \
> +     -cpu host -enable-kvm -m 2048 -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on \
> +     -numa node,memdev=mem \
> +     -mem-prealloc -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,vlan=2,macaddr=00:00:00:08:e8:aa,addr=1f -net user,vlan=2,hostfwd=tcp:127.0.0.1:6001-:22 \
> +     -smp cores=1,sockets=1 -drive file=/home/osimg/ubuntu16.img \
> +     -chardev socket,id=char0,path=./vhost-net \
> +     -netdev type=vhost-user,id=mynet1,chardev=char0,vhostforce \
> +     -device virtio-net-pci,mac=52:54:00:00:00:01,netdev=mynet1,mrg_rxbuf=on,csum=on,gso=on,host_tso4=on,guest_tso4=on,rx_queue_size=1024,tx_queue_size=1024 \
> +     -vnc :10 -daemonize
> +
> +4. In VM, configure the virtio-net device with an IP address::
> +
> +    ifconfig [ens3] 1.1.1.2 up                          # [ens3] is the name of virtio-net
> +
> +5. Start the iperf test: run the iperf server at host side and the iperf
> +   client at VM side, then check the throughput in the log::
> +
> +    Host side: ip netns exec ns1 iperf -s
> +    VM side  : iperf -c 1.1.1.8 -i 1 -t 60
> +
> +6. Start the netperf test: run the netperf server at host side and the
> +   netperf client at VM side, then check the throughput in the log::
> +
> +    Host side: ip netns exec ns1 netserver
> +    VM side  : netperf -t TCP_STREAM -H 1.1.1.8 -- -m           # default configuration
> +               netperf -t TCP_STREAM -H 1.1.1.8 -- -m 1440      # packet size < mtu
> +               netperf -t TCP_STREAM -H 1.1.1.8 -- -m 2100      # chain mode
> \ No newline at end of file
> --
> 2.17.1