From mboxrd@z Thu Jan 1 00:00:00 1970
From: "Tu, Lijuan"
To: "Wang, Yinan" , "dts@dpdk.org"
CC: "Wang, Yinan"
Date: Tue, 23 Apr 2019 20:28:18 +0000
Message-ID: <8CE3E05A3F976642AAB0F4675D0AD20E0BA658AF@SHSMSX101.ccr.corp.intel.com>
References: <20190421225001.56798-1-yinan.wang@intel.com>
In-Reply-To: <20190421225001.56798-1-yinan.wang@intel.com>
Subject: Re: [dts] [PATCH v2] test_plans/vm2vm_virtio_pmd: add test plan for vm2vm virtio-pmd test
List-Id: test suite reviews and discussions

Applied, thanks

> -----Original Message-----
> From: dts [mailto:dts-bounces@dpdk.org] On Behalf Of Yinan
> Sent: Sunday, April 21, 2019 3:50 PM
> To: dts@dpdk.org
> Cc: Wang, Yinan
> Subject: [dts] [PATCH v2] test_plans/vm2vm_virtio_pmd: add test plan for
> vm2vm virtio-pmd test
>
> From: Wang Yinan
>
> Signed-off-by: Wang Yinan
> ---
>  test_plans/vm2vm_virtio_pmd_test_plan.rst | 249 ++++++++++++++++++++++
>  1 file changed, 249 insertions(+)
>  create mode 100644 test_plans/vm2vm_virtio_pmd_test_plan.rst
>
> diff --git a/test_plans/vm2vm_virtio_pmd_test_plan.rst b/test_plans/vm2vm_virtio_pmd_test_plan.rst
> new file mode 100644
> index 0000000..fd478ae
> --- /dev/null
> +++ b/test_plans/vm2vm_virtio_pmd_test_plan.rst
> @@ -0,0 +1,249 @@
> +.. Copyright (c) <2019>, Intel Corporation
> +   All rights reserved.
> +
> +   Redistribution and use in source and binary forms, with or without
> +   modification, are permitted provided that the following conditions
> +   are met:
> +
> +   - Redistributions of source code must retain the above copyright
> +     notice, this list of conditions and the following disclaimer.
> +
> +   - Redistributions in binary form must reproduce the above copyright
> +     notice, this list of conditions and the following disclaimer in
> +     the documentation and/or other materials provided with the
> +     distribution.
> +
> +   - Neither the name of Intel Corporation nor the names of its
> +     contributors may be used to endorse or promote products derived
> +     from this software without specific prior written permission.
> +
> +   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
> +   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
> +   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
> +   FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE
> +   COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
> +   INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
> +   (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
> +   SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
> +   HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
> +   STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
> +   ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED
> +   OF THE POSSIBILITY OF SUCH DAMAGE.
> +
> +========================================
> +vhost-user/virtio-pmd vm2vm test plan
> +========================================
> +
> +Description
> +===========
> +
> +This plan covers vhost-user/virtio-pmd (virtio 0.95) VM2VM tests with three rx/tx
> +paths: mergeable, normal and vector_rx. It also adds a vhost-user/virtio-pmd
> +(virtio 1.0) VM2VM mergeable test for performance comparison with the virtio 0.95
> +mergeable case.
> +
> +Test flow
> +=========
> +
> +Virtio-pmd <-> Vhost <-> Testpmd <-> Vhost <-> Virtio-pmd
> +
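> +All cases below rely on hugepages on the host, both for the vhost testpmd
> +(--socket-mem) and for the QEMU guests (mem-path=/mnt/huge). A typical host
> +setup looks like the following; the hugepage size and count here are only
> +examples and should be adjusted to the DUT::
> +
> +    # reserve 2 MB hugepages and mount them at /mnt/huge (example values)
> +    echo 8192 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
> +    mkdir -p /mnt/huge
> +    mount -t hugetlbfs nodev /mnt/huge
> +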
> +Test Case 1: vhost-user + virtio-pmd with mergeable path
> +==========================================================
> +
> +1. Launch the vhost testpmd with the below commands::
> +
> +    rm -rf vhost-net*
> +    ./testpmd -c 0xc0000 -n 4 --socket-mem 2048,2048 --legacy-mem --no-pci --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,queues=1' --vdev 'net_vhost1,iface=vhost-net1,queues=1' -- -i --nb-cores=1 --txd=1024 --rxd=1024
> +    testpmd>set fwd mac
> +    testpmd>start
> +
> +2. Launch VM1 and VM2 with mrg_rxbuf=on to enable the mergeable path::
> +
> +    taskset -c 32-33 \
> +    qemu-system-x86_64 -name us-vhost-vm0 \
> +    -cpu host -enable-kvm -m 2048 -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on -numa node,memdev=mem -mem-prealloc \
> +    -smp cores=2,sockets=1 -drive file=/home/osimg/ubuntu16-1.img \
> +    -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,vlan=2,macaddr=00:00:00:08:e8:aa,addr=1f -net user,vlan=2,hostfwd=tcp:127.0.0.1:6004-:22 \
> +    -chardev socket,id=char0,path=./vhost-net0 -netdev type=vhost-user,id=mynet1,chardev=char0,vhostforce \
> +    -device virtio-net-pci,mac=52:54:00:00:00:01,netdev=mynet1,mrg_rxbuf=on,csum=on,gso=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on \
> +    -vnc :12 -daemonize
> +
> +    taskset -c 34-35 \
> +    qemu-system-x86_64 -name us-vhost-vm1 \
> +    -cpu host -enable-kvm -m 2048 -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on -numa node,memdev=mem -mem-prealloc \
> +    -smp cores=2,sockets=1 -drive file=/home/osimg/ubuntu16-2.img \
> +    -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,vlan=2,macaddr=00:00:00:08:e8:aa,addr=1f -net user,vlan=2,hostfwd=tcp:127.0.0.1:6005-:22 \
> +    -chardev socket,id=char1,path=./vhost-net1 -netdev type=vhost-user,id=mynet2,chardev=char1,vhostforce \
> +    -device virtio-net-pci,mac=52:54:00:00:00:02,netdev=mynet2,mrg_rxbuf=on,csum=on,gso=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on \
> +    -vnc :11 -daemonize
> +
> +3. On VM1, bind the virtio device to the igb_uio driver (a binding example is
> +   given after this case), then run testpmd and set rxonly mode for virtio1::
> +
> +    ./testpmd -c 0x3 -n 4 -- -i --tx-offloads=0x00 --enable-hw-vlan-strip --txd=1024 --rxd=1024
> +    testpmd>set fwd rxonly
> +    testpmd>start
> +
> +4. On VM2, bind the virtio device to the igb_uio driver, then run testpmd, set
> +   txonly mode for virtio2 and send 64B packets::
> +
> +    ./testpmd -c 0x3 -n 4 -- -i --tx-offloads=0x00 --enable-hw-vlan-strip --txd=1024 --rxd=1024
> +    testpmd>set fwd txonly
> +    testpmd>set txpkts 64
> +    testpmd>start tx_first 32
> +
> +5. Check the port statistics on the vhost testpmd to see the tx/rx rate with 64B
> +   packet size::
> +
> +    testpmd>show port stats all
> +    xxxxx
> +    Throughput (since last show)
> +    RX-pps: xxx
> +    TX-pps: xxx
> +
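> +Note: steps 3 and 4 in this and the following cases assume the virtio-net device
> +inside each VM has already been bound to igb_uio. A typical guest-side binding
> +sequence is shown below; the kmod path and the PCI address 00:04.0 are only
> +examples, check ``./usertools/dpdk-devbind.py --status`` for the actual virtio
> +device address in the guest::
> +
> +    modprobe uio
> +    insmod ./x86_64-native-linuxapp-gcc/kmod/igb_uio.ko
> +    ./usertools/dpdk-devbind.py --status
> +    ./usertools/dpdk-devbind.py --bind=igb_uio 00:04.0
> +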
> +Test Case 2: vhost-user + virtio-pmd with vector_rx path
> +==========================================================
> +
> +1. Bind one physical nic port to igb_uio, then launch the vhost testpmd with the
> +   below commands::
> +
> +    rm -rf vhost-net*
> +    ./testpmd -c 0xc0000 -n 4 --socket-mem 2048,2048 --legacy-mem --no-pci --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,queues=1' --vdev 'net_vhost1,iface=vhost-net1,queues=1' -- -i --nb-cores=1 --txd=1024 --rxd=1024
> +    testpmd>set fwd mac
> +    testpmd>start
> +
> +2. Launch VM1 and VM2 with mrg_rxbuf=off to disable the mergeable path::
> +
> +    taskset -c 32-33 \
> +    qemu-system-x86_64 -name us-vhost-vm0 \
> +    -cpu host -enable-kvm -m 2048 -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on -numa node,memdev=mem -mem-prealloc \
> +    -smp cores=2,sockets=1 -drive file=/home/osimg/ubuntu16-1.img \
> +    -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,vlan=2,macaddr=00:00:00:08:e8:aa,addr=1f -net user,vlan=2,hostfwd=tcp:127.0.0.1:6004-:22 \
> +    -chardev socket,id=char0,path=./vhost-net0 -netdev type=vhost-user,id=mynet1,chardev=char0,vhostforce \
> +    -device virtio-net-pci,mac=52:54:00:00:00:01,netdev=mynet1,mrg_rxbuf=off,csum=on,gso=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on \
> +    -vnc :12 -daemonize
> +
> +    taskset -c 34-35 \
> +    qemu-system-x86_64 -name us-vhost-vm1 \
> +    -cpu host -enable-kvm -m 2048 -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on -numa node,memdev=mem -mem-prealloc \
> +    -smp cores=2,sockets=1 -drive file=/home/osimg/ubuntu16-2.img \
> +    -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,vlan=2,macaddr=00:00:00:08:e8:aa,addr=1f -net user,vlan=2,hostfwd=tcp:127.0.0.1:6005-:22 \
> +    -chardev socket,id=char1,path=./vhost-net1 -netdev type=vhost-user,id=mynet2,chardev=char1,vhostforce \
> +    -device virtio-net-pci,mac=52:54:00:00:00:02,netdev=mynet2,mrg_rxbuf=off,csum=on,gso=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on \
> +    -vnc :11 -daemonize
> +
> +3. On VM1, bind the virtio device to the igb_uio driver, then run testpmd and set
> +   rxonly mode for virtio1::
> +
> +    ./testpmd -c 0x3 -n 4 -- -i --txd=1024 --rxd=1024
> +    testpmd>set fwd rxonly
> +    testpmd>start
> +
> +4. On VM2, bind the virtio device to the igb_uio driver, then run testpmd, set
> +   txonly mode for virtio2 and send 64B packets::
> +
> +    ./testpmd -c 0x3 -n 4 -- -i --txd=1024 --rxd=1024
> +    testpmd>set fwd txonly
> +    testpmd>set txpkts 64
> +    testpmd>start tx_first 32
> +
> +5. Check the port statistics on the vhost testpmd to see the tx/rx rate with 64B
> +   packet size::
> +
> +    testpmd>show port stats all
> +    xxxxx
> +    Throughput (since last show)
> +    RX-pps: xxx
> +    TX-pps: xxx
> +
> +Test Case 3: vhost-user + virtio-pmd with normal path
> +=======================================================
> +
> +1. Bind one physical nic port to igb_uio, then launch the vhost testpmd with the
> +   below commands::
> +
> +    rm -rf vhost-net*
> +    ./testpmd -c 0xc0000 -n 4 --socket-mem 2048,2048 --legacy-mem --no-pci --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,queues=1' --vdev 'net_vhost1,iface=vhost-net1,queues=1' -- -i --nb-cores=1 --txd=1024 --rxd=1024
> +    testpmd>set fwd mac
> +    testpmd>start
> +
> +2. Launch VM1 and VM2 with mrg_rxbuf=off to disable the mergeable path::
> +
> +    taskset -c 32-33 \
> +    qemu-system-x86_64 -name us-vhost-vm0 \
> +    -cpu host -enable-kvm -m 2048 -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on -numa node,memdev=mem -mem-prealloc \
> +    -smp cores=2,sockets=1 -drive file=/home/osimg/ubuntu16-1.img \
> +    -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,vlan=2,macaddr=00:00:00:08:e8:aa,addr=1f -net user,vlan=2,hostfwd=tcp:127.0.0.1:6004-:22 \
> +    -chardev socket,id=char0,path=./vhost-net0 -netdev type=vhost-user,id=mynet1,chardev=char0,vhostforce \
> +    -device virtio-net-pci,mac=52:54:00:00:00:01,netdev=mynet1,mrg_rxbuf=off,csum=on,gso=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on \
> +    -vnc :12 -daemonize
> +
> +    taskset -c 34-35 \
> +    qemu-system-x86_64 -name us-vhost-vm1 \
> +    -cpu host -enable-kvm -m 2048 -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on -numa node,memdev=mem -mem-prealloc \
> +    -smp cores=2,sockets=1 -drive file=/home/osimg/ubuntu16-2.img \
> +    -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,vlan=2,macaddr=00:00:00:08:e8:aa,addr=1f -net user,vlan=2,hostfwd=tcp:127.0.0.1:6005-:22 \
> +    -chardev socket,id=char1,path=./vhost-net1 -netdev type=vhost-user,id=mynet2,chardev=char1,vhostforce \
> +    -device virtio-net-pci,mac=52:54:00:00:00:02,netdev=mynet2,mrg_rxbuf=off,csum=on,gso=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on \
> +    -vnc :11 -daemonize
> +
> +3. On VM1, bind the virtio device to the igb_uio driver, then run testpmd and set
> +   rxonly mode for virtio1::
> +
> +    ./testpmd -c 0x3 -n 4 -- -i --tx-offloads=0x00 --enable-hw-vlan-strip --txd=1024 --rxd=1024
> +    testpmd>set fwd rxonly
> +    testpmd>start
> +
> +4. On VM2, bind the virtio device to the igb_uio driver, then run testpmd, set
> +   txonly mode for virtio2 and send 64B packets::
> +
> +    ./testpmd -c 0x3 -n 4 -- -i --tx-offloads=0x00 --enable-hw-vlan-strip --txd=1024 --rxd=1024
> +    testpmd>set fwd txonly
> +    testpmd>set txpkts 64
> +    testpmd>start tx_first 32
> +
> +5. Check the port statistics on the vhost testpmd to see the tx/rx rate with 64B
> +   packet size::
> +
> +    testpmd>show port stats all
> +    xxxxx
> +    Throughput (since last show)
> +    RX-pps: xxx
> +    TX-pps: xxx
> +
> +Test Case 4: vhost-user + virtio1.0-pmd with mergeable path
> +=============================================================
> +
> +1. Bind one physical nic port to igb_uio, then launch the vhost testpmd with the
> +   below commands::
> +
> +    rm -rf vhost-net*
> +    ./testpmd -c 0xc0000 -n 4 --socket-mem 2048,2048 --legacy-mem --no-pci --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,queues=1' --vdev 'net_vhost1,iface=vhost-net1,queues=1' -- -i --nb-cores=1 --txd=1024 --rxd=1024
> +    testpmd>set fwd mac
> +    testpmd>start
> +
> +2. Launch VM1 and VM2; note that "disable-modern=false" is added to enable
> +   virtio 1.0::
> +
> +    taskset -c 32-33 \
> +    qemu-system-x86_64 -name us-vhost-vm1 \
> +    -cpu host -enable-kvm -m 2048 -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on -numa node,memdev=mem -mem-prealloc \
> +    -smp cores=2,sockets=1 -drive file=/home/osimg/ubuntu16-1.img \
> +    -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,vlan=2,macaddr=00:00:00:08:e8:aa,addr=1f -net user,vlan=2,hostfwd=tcp:127.0.0.1:6004-:22 \
> +    -chardev socket,id=char0,path=./vhost-net0 -netdev type=vhost-user,id=mynet1,chardev=char0,vhostforce \
> +    -device virtio-net-pci,mac=52:54:00:00:00:01,netdev=mynet1,disable-modern=false,mrg_rxbuf=on,csum=on,gso=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on \
> +    -vnc :12 -daemonize
> +
> +    taskset -c 34-35 \
> +    qemu-system-x86_64 -name us-vhost-vm2 \
> +    -cpu host -enable-kvm -m 2048 -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on -numa node,memdev=mem -mem-prealloc \
> +    -smp cores=2,sockets=1 -drive file=/home/osimg/ubuntu16-2.img \
> +    -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,vlan=2,macaddr=00:00:00:08:e8:aa,addr=1f -net user,vlan=2,hostfwd=tcp:127.0.0.1:6005-:22 \
> +    -chardev socket,id=char1,path=./vhost-net1 -netdev type=vhost-user,id=mynet2,chardev=char1,vhostforce \
> +    -device virtio-net-pci,mac=52:54:00:00:00:02,netdev=mynet2,disable-modern=false,mrg_rxbuf=on,csum=on,gso=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on \
> +    -vnc :11 -daemonize
> +
> +3. On VM1, bind the virtio device to the igb_uio driver, then run testpmd and set
> +   rxonly mode for virtio1::
> +
> +    ./testpmd -c 0x3 -n 4 -- -i --tx-offloads=0x00 --enable-hw-vlan-strip --txd=1024 --rxd=1024
> +    testpmd>set fwd rxonly
> +    testpmd>start
> +
> +4. On VM2, bind the virtio device to the igb_uio driver, then run testpmd, set
> +   txonly mode for virtio2 and send 64B packets::
> +
> +    ./testpmd -c 0x3 -n 4 -- -i --tx-offloads=0x00 --enable-hw-vlan-strip --txd=1024 --rxd=1024
> +    testpmd>set fwd txonly
> +    testpmd>set txpkts 64
> +    testpmd>start tx_first 32
> +
> +5. Check the port statistics on the vhost testpmd to see the tx/rx rate with 64B
> +   packet size::
> +
> +    testpmd>show port stats all
> +    xxxxx
> +    Throughput (since last show)
> +    RX-pps: xxx
> +    TX-pps: xxx
> --
> 2.17.1