test suite reviews and discussions
* [dts] [PATCH v1] test_plans: add packed virtqueue test in vm2vm topology
@ 2019-12-19 19:36 Yinan
  2019-12-20  8:12 ` Tu, Lijuan
  0 siblings, 1 reply; 4+ messages in thread
From: Yinan @ 2019-12-19 19:36 UTC (permalink / raw)
  To: dts; +Cc: Wang Yinan

From: Wang Yinan <yinan.wang@intel.com>

Add packed virtqueue and split virtqueue test in vm2vm_virtio_user_test_plan.rst

Signed-off-by: Wang Yinan <yinan.wang@intel.com>
---
 test_plans/vm2vm_virtio_user_test_plan.rst | 675 +++++++++++++++++++++
 1 file changed, 675 insertions(+)
 create mode 100644 test_plans/vm2vm_virtio_user_test_plan.rst

diff --git a/test_plans/vm2vm_virtio_user_test_plan.rst b/test_plans/vm2vm_virtio_user_test_plan.rst
new file mode 100644
index 0000000..a28ca92
--- /dev/null
+++ b/test_plans/vm2vm_virtio_user_test_plan.rst
@@ -0,0 +1,675 @@
+.. Copyright (c) <2019>, Intel Corporation
+         All rights reserved.
+
+   Redistribution and use in source and binary forms, with or without
+   modification, are permitted provided that the following conditions
+   are met:
+
+   - Redistributions of source code must retain the above copyright
+     notice, this list of conditions and the following disclaimer.
+
+   - Redistributions in binary form must reproduce the above copyright
+     notice, this list of conditions and the following disclaimer in
+     the documentation and/or other materials provided with the
+     distribution.
+
+   - Neither the name of Intel Corporation nor the names of its
+     contributors may be used to endorse or promote products derived
+     from this software without specific prior written permission.
+
+   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
+   FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE
+   COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
+   INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
+   (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
+   SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
+   HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
+   STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+   ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED
+   OF THE POSSIBILITY OF SUCH DAMAGE.
+
+======================================
+vm2vm vhost-user/virtio-user test plan
+======================================
+
+Description
+===========
+
+This test plan covers split virtqueue vm2vm tests for the in-order mergeable, in-order non-mergeable, mergeable, non-mergeable, and vector_rx paths, and packed virtqueue vm2vm tests for the in-order mergeable, in-order non-mergeable, mergeable, and non-mergeable paths. The plan also checks that the payload of each packet is accurate. The packed virtqueue tests require a QEMU version newer than 4.2.0.
+
+Prerequisites
+=============
+
+Enable pcap lib in dpdk code and recompile::
+
+    --- a/config/common_base
+    +++ b/config/common_base
+    @@ -492,7 +492,7 @@ CONFIG_RTE_LIBRTE_PMD_NULL=y
+     #
+     # Compile software PMD backed by PCAP files
+     #
+    -CONFIG_RTE_LIBRTE_PMD_PCAP=n
+    +CONFIG_RTE_LIBRTE_PMD_PCAP=y
+
+Then build DPDK.
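Alternatively, the one-line config flip above can be scripted before the build. A minimal sketch in Python (the helper name is illustrative, not part of DPDK; it operates on the text of ``config/common_base``):

```python
# enable_pcap_pmd: flip CONFIG_RTE_LIBRTE_PMD_PCAP from n to y in the
# contents of a DPDK common_base config file. Illustrative helper only.
def enable_pcap_pmd(config_text: str) -> str:
    lines = []
    for line in config_text.splitlines():
        if line.startswith("CONFIG_RTE_LIBRTE_PMD_PCAP="):
            line = "CONFIG_RTE_LIBRTE_PMD_PCAP=y"
        lines.append(line)
    return "\n".join(lines)

if __name__ == "__main__":
    sample = "CONFIG_RTE_LIBRTE_PMD_NULL=y\nCONFIG_RTE_LIBRTE_PMD_PCAP=n"
    print(enable_pcap_pmd(sample))
```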
+
+Test flow
+=========
+Virtio-user <-> Vhost-user <-> Testpmd <-> Vhost-user <-> Virtio-user
+
+Test Case 1: packed virtqueue vm2vm mergeable path test
+=======================================================
+
+1. Launch vhost by below command::
+
+    ./x86_64-native-linuxapp-gcc/app/testpmd -l 1-2 -n 4 --socket-mem 1024,1024 --no-pci \
+    --vdev 'eth_vhost0,iface=vhost-net,queues=1' --vdev 'eth_vhost1,iface=vhost-net1,queues=1' -- \
+    -i --nb-cores=1 --no-flush-rx
+
+2. Launch virtio-user1 by below command::
+
+    ./x86_64-native-linuxapp-gcc/app/testpmd -n 4 -l 7-8 --socket-mem 1024,1024 \
+    --no-pci --file-prefix=virtio1 \
+    --vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=./vhost-net1,queues=1,packed_vq=1,mrg_rxbuf=1,in_order=0 \
+    -- -i --nb-cores=1 --txd=256 --rxd=256
+    testpmd>set fwd rxonly
+    testpmd>start
+
+3. Attach pdump secondary process to primary process by same file-prefix::
+
+    ./x86_64-native-linuxapp-gcc/app/dpdk-pdump -v --file-prefix=virtio1 -- --pdump 'device_id=net_virtio_user1,queue=*,rx-dev=./pdump-virtio-rx.pcap,mbuf-size=8000'
+
+4. Launch virtio-user0 and send 8k length packets::
+
+    ./x86_64-native-linuxapp-gcc/app/testpmd -n 4 -l 5-6 --socket-mem 1024,1024 \
+    --no-pci --file-prefix=virtio \
+    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,queues=1,packed_vq=1,mrg_rxbuf=1,in_order=0 \
+    -- -i --nb-cores=1 --txd=256 --rxd=256
+    testpmd>set burst 1
+    testpmd>start tx_first 27
+    testpmd>stop
+    testpmd>set burst 32
+    testpmd>start tx_first 7
+    testpmd>stop
+    testpmd>set txpkts 2000,2000,2000,2000
+    testpmd>start tx_first 1
+
+5. Start vhost, then send 8k length packets by virtio-user0 again::
+
+    testpmd>stop
+    testpmd>set txpkts 2000
+    testpmd>start tx_first 1
+
+6. Quit pdump and all three testpmd instances; check that 284 packets were received by virtio-user1 in pdump-virtio-rx.pcap.
+
+7. Launch testpmd by below command::
+
+    ./x86_64-native-linuxapp-gcc/app/testpmd -l 1-2 -n 4 --socket-mem 1024,1024 --no-pci --file-prefix=vhost  \
+    --vdev 'eth_vhost1,iface=vhost-net1,queues=1' -- \
+    -i --nb-cores=1 --no-flush-rx
+    testpmd>set fwd rxonly
+    testpmd>start
+
+8. Attach pdump secondary process to primary process by same file-prefix::
+
+    ./x86_64-native-linuxapp-gcc/app/dpdk-pdump -v --file-prefix=vhost -- --pdump 'port=0,queue=*,rx-dev=/root/pdump-vhost-rx.pcap,mbuf-size=8000'
+
+9. Launch virtio-user1 by below command::
+
+    ./x86_64-native-linuxapp-gcc/app/testpmd -n 4 -l 7-8 --socket-mem 1024,1024 --no-pci \
+    --vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=./vhost-net1,queues=1,packed_vq=1,mrg_rxbuf=1,in_order=0 \
+    -- -i --nb-cores=1 --txd=256 --rxd=256
+    testpmd>set burst 1
+    testpmd>start tx_first 27
+    testpmd>stop
+    testpmd>set burst 32
+    testpmd>start tx_first 7
+    testpmd>stop
+    testpmd>set burst 1
+    testpmd>set txpkts 2000,2000,2000,2000
+    testpmd>start tx_first 1
+    testpmd>stop
+    testpmd>set burst 32
+    testpmd>set txpkts 2000
+    testpmd>start tx_first 1
+
+10. Quit pdump; the packets received by vhost are stored in pdump-vhost-rx.pcap. Check the headers and payload of all packets in pdump-virtio-rx.pcap and pdump-vhost-rx.pcap and ensure the contents are the same.
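The comparison in step 10 can be automated. A minimal sketch in pure Python (it assumes the captures are classic little-endian pcap files, as dpdk-pdump typically writes on x86; the function names are illustrative):

```python
# Minimal classic-pcap reader used to compare the virtio-side and
# vhost-side captures. Assumes little-endian microsecond pcap files.
import struct

def read_pcap_payloads(data: bytes):
    """Return the raw packet bytes of every record in a classic pcap blob."""
    assert data[:4] == b"\xd4\xc3\xb2\xa1", "expected little-endian pcap magic"
    off, payloads = 24, []  # skip the 24-byte global header
    while off < len(data):
        # per-record header: ts_sec, ts_usec, incl_len, orig_len
        _, _, incl_len, _ = struct.unpack_from("<IIII", data, off)
        off += 16
        payloads.append(data[off:off + incl_len])
        off += incl_len
    return payloads

def captures_match(virtio_pcap: bytes, vhost_pcap: bytes) -> bool:
    """True when both captures hold the same packets, byte for byte."""
    return read_pcap_payloads(virtio_pcap) == read_pcap_payloads(vhost_pcap)
```

In practice one would read pdump-virtio-rx.pcap and pdump-vhost-rx.pcap from disk and call ``captures_match`` on their contents.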
+
+Test Case 2: packed virtqueue vm2vm inorder mergeable path test
+===============================================================
+
+1. Launch testpmd by below command::
+
+    ./testpmd -l 1-2 -n 4 --socket-mem 1024,1024 --no-pci \
+    --vdev 'eth_vhost0,iface=vhost-net,queues=1' --vdev 'eth_vhost1,iface=vhost-net1,queues=1' -- \
+    -i --nb-cores=1 --no-flush-rx
+
+2. Launch virtio-user1 by below command::
+
+    ./testpmd -n 4 -l 7-8 --socket-mem 1024,1024 \
+    --no-pci --file-prefix=virtio1 \
+    --vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=./vhost-net1,queues=1,packed_vq=1,mrg_rxbuf=1,in_order=1 \
+    -- -i --nb-cores=1 --txd=256 --rxd=256
+    testpmd>set fwd rxonly
+    testpmd>start
+
+3. Attach pdump secondary process to primary process by same file-prefix::
+
+    ./x86_64-native-linuxapp-gcc/app/dpdk-pdump -v --file-prefix=virtio1 -- --pdump 'device_id=net_virtio_user1,queue=*,rx-dev=/root/pdump-virtio-rx.pcap,mbuf-size=8000'
+
+4. Launch virtio-user0 and send 8k length packets::
+
+    ./testpmd -n 4 -l 5-6 --socket-mem 1024,1024 \
+    --no-pci --file-prefix=virtio \
+    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,queues=1,packed_vq=1,mrg_rxbuf=1,in_order=1 \
+    -- -i --nb-cores=1 --txd=256 --rxd=256
+    testpmd>set burst 1
+    testpmd>start tx_first 27
+    testpmd>stop
+    testpmd>set burst 32
+    testpmd>start tx_first 7
+    testpmd>stop
+    testpmd>set burst 1
+    testpmd>set txpkts 2000,2000,2000,2000
+    testpmd>start tx_first 1
+
+5. Start vhost, then quit pdump and all three testpmd instances; check that 252 packets were received by virtio-user1 in pdump-virtio-rx.pcap.
+
+6. Launch testpmd by below command::
+
+    ./x86_64-native-linuxapp-gcc/app/testpmd -l 1-2 -n 4 --socket-mem 1024,1024 --no-pci --file-prefix=vhost  \
+    --vdev 'eth_vhost1,iface=vhost-net1,queues=1' -- \
+    -i --nb-cores=1 --no-flush-rx
+    testpmd>set fwd rxonly
+    testpmd>start
+
+7. Attach pdump secondary process to primary process by same file-prefix::
+
+    ./x86_64-native-linuxapp-gcc/app/dpdk-pdump -v --file-prefix=vhost -- --pdump 'port=0,queue=*,rx-dev=/root/pdump-vhost-rx.pcap,mbuf-size=8000'
+
+8. Launch virtio-user1 by below command::
+
+    ./x86_64-native-linuxapp-gcc/app/testpmd -n 4 -l 7-8 --socket-mem 1024,1024 \
+    --no-pci \
+    --vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=./vhost-net1,queues=1,packed_vq=1,mrg_rxbuf=1,in_order=1 \
+    -- -i --nb-cores=1 --txd=256 --rxd=256
+    testpmd>set burst 1
+    testpmd>start tx_first 27
+    testpmd>stop
+    testpmd>set burst 32
+    testpmd>start tx_first 7
+    testpmd>stop
+    testpmd>set burst 1
+    testpmd>set txpkts 2000,2000,2000,2000
+    testpmd>start tx_first 1
+
+9. Quit pdump; the packets received by vhost are stored in pdump-vhost-rx.pcap. Check the headers and payload of all packets in pdump-virtio-rx.pcap and pdump-vhost-rx.pcap and ensure the contents are the same.
+
+Test Case 3: packed virtqueue vm2vm non-mergeable path test
+===========================================================
+
+1. Launch testpmd by below command::
+
+    ./testpmd -l 1-2 -n 4 --socket-mem 1024,1024 --no-pci \
+    --vdev 'eth_vhost0,iface=vhost-net,queues=1' --vdev 'eth_vhost1,iface=vhost-net1,queues=1' -- \
+    -i --nb-cores=1 --no-flush-rx
+
+2. Launch virtio-user1 by below command::
+
+    ./testpmd -n 4 -l 7-8 --socket-mem 1024,1024 \
+    --no-pci --file-prefix=virtio1 \
+    --vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=./vhost-net1,queues=1,packed_vq=1,mrg_rxbuf=0,in_order=0 \
+    -- -i --nb-cores=1 --txd=256 --rxd=256
+    testpmd>set fwd rxonly
+    testpmd>start
+
+3. Attach pdump secondary process to primary process by same file-prefix::
+
+    ./x86_64-native-linuxapp-gcc/app/dpdk-pdump -v --file-prefix=virtio1 -- --pdump 'device_id=net_virtio_user1,queue=*,rx-dev=/root/pdump-virtio-rx.pcap,mbuf-size=8000'
+
+4. Launch virtio-user0 and send 8k length packets::
+
+    ./testpmd -n 4 -l 5-6 --socket-mem 1024,1024 \
+    --no-pci --file-prefix=virtio \
+    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,queues=1,packed_vq=1,mrg_rxbuf=0,in_order=0 \
+    -- -i --nb-cores=1 --txd=256 --rxd=256
+    testpmd>set burst 1
+    testpmd>start tx_first 27
+    testpmd>stop
+    testpmd>set burst 32
+    testpmd>start tx_first 7
+    testpmd>stop
+    testpmd>set txpkts 2000,2000,2000,2000
+    testpmd>start tx_first 1
+
+5. Start vhost, then quit pdump and all three testpmd instances; check that 251 packets were received by virtio-user1 in pdump-virtio-rx.pcap.
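The expected count of 251 packets follows from testpmd semantics: ``start tx_first N`` sends N bursts of the configured burst size, so step 4 injects 27*1 + 7*32 = 251 packets (the trailing 8k transmissions are presumably not counted on this path, since non-mergeable rx buffers cannot hold an 8k packet). The arithmetic, as a quick check:

```python
# Expected packet count from step 4: "start tx_first N" sends N bursts
# of the configured burst size.
burst_sequence = [(1, 27), (32, 7)]  # (burst size, tx_first value)
total = sum(burst * n for burst, n in burst_sequence)
print(total)  # 251, matching the count checked in step 5
```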
+
+6. Launch testpmd by below command::
+
+    ./x86_64-native-linuxapp-gcc/app/testpmd -l 1-2 -n 4 --socket-mem 1024,1024 --no-pci --file-prefix=vhost  \
+    --vdev 'eth_vhost1,iface=vhost-net1,queues=1' -- \
+    -i --nb-cores=1 --no-flush-rx
+    testpmd>set fwd rxonly
+    testpmd>start
+
+7. Attach pdump secondary process to primary process by same file-prefix::
+
+    ./x86_64-native-linuxapp-gcc/app/dpdk-pdump -v --file-prefix=vhost -- --pdump 'port=0,queue=*,rx-dev=/root/pdump-vhost-rx.pcap,mbuf-size=8000'
+
+8. Launch virtio-user1 by below command::
+
+    ./x86_64-native-linuxapp-gcc/app/testpmd -n 4 -l 7-8 --socket-mem 1024,1024 \
+    --no-pci \
+    --vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=./vhost-net1,queues=1,packed_vq=1,mrg_rxbuf=0,in_order=0 \
+    -- -i --nb-cores=1 --txd=256 --rxd=256
+    testpmd>set burst 1
+    testpmd>start tx_first 27
+    testpmd>stop
+    testpmd>set burst 32
+    testpmd>start tx_first 7
+
+9. Quit pdump; the packets received by vhost are stored in pdump-vhost-rx.pcap. Check the headers and payload of all packets in pdump-virtio-rx.pcap and pdump-vhost-rx.pcap and ensure the contents are the same.
+
+Test Case 4: packed virtqueue vm2vm inorder non-mergeable path test
+===================================================================
+
+1. Launch testpmd by below command::
+
+    ./testpmd -l 1-2 -n 4 --socket-mem 1024,1024 --no-pci \
+    --vdev 'eth_vhost0,iface=vhost-net,queues=1' --vdev 'eth_vhost1,iface=vhost-net1,queues=1' -- \
+    -i --nb-cores=1 --no-flush-rx
+
+2. Launch virtio-user1 by below command::
+
+    ./testpmd -n 4 -l 7-8 --socket-mem 1024,1024 \
+    --no-pci --file-prefix=virtio1 \
+    --vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=./vhost-net1,queues=1,packed_vq=1,mrg_rxbuf=0,in_order=1 \
+    -- -i --nb-cores=1 --txd=256 --rxd=256
+    testpmd>set fwd rxonly
+    testpmd>start
+
+3. Attach pdump secondary process to primary process by same file-prefix::
+
+    ./x86_64-native-linuxapp-gcc/app/dpdk-pdump -v --file-prefix=virtio1 -- --pdump 'device_id=net_virtio_user1,queue=*,rx-dev=/root/pdump-virtio-rx.pcap,mbuf-size=8000'
+
+4. Launch virtio-user0 and send 8k length packets::
+
+    ./testpmd -n 4 -l 5-6 --socket-mem 1024,1024 \
+    --no-pci --file-prefix=virtio \
+    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,queues=1,packed_vq=1,mrg_rxbuf=0,in_order=1 \
+    -- -i --nb-cores=1 --txd=256 --rxd=256
+    testpmd>set burst 1
+    testpmd>start tx_first 27
+    testpmd>stop
+    testpmd>set burst 32
+    testpmd>start tx_first 7
+    testpmd>stop
+    testpmd>set txpkts 2000,2000,2000,2000
+    testpmd>start tx_first 1
+
+5. Start vhost, then quit pdump and all three testpmd instances; check that 251 packets were received by virtio-user1 in pdump-virtio-rx.pcap.
+
+6. Launch testpmd by below command::
+
+    ./x86_64-native-linuxapp-gcc/app/testpmd -l 1-2 -n 4 --socket-mem 1024,1024 --no-pci --file-prefix=vhost  \
+    --vdev 'eth_vhost1,iface=vhost-net1,queues=1' -- \
+    -i --nb-cores=1 --no-flush-rx
+    testpmd>set fwd rxonly
+    testpmd>start
+
+7. Attach pdump secondary process to primary process by same file-prefix::
+
+    ./x86_64-native-linuxapp-gcc/app/dpdk-pdump -v --file-prefix=vhost -- --pdump 'port=0,queue=*,rx-dev=/root/pdump-vhost-rx.pcap,mbuf-size=8000'
+
+8. Launch virtio-user1 by below command::
+
+    ./x86_64-native-linuxapp-gcc/app/testpmd -n 4 -l 7-8 --socket-mem 1024,1024 \
+    --no-pci \
+    --vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=./vhost-net1,queues=1,packed_vq=1,mrg_rxbuf=0,in_order=1 \
+    -- -i --nb-cores=1 --txd=256 --rxd=256
+    testpmd>set burst 1
+    testpmd>start tx_first 27
+    testpmd>stop
+    testpmd>set burst 32
+    testpmd>start tx_first 7
+
+9. Quit pdump; the packets received by vhost are stored in pdump-vhost-rx.pcap. Check the headers and payload of all packets in pdump-virtio-rx.pcap and pdump-vhost-rx.pcap and ensure the contents are the same.
+
+Test Case 5: split virtqueue vm2vm mergeable path test
+======================================================
+
+1. Launch vhost by below command::
+
+    ./x86_64-native-linuxapp-gcc/app/testpmd -l 1-2 -n 4 --socket-mem 1024,1024 --no-pci \
+    --vdev 'eth_vhost0,iface=vhost-net,queues=1' --vdev 'eth_vhost1,iface=vhost-net1,queues=1' -- \
+    -i --nb-cores=1 --no-flush-rx
+
+2. Launch virtio-user1 by below command::
+
+    ./x86_64-native-linuxapp-gcc/app/testpmd -n 4 -l 7-8 --socket-mem 1024,1024 \
+    --no-pci --file-prefix=virtio1 \
+    --vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=./vhost-net1,queues=1,packed_vq=0,mrg_rxbuf=1,in_order=0 \
+    -- -i --nb-cores=1 --txd=256 --rxd=256
+    testpmd>set fwd rxonly
+    testpmd>start
+
+3. Attach pdump secondary process to primary process by same file-prefix::
+
+    ./x86_64-native-linuxapp-gcc/app/dpdk-pdump -v --file-prefix=virtio1 -- --pdump 'device_id=net_virtio_user1,queue=*,rx-dev=./pdump-virtio-rx.pcap,mbuf-size=8000'
+
+4. Launch virtio-user0 and send 8k length packets::
+
+    ./x86_64-native-linuxapp-gcc/app/testpmd -n 4 -l 5-6 --socket-mem 1024,1024 \
+    --no-pci --file-prefix=virtio \
+    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,queues=1,packed_vq=0,mrg_rxbuf=1,in_order=0 \
+    -- -i --nb-cores=1 --txd=256 --rxd=256
+    testpmd>set burst 1
+    testpmd>start tx_first 27
+    testpmd>stop
+    testpmd>set burst 32
+    testpmd>start tx_first 7
+    testpmd>stop
+    testpmd>set txpkts 2000,2000,2000,2000
+    testpmd>start tx_first 1
+
+5. Start vhost, then send 8k length packets by virtio-user0 again::
+
+    testpmd>stop
+    testpmd>set txpkts 2000
+    testpmd>start tx_first 1
+
+6. Quit pdump and all three testpmd instances; check that 288 packets were received by virtio-user1 in pdump-virtio-rx.pcap.
+
+7. Launch testpmd by below command::
+
+    ./x86_64-native-linuxapp-gcc/app/testpmd -l 1-2 -n 4 --socket-mem 1024,1024 --no-pci --file-prefix=vhost  \
+    --vdev 'eth_vhost1,iface=vhost-net1,queues=1' -- \
+    -i --nb-cores=1 --no-flush-rx
+    testpmd>set fwd rxonly
+    testpmd>start
+
+8. Attach pdump secondary process to primary process by same file-prefix::
+
+    ./x86_64-native-linuxapp-gcc/app/dpdk-pdump -v --file-prefix=vhost -- --pdump 'port=0,queue=*,rx-dev=/root/pdump-vhost-rx.pcap,mbuf-size=8000'
+
+9. Launch virtio-user1 by below command::
+
+    ./x86_64-native-linuxapp-gcc/app/testpmd -n 4 -l 7-8 --socket-mem 1024,1024 \
+    --no-pci \
+    --vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=./vhost-net1,queues=1,packed_vq=0,mrg_rxbuf=1,in_order=0 \
+    -- -i --nb-cores=1 --txd=256 --rxd=256
+    testpmd>set burst 1
+    testpmd>start tx_first 27
+    testpmd>stop
+    testpmd>set burst 32
+    testpmd>start tx_first 7
+    testpmd>stop
+    testpmd>set burst 5
+    testpmd>set txpkts 2000,2000,2000,2000
+    testpmd>start tx_first 1
+    testpmd>stop
+    testpmd>set burst 32
+    testpmd>set txpkts 2000
+    testpmd>start tx_first 1
+
+10. Quit pdump; the packets received by vhost are stored in pdump-vhost-rx.pcap. Check the headers and payload of all packets in pdump-virtio-rx.pcap and pdump-vhost-rx.pcap and ensure the contents are the same.
+
+Test Case 6: split virtqueue vm2vm inorder mergeable path test
+==============================================================
+
+1. Launch testpmd by below command::
+
+    ./testpmd -l 1-2 -n 4 --socket-mem 1024,1024 --no-pci \
+    --vdev 'eth_vhost0,iface=vhost-net,queues=1' --vdev 'eth_vhost1,iface=vhost-net1,queues=1' -- \
+    -i --nb-cores=1 --no-flush-rx
+
+2. Launch virtio-user1 by below command::
+
+    ./testpmd -n 4 -l 7-8 --socket-mem 1024,1024 \
+    --no-pci --file-prefix=virtio1 \
+    --vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=./vhost-net1,queues=1,packed_vq=0,mrg_rxbuf=1,in_order=1 \
+    -- -i --nb-cores=1 --txd=256 --rxd=256
+    testpmd>set fwd rxonly
+    testpmd>start
+
+3. Attach pdump secondary process to primary process by same file-prefix::
+
+    ./x86_64-native-linuxapp-gcc/app/dpdk-pdump -v --file-prefix=virtio1 -- --pdump 'device_id=net_virtio_user1,queue=*,rx-dev=/root/pdump-virtio-rx.pcap,mbuf-size=8000'
+
+4. Launch virtio-user0 and send 8k length packets::
+
+    ./testpmd -n 4 -l 5-6 --socket-mem 1024,1024 \
+    --no-pci --file-prefix=virtio \
+    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,queues=1,packed_vq=0,mrg_rxbuf=1,in_order=1 \
+    -- -i --nb-cores=1 --txd=256 --rxd=256
+    testpmd>set burst 1
+    testpmd>start tx_first 27
+    testpmd>stop
+    testpmd>set burst 32
+    testpmd>start tx_first 7
+    testpmd>stop
+    testpmd>set txpkts 2000,2000,2000,2000
+    testpmd>start tx_first 1
+
+5. Start vhost, then quit pdump and all three testpmd instances; check that 256 packets were received by virtio-user1 in pdump-virtio-rx.pcap.
+
+6. Launch testpmd by below command::
+
+    ./x86_64-native-linuxapp-gcc/app/testpmd -l 1-2 -n 4 --socket-mem 1024,1024 --no-pci --file-prefix=vhost  \
+    --vdev 'eth_vhost1,iface=vhost-net1,queues=1' -- \
+    -i --nb-cores=1 --no-flush-rx
+    testpmd>set fwd rxonly
+    testpmd>start
+
+7. Attach pdump secondary process to primary process by same file-prefix::
+
+    ./x86_64-native-linuxapp-gcc/app/dpdk-pdump -v --file-prefix=vhost -- --pdump 'port=0,queue=*,rx-dev=/root/pdump-vhost-rx.pcap,mbuf-size=8000'
+
+8. Launch virtio-user1 by below command::
+
+    ./x86_64-native-linuxapp-gcc/app/testpmd -n 4 -l 7-8 --socket-mem 1024,1024 \
+    --no-pci \
+    --vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=./vhost-net1,queues=1,packed_vq=0,mrg_rxbuf=1,in_order=1 \
+    -- -i --nb-cores=1 --txd=256 --rxd=256
+    testpmd>set burst 1
+    testpmd>start tx_first 27
+    testpmd>stop
+    testpmd>set burst 32
+    testpmd>start tx_first 7
+    testpmd>stop
+    testpmd>set burst 5
+    testpmd>set txpkts 2000,2000,2000,2000
+    testpmd>start tx_first 1
+
+9. Quit pdump; the packets received by vhost are stored in pdump-vhost-rx.pcap. Check the headers and payload of all packets in pdump-virtio-rx.pcap and pdump-vhost-rx.pcap and ensure the contents are the same.
+
+Test Case 7: split virtqueue vm2vm non-mergeable path test
+==========================================================
+
+1. Launch testpmd by below command::
+
+    ./testpmd -l 1-2 -n 4 --socket-mem 1024,1024 --no-pci \
+    --vdev 'eth_vhost0,iface=vhost-net,queues=1' --vdev 'eth_vhost1,iface=vhost-net1,queues=1' -- \
+    -i --nb-cores=1 --no-flush-rx
+
+2. Launch virtio-user1 by below command::
+
+    ./testpmd -n 4 -l 7-8 --socket-mem 1024,1024 \
+    --no-pci --file-prefix=virtio1 \
+    --vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=./vhost-net1,queues=1,packed_vq=0,mrg_rxbuf=0,in_order=0 \
+    -- -i --nb-cores=1 --txd=256 --rxd=256 --enable-hw-vlan-strip
+    testpmd>set fwd rxonly
+    testpmd>start
+
+3. Attach pdump secondary process to primary process by same file-prefix::
+
+    ./x86_64-native-linuxapp-gcc/app/dpdk-pdump -v --file-prefix=virtio1 -- --pdump 'device_id=net_virtio_user1,queue=*,rx-dev=/root/pdump-virtio-rx.pcap,mbuf-size=8000'
+
+4. Launch virtio-user0 and send 8k length packets::
+
+    ./testpmd -n 4 -l 5-6 --socket-mem 1024,1024 \
+    --no-pci --file-prefix=virtio \
+    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,queues=1,packed_vq=0,mrg_rxbuf=0,in_order=0 \
+    -- -i --nb-cores=1 --txd=256 --rxd=256 --enable-hw-vlan-strip
+    testpmd>set burst 1
+    testpmd>start tx_first 27
+    testpmd>stop
+    testpmd>set burst 32
+    testpmd>start tx_first 7
+    testpmd>stop
+    testpmd>set txpkts 2000,2000,2000,2000
+    testpmd>start tx_first 1
+
+5. Start vhost, then quit pdump and all three testpmd instances; check that 251 packets were received by virtio-user1 in pdump-virtio-rx.pcap.
+
+6. Launch testpmd by below command::
+
+    ./x86_64-native-linuxapp-gcc/app/testpmd -l 1-2 -n 4 --socket-mem 1024,1024 --no-pci --file-prefix=vhost  \
+    --vdev 'eth_vhost1,iface=vhost-net1,queues=1' -- \
+    -i --nb-cores=1 --no-flush-rx
+    testpmd>set fwd rxonly
+    testpmd>start
+
+7. Attach pdump secondary process to primary process by same file-prefix::
+
+    ./x86_64-native-linuxapp-gcc/app/dpdk-pdump -v --file-prefix=vhost -- --pdump 'port=0,queue=*,rx-dev=/root/pdump-vhost-rx.pcap,mbuf-size=8000'
+
+8. Launch virtio-user1 by below command::
+
+    ./x86_64-native-linuxapp-gcc/app/testpmd -n 4 -l 7-8 --socket-mem 1024,1024 \
+    --no-pci \
+    --vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=./vhost-net1,queues=1,packed_vq=0,mrg_rxbuf=0,in_order=0 \
+    -- -i --nb-cores=1 --txd=256 --rxd=256 --enable-hw-vlan-strip
+    testpmd>set burst 1
+    testpmd>start tx_first 27
+    testpmd>stop
+    testpmd>set burst 32
+    testpmd>start tx_first 7
+
+9. Quit pdump; the packets received by vhost are stored in pdump-vhost-rx.pcap. Check the headers and payload of all packets in pdump-virtio-rx.pcap and pdump-vhost-rx.pcap and ensure the contents are the same.
+
+Test Case 8: split virtqueue vm2vm inorder non-mergeable path test
+==================================================================
+
+1. Launch testpmd by below command::
+
+    ./testpmd -l 1-2 -n 4 --socket-mem 1024,1024 --no-pci \
+    --vdev 'eth_vhost0,iface=vhost-net,queues=1' --vdev 'eth_vhost1,iface=vhost-net1,queues=1' -- \
+    -i --nb-cores=1 --no-flush-rx
+
+2. Launch virtio-user1 by below command::
+
+    ./testpmd -n 4 -l 7-8 --socket-mem 1024,1024 \
+    --no-pci --file-prefix=virtio1 \
+    --vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=./vhost-net1,queues=1,packed_vq=0,mrg_rxbuf=0,in_order=1 \
+    -- -i --nb-cores=1 --txd=256 --rxd=256
+    testpmd>set fwd rxonly
+    testpmd>start
+
+3. Attach pdump secondary process to primary process by same file-prefix::
+
+    ./x86_64-native-linuxapp-gcc/app/dpdk-pdump -v --file-prefix=virtio1 -- --pdump 'device_id=net_virtio_user1,queue=*,rx-dev=/root/pdump-virtio-rx.pcap,mbuf-size=8000'
+
+4. Launch virtio-user0 and send 8k length packets::
+
+    ./testpmd -n 4 -l 5-6 --socket-mem 1024,1024 \
+    --no-pci --file-prefix=virtio \
+    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,queues=1,packed_vq=0,mrg_rxbuf=0,in_order=1 \
+    -- -i --nb-cores=1 --txd=256 --rxd=256
+    testpmd>set burst 1
+    testpmd>start tx_first 27
+    testpmd>stop
+    testpmd>set burst 32
+    testpmd>start tx_first 7
+    testpmd>stop
+    testpmd>set txpkts 2000,2000,2000,2000
+    testpmd>start tx_first 1
+
+5. Start vhost, then quit pdump and all three testpmd instances; check that 251 packets were received by virtio-user1 in pdump-virtio-rx.pcap.
+
+6. Launch testpmd by below command::
+
+    ./x86_64-native-linuxapp-gcc/app/testpmd -l 1-2 -n 4 --socket-mem 1024,1024 --no-pci --file-prefix=vhost  \
+    --vdev 'eth_vhost1,iface=vhost-net1,queues=1' -- \
+    -i --nb-cores=1 --no-flush-rx
+    testpmd>set fwd rxonly
+    testpmd>start
+
+7. Attach pdump secondary process to primary process by same file-prefix::
+
+    ./x86_64-native-linuxapp-gcc/app/dpdk-pdump -v --file-prefix=vhost -- --pdump 'port=0,queue=*,rx-dev=/root/pdump-vhost-rx.pcap,mbuf-size=8000'
+
+8. Launch virtio-user1 by below command::
+
+    ./x86_64-native-linuxapp-gcc/app/testpmd -n 4 -l 7-8 --socket-mem 1024,1024 \
+    --no-pci \
+    --vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=./vhost-net1,queues=1,packed_vq=0,mrg_rxbuf=0,in_order=1 \
+    -- -i --nb-cores=1 --txd=256 --rxd=256
+    testpmd>set burst 1
+    testpmd>start tx_first 27
+    testpmd>stop
+    testpmd>set burst 32
+    testpmd>start tx_first 7
+
+9. Quit pdump; the packets received by vhost are stored in pdump-vhost-rx.pcap. Check the headers and payload of all packets in pdump-virtio-rx.pcap and pdump-vhost-rx.pcap and ensure the contents are the same.
+
+Test Case 9: split virtqueue vm2vm vector_rx path test
+======================================================
+
+1. Launch testpmd by below command::
+
+    ./testpmd -l 1-2 -n 4 --socket-mem 1024,1024 --no-pci \
+    --vdev 'eth_vhost0,iface=vhost-net,queues=1' --vdev 'eth_vhost1,iface=vhost-net1,queues=1' -- \
+    -i --nb-cores=1 --no-flush-rx
+
+2. Launch virtio-user1 by below command::
+
+    ./testpmd -n 4 -l 7-8 --socket-mem 1024,1024 \
+    --no-pci --file-prefix=virtio1 \
+    --vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=./vhost-net1,queues=1,packed_vq=0,mrg_rxbuf=0,in_order=0 \
+    -- -i --nb-cores=1 --txd=256 --rxd=256
+    testpmd>set fwd rxonly
+    testpmd>start
+
+3. Attach pdump secondary process to primary process by same file-prefix::
+
+    ./x86_64-native-linuxapp-gcc/app/dpdk-pdump -v --file-prefix=virtio1 -- --pdump 'device_id=net_virtio_user1,queue=*,rx-dev=/root/pdump-virtio-rx.pcap,mbuf-size=8000'
+
+4. Launch virtio-user0 and send 8k length packets::
+
+    ./testpmd -n 4 -l 5-6 --socket-mem 1024,1024 \
+    --no-pci --file-prefix=virtio \
+    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,queues=1,packed_vq=0,mrg_rxbuf=0,in_order=0 \
+    -- -i --nb-cores=1 --txd=256 --rxd=256
+    testpmd>set burst 1
+    testpmd>start tx_first 27
+    testpmd>stop
+    testpmd>set burst 32
+    testpmd>start tx_first 7
+    testpmd>stop
+    testpmd>set txpkts 2000,2000,2000,2000
+    testpmd>start tx_first 1
+
+5. Start vhost, then quit pdump and all three testpmd instances; check that 251 packets were received by virtio-user1 in pdump-virtio-rx.pcap.
+
+6. Launch testpmd by below command::
+
+    ./x86_64-native-linuxapp-gcc/app/testpmd -l 1-2 -n 4 --socket-mem 1024,1024 --no-pci --file-prefix=vhost  \
+    --vdev 'eth_vhost1,iface=vhost-net1,queues=1' -- \
+    -i --nb-cores=1 --no-flush-rx
+    testpmd>set fwd rxonly
+    testpmd>start
+
+7. Attach pdump secondary process to primary process by same file-prefix::
+
+    ./x86_64-native-linuxapp-gcc/app/dpdk-pdump -v --file-prefix=vhost -- --pdump 'port=0,queue=*,rx-dev=/root/pdump-vhost-rx.pcap,mbuf-size=8000'
+
+8. Launch virtio-user1 by below command::
+
+    ./x86_64-native-linuxapp-gcc/app/testpmd -n 4 -l 7-8 --socket-mem 1024,1024 \
+    --no-pci \
+    --vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=./vhost-net1,queues=1,packed_vq=0,mrg_rxbuf=0,in_order=0 \
+    -- -i --nb-cores=1 --txd=256 --rxd=256
+    testpmd>set burst 1
+    testpmd>start tx_first 27
+    testpmd>stop
+    testpmd>set burst 32
+    testpmd>start tx_first 7
+
+9. Quit pdump; the packets received by vhost are stored in pdump-vhost-rx.pcap. Check the headers and payload of all packets in pdump-virtio-rx.pcap and pdump-vhost-rx.pcap and ensure the contents are the same.
\ No newline at end of file
-- 
2.17.1



* Re: [dts] [PATCH v1] test_plans: add packed virtqueue test in vm2vm topology
  2019-12-19 19:36 [dts] [PATCH v1] test_plans: add packed virtqueue test in vm2vm topology Yinan
@ 2019-12-20  8:12 ` Tu, Lijuan
  0 siblings, 0 replies; 4+ messages in thread
From: Tu, Lijuan @ 2019-12-20  8:12 UTC (permalink / raw)
  To: Wang, Yinan, dts; +Cc: Wang, Yinan

applied

> -----Original Message-----
> From: dts [mailto:dts-bounces@dpdk.org] On Behalf Of Yinan
> Sent: Friday, December 20, 2019 3:37 AM
> To: dts@dpdk.org
> Cc: Wang, Yinan <yinan.wang@intel.com>
> Subject: [dts] [PATCH v1] test_plans: add packed virtqueue test in vm2vm
> topology
> 
> From: Wang Yinan <yinan.wang@intel.com>
> 
> Add packed virtqueue and split virtqueue test in
> vm2vm_virtio_user_test_plan.rst
> 
> Signed-off-by: Wang Yinan <yinan.wang@intel.com>
> ---
>  test_plans/vm2vm_virtio_user_test_plan.rst | 675 +++++++++++++++++++++
>  1 file changed, 675 insertions(+)
>  create mode 100644 test_plans/vm2vm_virtio_user_test_plan.rst
> 
> diff --git a/test_plans/vm2vm_virtio_user_test_plan.rst
> b/test_plans/vm2vm_virtio_user_test_plan.rst
> new file mode 100644
> index 0000000..a28ca92
> --- /dev/null
> +++ b/test_plans/vm2vm_virtio_user_test_plan.rst
> @@ -0,0 +1,675 @@
> +.. Copyright (c) <2019>, Intel Corporation
> +         All rights reserved.
> +
> +   Redistribution and use in source and binary forms, with or without
> +   modification, are permitted provided that the following conditions
> +   are met:
> +
> +   - Redistributions of source code must retain the above copyright
> +     notice, this list of conditions and the following disclaimer.
> +
> +   - Redistributions in binary form must reproduce the above copyright
> +     notice, this list of conditions and the following disclaimer in
> +     the documentation and/or other materials provided with the
> +     distribution.
> +
> +   - Neither the name of Intel Corporation nor the names of its
> +     contributors may be used to endorse or promote products derived
> +     from this software without specific prior written permission.
> +
> +   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND
> CONTRIBUTORS
> +   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
> +   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND
> FITNESS
> +   FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE
> +   COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT,
> INDIRECT,
> +   INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
> +   (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE
> GOODS OR
> +   SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
> +   HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
> CONTRACT,
> +   STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
> +   ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF
> ADVISED
> +   OF THE POSSIBILITY OF SUCH DAMAGE.
> +
> +======================================
> +vm2vm vhost-user/virtio-user test plan
> +======================================
> +
> +Description
> +===========
> +
> +This test plan covers split virtqueue vm2vm tests for the in-order mergeable, in-order non-mergeable, mergeable, non-mergeable, and vector_rx paths, plus packed virtqueue vm2vm tests for the in-order mergeable, in-order non-mergeable, mergeable, and non-mergeable paths. The plan also checks that packet payloads arrive intact. The packed virtqueue tests require a QEMU version newer than 4.2.0.
> +
> +Prerequisites
> +=============
> +
> +Enable the pcap library in the DPDK configuration and recompile::
> +
> +    --- a/config/common_base
> +    +++ b/config/common_base
> +    @@ -492,7 +492,7 @@ CONFIG_RTE_LIBRTE_PMD_NULL=y
> +     #
> +     # Compile software PMD backed by PCAP files
> +     #
> +    -CONFIG_RTE_LIBRTE_PMD_PCAP=n
> +    +CONFIG_RTE_LIBRTE_PMD_PCAP=y
> +
> +Then build DPDK.
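The configuration flip above can be scripted rather than edited by hand. A minimal sketch using only the Python standard library; the `config/common_base` path is the pre-meson DPDK layout assumed by this plan:

```python
from pathlib import Path

def enable_pcap_pmd(config_path):
    """Flip CONFIG_RTE_LIBRTE_PMD_PCAP from n to y in a DPDK config file."""
    cfg = Path(config_path)
    text = cfg.read_text()
    # Simple textual replace; the option appears exactly once in common_base.
    cfg.write_text(text.replace("CONFIG_RTE_LIBRTE_PMD_PCAP=n",
                                "CONFIG_RTE_LIBRTE_PMD_PCAP=y"))

# e.g. enable_pcap_pmd("config/common_base"), then rebuild DPDK.
```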
> +
> +Test flow
> +=========
> +Virtio-user <-> Vhost-user <-> Testpmd <-> Vhost-user <-> Virtio-user
> +
> +Test Case 1: packed virtqueue vm2vm mergeable path test
> +=======================================================
> +
> +1. Launch vhost by below command::
> +
> +    ./x86_64-native-linuxapp-gcc/app/testpmd -l 1-2 -n 4 --socket-mem 1024,1024 --no-pci \
> +    --vdev 'eth_vhost0,iface=vhost-net,queues=1' --vdev 'eth_vhost1,iface=vhost-net1,queues=1' -- \
> +    -i --nb-cores=1 --no-flush-rx
> +
> +2. Launch virtio-user1 by below command::
> +
> +    ./x86_64-native-linuxapp-gcc/app/testpmd -n 4 -l 7-8 --socket-mem 1024,1024 \
> +    --no-pci --file-prefix=virtio1 \
> +    --vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=./vhost-net1,queues=1,packed_vq=1,mrg_rxbuf=1,in_order=0 \
> +    -- -i --nb-cores=1 --txd=256 --rxd=256
> +    testpmd>set fwd rxonly
> +    testpmd>start
> +
> +3. Attach pdump secondary process to primary process by same file-prefix::
> +
> +    ./x86_64-native-linuxapp-gcc/app/dpdk-pdump -v --file-prefix=virtio1 -- --pdump 'device_id=net_virtio_user1,queue=*,rx-dev=./pdump-virtio-rx.pcap,mbuf-size=8000'
> +
> +4. Launch virtio-user0 and send 8k length packets::
> +
> +    ./x86_64-native-linuxapp-gcc/app/testpmd -n 4 -l 5-6 --socket-mem 1024,1024 \
> +    --no-pci --file-prefix=virtio \
> +    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,queues=1,packed_vq=1,mrg_rxbuf=1,in_order=0 \
> +    -- -i --nb-cores=1 --txd=256 --rxd=256
> +    testpmd>set burst 1
> +    testpmd>start tx_first 27
> +    testpmd>stop
> +    testpmd>set burst 32
> +    testpmd>start tx_first 7
> +    testpmd>stop
> +    testpmd>set txpkts 2000,2000,2000,2000
> +    testpmd>start tx_first 1
> +
> +5. Start vhost, then send 8k length packets by virtio-user0 again::
> +
> +    testpmd>stop
> +    testpmd>set txpkts 2000
> +    testpmd>start tx_first 1
> +
> +6. Quit pdump and all three testpmd instances; 284 packets should have been received by virtio-user1 in pdump-virtio-rx.pcap.
> +
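The packet-count checks in this plan (for example, the 284 packets expected above) can be verified programmatically instead of opening the capture in a viewer. A minimal sketch using only the Python standard library; it assumes the capture is in classic libpcap format (not pcapng), which is what the pdump step produces:

```python
import struct

def count_packets(pcap_path):
    """Count packet records in a classic libpcap capture file."""
    with open(pcap_path, "rb") as f:
        header = f.read(24)  # global header: magic, version, tz, sigfigs, snaplen, linktype
        if len(header) < 24:
            raise ValueError("truncated pcap global header")
        if header[:4] == b"\xd4\xc3\xb2\xa1":
            endian = "<"  # file written on a little-endian host
        elif header[:4] == b"\xa1\xb2\xc3\xd4":
            endian = ">"
        else:
            raise ValueError("not a classic libpcap file")
        count = 0
        while True:
            rec = f.read(16)  # per-record header: ts_sec, ts_usec, incl_len, orig_len
            if len(rec) < 16:
                break
            incl_len = struct.unpack(endian + "IIII", rec)[2]
            f.seek(incl_len, 1)  # skip the captured bytes
            count += 1
    return count

# e.g. assert count_packets("pdump-virtio-rx.pcap") == 284
```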
> +7. Launch testpmd by below command::
> +
> +    ./x86_64-native-linuxapp-gcc/app/testpmd -l 1-2 -n 4 --socket-mem 1024,1024 --no-pci --file-prefix=vhost \
> +    --vdev 'eth_vhost1,iface=vhost-net1,queues=1' -- \
> +    -i --nb-cores=1 --no-flush-rx
> +    testpmd>set fwd rxonly
> +    testpmd>start
> +
> +8. Attach pdump secondary process to primary process by same file-prefix::
> +
> +    ./x86_64-native-linuxapp-gcc/app/dpdk-pdump -v --file-prefix=vhost -- --pdump 'port=0,queue=*,rx-dev=/root/pdump-vhost-rx.pcap,mbuf-size=8000'
> +
> +9. Launch virtio-user1 by below command::
> +
> +    ./x86_64-native-linuxapp-gcc/app/testpmd -n 4 -l 7-8 --socket-mem 1024,1024 --no-pci \
> +    --vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=./vhost-net1,queues=1,packed_vq=1,mrg_rxbuf=1,in_order=0 \
> +    -- -i --nb-cores=1 --txd=256 --rxd=256
> +    testpmd>set burst 1
> +    testpmd>start tx_first 27
> +    testpmd>stop
> +    testpmd>set burst 32
> +    testpmd>start tx_first 7
> +    testpmd>stop
> +    testpmd>set burst 1
> +    testpmd>set txpkts 2000,2000,2000,2000
> +    testpmd>start tx_first 1
> +    testpmd>stop
> +    testpmd>set burst 32
> +    testpmd>set txpkts 2000
> +    testpmd>start tx_first 1
> +
> +10. Quit pdump; the packets received by vhost are saved in pdump-vhost-rx.pcap. Check the headers and payloads of all packets in pdump-virtio-rx.pcap and pdump-vhost-rx.pcap and ensure the contents are identical.
> +
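The final payload comparison can also be automated. A standard-library-only sketch that compares the two captures as order-insensitive multisets of raw frames (file names taken from the pdump steps; classic libpcap format assumed):

```python
import struct

def read_frames(pcap_path):
    """Return the captured bytes of every record in a classic libpcap file."""
    with open(pcap_path, "rb") as f:
        header = f.read(24)  # global header; first 4 bytes give byte order
        endian = "<" if header[:4] == b"\xd4\xc3\xb2\xa1" else ">"
        frames = []
        while True:
            rec = f.read(16)  # ts_sec, ts_usec, incl_len, orig_len
            if len(rec) < 16:
                break
            incl_len = struct.unpack(endian + "IIII", rec)[2]
            frames.append(f.read(incl_len))
    return frames

def captures_match(virtio_pcap, vhost_pcap):
    """True when both captures hold the same frames, regardless of order."""
    return sorted(read_frames(virtio_pcap)) == sorted(read_frames(vhost_pcap))

# e.g. assert captures_match("pdump-virtio-rx.pcap", "/root/pdump-vhost-rx.pcap")
```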
> +Test Case 2: packed virtqueue vm2vm inorder mergeable path test
> +===============================================================
> +
> +1. Launch testpmd by below command::
> +
> +    ./testpmd -l 1-2 -n 4 --socket-mem 1024,1024 --no-pci \
> +    --vdev 'eth_vhost0,iface=vhost-net,queues=1' --vdev 'eth_vhost1,iface=vhost-net1,queues=1' -- \
> +    -i --nb-cores=1 --no-flush-rx
> +
> +2. Launch virtio-user1 by below command::
> +
> +    ./testpmd -n 4 -l 7-8 --socket-mem 1024,1024 \
> +    --no-pci --file-prefix=virtio1 \
> +    --vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=./vhost-net1,queues=1,packed_vq=1,mrg_rxbuf=1,in_order=1 \
> +    -- -i --nb-cores=1 --txd=256 --rxd=256
> +    testpmd>set fwd rxonly
> +    testpmd>start
> +
> +3. Attach pdump secondary process to primary process by same file-prefix::
> +
> +    ./x86_64-native-linuxapp-gcc/app/dpdk-pdump -v --file-prefix=virtio1 -- --pdump 'device_id=net_virtio_user1,queue=*,rx-dev=/root/pdump-rx.pcap,mbuf-size=8000'
> +
> +4. Launch virtio-user0 and send 8k length packets::
> +
> +    ./testpmd -n 4 -l 5-6 --socket-mem 1024,1024 \
> +    --no-pci --file-prefix=virtio \
> +    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,queues=1,packed_vq=1,mrg_rxbuf=1,in_order=1 \
> +    -- -i --nb-cores=1 --txd=256 --rxd=256
> +    testpmd>set burst 1
> +    testpmd>start tx_first 27
> +    testpmd>stop
> +    testpmd>set burst 32
> +    testpmd>start tx_first 7
> +    testpmd>stop
> +    testpmd>set burst 1
> +    testpmd>set txpkts 2000,2000,2000,2000
> +    testpmd>start tx_first 1
> +
> +5. Start vhost, then quit pdump and all three testpmd instances; 252 packets should have been received by virtio-user1 in pdump-virtio-rx.pcap.
> +
> +6. Launch testpmd by below command::
> +
> +    ./x86_64-native-linuxapp-gcc/app/testpmd -l 1-2 -n 4 --socket-mem 1024,1024 --no-pci --file-prefix=vhost \
> +    --vdev 'eth_vhost1,iface=vhost-net1,queues=1' -- \
> +    -i --nb-cores=1 --no-flush-rx
> +    testpmd>set fwd rxonly
> +    testpmd>start
> +
> +7. Attach pdump secondary process to primary process by same file-prefix::
> +
> +    ./x86_64-native-linuxapp-gcc/app/dpdk-pdump -v --file-prefix=vhost -- --pdump 'port=0,queue=*,rx-dev=/root/pdump-vhost-rx.pcap,mbuf-size=8000'
> +
> +8. Launch virtio-user1 by below command::
> +
> +    ./x86_64-native-linuxapp-gcc/app/testpmd -n 4 -l 7-8 --socket-mem 1024,1024 \
> +    --no-pci \
> +    --vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=./vhost-net1,queues=1,packed_vq=1,mrg_rxbuf=1,in_order=1 \
> +    -- -i --nb-cores=1 --txd=256 --rxd=256
> +    testpmd>set burst 1
> +    testpmd>start tx_first 27
> +    testpmd>stop
> +    testpmd>set burst 32
> +    testpmd>start tx_first 7
> +    testpmd>stop
> +    testpmd>set burst 1
> +    testpmd>set txpkts 2000,2000,2000,2000
> +    testpmd>start tx_first 1
> +
> +9. Quit pdump; the packets received by vhost are saved in pdump-vhost-rx.pcap. Check the headers and payloads of all packets in pdump-virtio-rx.pcap and pdump-vhost-rx.pcap and ensure the contents are identical.
> +
> +Test Case 3: packed virtqueue vm2vm non-mergeable path test
> +===========================================================
> +
> +1. Launch testpmd by below command::
> +
> +    ./testpmd -l 1-2 -n 4 --socket-mem 1024,1024 --no-pci \
> +    --vdev 'eth_vhost0,iface=vhost-net,queues=1' --vdev 'eth_vhost1,iface=vhost-net1,queues=1' -- \
> +    -i --nb-cores=1 --no-flush-rx
> +
> +2. Launch virtio-user1 by below command::
> +
> +    ./testpmd -n 4 -l 7-8 --socket-mem 1024,1024 \
> +    --no-pci --file-prefix=virtio1 \
> +    --vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=./vhost-net1,queues=1,packed_vq=1,mrg_rxbuf=0,in_order=0 \
> +    -- -i --nb-cores=1 --txd=256 --rxd=256
> +
> +3. Attach pdump secondary process to primary process by same file-prefix::
> +
> +    ./x86_64-native-linuxapp-gcc/app/dpdk-pdump -v --file-prefix=virtio1 -- --pdump 'device_id=net_virtio_user1,queue=*,rx-dev=/root/pdump-rx.pcap,mbuf-size=8000'
> +
> +4. Launch virtio-user0 and send 8k length packets::
> +
> +    ./testpmd -n 4 -l 5-6 --socket-mem 1024,1024 \
> +    --no-pci --file-prefix=virtio \
> +    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,queues=1,packed_vq=1,mrg_rxbuf=0,in_order=0 \
> +    -- -i --nb-cores=1 --txd=256 --rxd=256
> +    testpmd>set burst 1
> +    testpmd>start tx_first 27
> +    testpmd>stop
> +    testpmd>set burst 32
> +    testpmd>start tx_first 7
> +    testpmd>stop
> +    testpmd>set txpkts 2000,2000,2000,2000
> +    testpmd>start tx_first 1
> +
> +5. Start vhost, then quit pdump and all three testpmd instances; 251 packets should have been received by virtio-user1 in pdump-virtio-rx.pcap.
> +
> +6. Launch testpmd by below command::
> +
> +    ./x86_64-native-linuxapp-gcc/app/testpmd -l 1-2 -n 4 --socket-mem 1024,1024 --no-pci --file-prefix=vhost \
> +    --vdev 'eth_vhost1,iface=vhost-net1,queues=1' -- \
> +    -i --nb-cores=1 --no-flush-rx
> +    testpmd>set fwd rxonly
> +    testpmd>start
> +
> +7. Attach pdump secondary process to primary process by same file-prefix::
> +
> +    ./x86_64-native-linuxapp-gcc/app/dpdk-pdump -v --file-prefix=vhost -- --pdump 'port=0,queue=*,rx-dev=/root/pdump-vhost-rx.pcap,mbuf-size=8000'
> +
> +8. Launch virtio-user1 by below command::
> +
> +    ./x86_64-native-linuxapp-gcc/app/testpmd -n 4 -l 7-8 --socket-mem 1024,1024 \
> +    --no-pci \
> +    --vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=./vhost-net1,queues=1,packed_vq=1,mrg_rxbuf=0,in_order=0 \
> +    -- -i --nb-cores=1 --txd=256 --rxd=256
> +    testpmd>set burst 1
> +    testpmd>start tx_first 27
> +    testpmd>stop
> +    testpmd>set burst 32
> +    testpmd>start tx_first 7
> +
> +9. Quit pdump; the packets received by vhost are saved in pdump-vhost-rx.pcap. Check the headers and payloads of all packets in pdump-virtio-rx.pcap and pdump-vhost-rx.pcap and ensure the contents are identical.
> +
> +Test Case 4: packed virtqueue vm2vm inorder non-mergeable path test
> +======================================================================
> +
> +1. Launch testpmd by below command::
> +
> +    ./testpmd -l 1-2 -n 4 --socket-mem 1024,1024 --no-pci \
> +    --vdev 'eth_vhost0,iface=vhost-net,queues=1' --vdev 'eth_vhost1,iface=vhost-net1,queues=1' -- \
> +    -i --nb-cores=1 --no-flush-rx
> +
> +2. Launch virtio-user1 by below command::
> +
> +    ./testpmd -n 4 -l 7-8 --socket-mem 1024,1024 \
> +    --no-pci --file-prefix=virtio1 \
> +    --vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=./vhost-net1,queues=1,packed_vq=1,mrg_rxbuf=0,in_order=1 \
> +    -- -i --nb-cores=1 --txd=256 --rxd=256
> +    testpmd>set fwd rxonly
> +    testpmd>start
> +
> +3. Attach pdump secondary process to primary process by same file-prefix::
> +
> +    ./x86_64-native-linuxapp-gcc/app/dpdk-pdump -v --file-prefix=virtio1 -- --pdump 'device_id=net_virtio_user1,queue=*,rx-dev=/root/pdump-rx.pcap,mbuf-size=8000'
> +
> +4. Launch virtio-user0 and send 8k length packets::
> +
> +    ./testpmd -n 4 -l 5-6 --socket-mem 1024,1024 \
> +    --no-pci --file-prefix=virtio \
> +    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,queues=1,packed_vq=1,mrg_rxbuf=0,in_order=1 \
> +    -- -i --nb-cores=1 --txd=256 --rxd=256
> +    testpmd>set burst 1
> +    testpmd>start tx_first 27
> +    testpmd>stop
> +    testpmd>set burst 32
> +    testpmd>start tx_first 7
> +    testpmd>stop
> +    testpmd>set txpkts 2000,2000,2000,2000
> +    testpmd>start tx_first 1
> +
> +5. Start vhost, then quit pdump and all three testpmd instances; 251 packets should have been received by virtio-user1 in pdump-virtio-rx.pcap.
> +
> +6. Launch testpmd by below command::
> +
> +    ./x86_64-native-linuxapp-gcc/app/testpmd -l 1-2 -n 4 --socket-mem 1024,1024 --no-pci --file-prefix=vhost \
> +    --vdev 'eth_vhost1,iface=vhost-net1,queues=1' -- \
> +    -i --nb-cores=1 --no-flush-rx
> +    testpmd>set fwd rxonly
> +    testpmd>start
> +
> +7. Attach pdump secondary process to primary process by same file-prefix::
> +
> +    ./x86_64-native-linuxapp-gcc/app/dpdk-pdump -v --file-prefix=vhost -- --pdump 'port=0,queue=*,rx-dev=/root/pdump-vhost-rx.pcap,mbuf-size=8000'
> +
> +8. Launch virtio-user1 by below command::
> +
> +    ./x86_64-native-linuxapp-gcc/app/testpmd -n 4 -l 7-8 --socket-mem 1024,1024 \
> +    --no-pci \
> +    --vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=./vhost-net1,queues=1,packed_vq=1,mrg_rxbuf=0,in_order=1 \
> +    -- -i --nb-cores=1 --txd=256 --rxd=256
> +    testpmd>set burst 1
> +    testpmd>start tx_first 27
> +    testpmd>stop
> +    testpmd>set burst 32
> +    testpmd>start tx_first 7
> +
> +9. Quit pdump; the packets received by vhost are saved in pdump-vhost-rx.pcap. Check the headers and payloads of all packets in pdump-virtio-rx.pcap and pdump-vhost-rx.pcap and ensure the contents are identical.
> +
> +Test Case 5: split virtqueue vm2vm mergeable path test
> +======================================================
> +
> +1. Launch vhost by below command::
> +
> +    ./x86_64-native-linuxapp-gcc/app/testpmd -l 1-2 -n 4 --socket-mem 1024,1024 --no-pci \
> +    --vdev 'eth_vhost0,iface=vhost-net,queues=1' --vdev 'eth_vhost1,iface=vhost-net1,queues=1' -- \
> +    -i --nb-cores=1 --no-flush-rx
> +
> +2. Launch virtio-user1 by below command::
> +
> +    ./x86_64-native-linuxapp-gcc/app/testpmd -n 4 -l 7-8 --socket-mem 1024,1024 \
> +    --no-pci --file-prefix=virtio1 \
> +    --vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=./vhost-net1,queues=1,packed_vq=0,mrg_rxbuf=1,in_order=0 \
> +    -- -i --nb-cores=1 --txd=256 --rxd=256
> +    testpmd>set fwd rxonly
> +    testpmd>start
> +
> +3. Attach pdump secondary process to primary process by same file-prefix::
> +
> +    ./x86_64-native-linuxapp-gcc/app/dpdk-pdump -v --file-prefix=virtio1 -- --pdump 'device_id=net_virtio_user1,queue=*,rx-dev=./pdump-virtio-rx.pcap,mbuf-size=8000'
> +
> +4. Launch virtio-user0 and send 8k length packets::
> +
> +    ./x86_64-native-linuxapp-gcc/app/testpmd -n 4 -l 5-6 --socket-mem 1024,1024 \
> +    --no-pci --file-prefix=virtio \
> +    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,queues=1,packed_vq=0,mrg_rxbuf=1,in_order=0 \
> +    -- -i --nb-cores=1 --txd=256 --rxd=256
> +    testpmd>set burst 1
> +    testpmd>start tx_first 27
> +    testpmd>stop
> +    testpmd>set burst 32
> +    testpmd>start tx_first 7
> +    testpmd>stop
> +    testpmd>set txpkts 2000,2000,2000,2000
> +    testpmd>start tx_first 1
> +
> +5. Start vhost, then send 8k length packets by virtio-user0 again::
> +
> +    testpmd>stop
> +    testpmd>set txpkts 2000
> +    testpmd>start tx_first 1
> +
> +6. Quit pdump and all three testpmd instances; 288 packets should have been received by virtio-user1 in pdump-virtio-rx.pcap.
> +
> +7. Launch testpmd by below command::
> +
> +    ./x86_64-native-linuxapp-gcc/app/testpmd -l 1-2 -n 4 --socket-mem 1024,1024 --no-pci --file-prefix=vhost \
> +    --vdev 'eth_vhost1,iface=vhost-net1,queues=1' -- \
> +    -i --nb-cores=1 --no-flush-rx
> +    testpmd>set fwd rxonly
> +    testpmd>start
> +
> +8. Attach pdump secondary process to primary process by same file-prefix::
> +
> +    ./x86_64-native-linuxapp-gcc/app/dpdk-pdump -v --file-prefix=vhost -- --pdump 'port=0,queue=*,rx-dev=/root/pdump-vhost-rx.pcap,mbuf-size=8000'
> +
> +9. Launch virtio-user1 by below command::
> +
> +    ./x86_64-native-linuxapp-gcc/app/testpmd -n 4 -l 7-8 --socket-mem 1024,1024 \
> +    --no-pci \
> +    --vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=./vhost-net1,queues=1,packed_vq=0,mrg_rxbuf=1,in_order=0 \
> +    -- -i --nb-cores=1 --txd=256 --rxd=256
> +    testpmd>set burst 1
> +    testpmd>start tx_first 27
> +    testpmd>stop
> +    testpmd>set burst 32
> +    testpmd>start tx_first 7
> +    testpmd>stop
> +    testpmd>set burst 5
> +    testpmd>set txpkts 2000,2000,2000,2000
> +    testpmd>start tx_first 1
> +    testpmd>stop
> +    testpmd>set burst 32
> +    testpmd>set txpkts 2000
> +    testpmd>start tx_first 1
> +
> +10. Quit pdump; the packets received by vhost are saved in pdump-vhost-rx.pcap. Check the headers and payloads of all packets in pdump-virtio-rx.pcap and pdump-vhost-rx.pcap and ensure the contents are identical.
> +
> +Test Case 6: split virtqueue vm2vm inorder mergeable path test
> +==============================================================
> +
> +1. Launch testpmd by below command::
> +
> +    ./testpmd -l 1-2 -n 4 --socket-mem 1024,1024 --no-pci \
> +    --vdev 'eth_vhost0,iface=vhost-net,queues=1' --vdev 'eth_vhost1,iface=vhost-net1,queues=1' -- \
> +    -i --nb-cores=1 --no-flush-rx
> +
> +2. Launch virtio-user1 by below command::
> +
> +    ./testpmd -n 4 -l 7-8 --socket-mem 1024,1024 \
> +    --no-pci --file-prefix=virtio1 \
> +    --vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=./vhost-net1,queues=1,packed_vq=0,mrg_rxbuf=1,in_order=1 \
> +    -- -i --nb-cores=1 --txd=256 --rxd=256
> +    testpmd>set fwd rxonly
> +    testpmd>start
> +
> +3. Attach pdump secondary process to primary process by same file-prefix::
> +
> +    ./x86_64-native-linuxapp-gcc/app/dpdk-pdump -v --file-prefix=virtio1 -- --pdump 'device_id=net_virtio_user1,queue=*,rx-dev=/root/pdump-rx.pcap,mbuf-size=8000'
> +
> +4. Launch virtio-user0 and send 8k length packets::
> +
> +    ./testpmd -n 4 -l 5-6 --socket-mem 1024,1024 \
> +    --no-pci --file-prefix=virtio \
> +    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,queues=1,packed_vq=0,mrg_rxbuf=1,in_order=1 \
> +    -- -i --nb-cores=1 --txd=256 --rxd=256
> +    testpmd>set burst 1
> +    testpmd>start tx_first 27
> +    testpmd>stop
> +    testpmd>set burst 32
> +    testpmd>start tx_first 7
> +    testpmd>stop
> +    testpmd>set txpkts 2000,2000,2000,2000
> +    testpmd>start tx_first 1
> +
> +5. Start vhost, then quit pdump and all three testpmd instances; 256 packets should have been received by virtio-user1 in pdump-virtio-rx.pcap.
> +
> +6. Launch testpmd by below command::
> +
> +    ./x86_64-native-linuxapp-gcc/app/testpmd -l 1-2 -n 4 --socket-mem 1024,1024 --no-pci --file-prefix=vhost \
> +    --vdev 'eth_vhost1,iface=vhost-net1,queues=1' -- \
> +    -i --nb-cores=1 --no-flush-rx
> +    testpmd>set fwd rxonly
> +    testpmd>start
> +
> +7. Attach pdump secondary process to primary process by same file-prefix::
> +
> +    ./x86_64-native-linuxapp-gcc/app/dpdk-pdump -v --file-prefix=vhost -- --pdump 'port=0,queue=*,rx-dev=/root/pdump-vhost-rx.pcap,mbuf-size=8000'
> +
> +8. Launch virtio-user1 by below command::
> +
> +    ./x86_64-native-linuxapp-gcc/app/testpmd -n 4 -l 7-8 --socket-mem 1024,1024 \
> +    --no-pci \
> +    --vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=./vhost-net1,queues=1,packed_vq=0,mrg_rxbuf=1,in_order=1 \
> +    -- -i --nb-cores=1 --txd=256 --rxd=256
> +    testpmd>set burst 1
> +    testpmd>start tx_first 27
> +    testpmd>stop
> +    testpmd>set burst 32
> +    testpmd>start tx_first 7
> +    testpmd>stop
> +    testpmd>set burst 5
> +    testpmd>set txpkts 2000,2000,2000,2000
> +    testpmd>start tx_first 1
> +
> +9. Quit pdump; the packets received by vhost are saved in pdump-vhost-rx.pcap. Check the headers and payloads of all packets in pdump-virtio-rx.pcap and pdump-vhost-rx.pcap and ensure the contents are identical.
> +
> +Test Case 7: split virtqueue vm2vm non-mergeable path test
> +==========================================================
> +
> +1. Launch testpmd by below command::
> +
> +    ./testpmd -l 1-2 -n 4 --socket-mem 1024,1024 --no-pci \
> +    --vdev 'eth_vhost0,iface=vhost-net,queues=1' --vdev 'eth_vhost1,iface=vhost-net1,queues=1' -- \
> +    -i --nb-cores=1 --no-flush-rx
> +
> +2. Launch virtio-user1 by below command::
> +
> +    ./testpmd -n 4 -l 7-8 --socket-mem 1024,1024 \
> +    --no-pci --file-prefix=virtio1 \
> +    --vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=./vhost-net1,queues=1,packed_vq=0,mrg_rxbuf=0,in_order=0 \
> +    -- -i --nb-cores=1 --txd=256 --rxd=256 --enable-hw-vlan-strip
> +
> +3. Attach pdump secondary process to primary process by same file-prefix::
> +
> +    ./x86_64-native-linuxapp-gcc/app/dpdk-pdump -v --file-prefix=virtio1 -- --pdump 'device_id=net_virtio_user1,queue=*,rx-dev=/root/pdump-rx.pcap,mbuf-size=8000'
> +
> +4. Launch virtio-user0 and send 8k length packets::
> +
> +    ./testpmd -n 4 -l 5-6 --socket-mem 1024,1024 \
> +    --no-pci --file-prefix=virtio \
> +    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,queues=1,packed_vq=0,mrg_rxbuf=0,in_order=0 \
> +    -- -i --nb-cores=1 --txd=256 --rxd=256 --enable-hw-vlan-strip
> +    testpmd>set burst 1
> +    testpmd>start tx_first 27
> +    testpmd>stop
> +    testpmd>set burst 32
> +    testpmd>start tx_first 7
> +    testpmd>stop
> +    testpmd>set txpkts 2000,2000,2000,2000
> +    testpmd>start tx_first 1
> +
> +5. Start vhost, then quit pdump and all three testpmd instances; 251 packets should have been received by virtio-user1 in pdump-virtio-rx.pcap.
> +
> +6. Launch testpmd by below command::
> +
> +    ./x86_64-native-linuxapp-gcc/app/testpmd -l 1-2 -n 4 --socket-mem 1024,1024 --no-pci --file-prefix=vhost \
> +    --vdev 'eth_vhost1,iface=vhost-net1,queues=1' -- \
> +    -i --nb-cores=1 --no-flush-rx
> +    testpmd>set fwd rxonly
> +    testpmd>start
> +
> +7. Attach pdump secondary process to primary process by same file-prefix::
> +
> +    ./x86_64-native-linuxapp-gcc/app/dpdk-pdump -v --file-prefix=vhost -- --pdump 'port=0,queue=*,rx-dev=/root/pdump-vhost-rx.pcap,mbuf-size=8000'
> +
> +8. Launch virtio-user1 by below command::
> +
> +    ./x86_64-native-linuxapp-gcc/app/testpmd -n 4 -l 7-8 --socket-mem 1024,1024 \
> +    --no-pci \
> +    --vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=./vhost-net1,queues=1,packed_vq=0,mrg_rxbuf=0,in_order=0 \
> +    -- -i --nb-cores=1 --txd=256 --rxd=256 --enable-hw-vlan-strip
> +    testpmd>set burst 1
> +    testpmd>start tx_first 27
> +    testpmd>stop
> +    testpmd>set burst 32
> +    testpmd>start tx_first 7
> +
> +9. Quit pdump; the packets received by vhost are saved in pdump-vhost-rx.pcap. Check the headers and payloads of all packets in pdump-virtio-rx.pcap and pdump-vhost-rx.pcap and ensure the contents are identical.
> +
> +Test Case 8: split virtqueue vm2vm inorder non-mergeable path test
> +======================================================================
> +
> +1. Launch testpmd by below command::
> +
> +    ./testpmd -l 1-2 -n 4 --socket-mem 1024,1024 --no-pci \
> +    --vdev 'eth_vhost0,iface=vhost-net,queues=1' --vdev 'eth_vhost1,iface=vhost-net1,queues=1' -- \
> +    -i --nb-cores=1 --no-flush-rx
> +
> +2. Launch virtio-user1 by below command::
> +
> +    ./testpmd -n 4 -l 7-8 --socket-mem 1024,1024 \
> +    --no-pci --file-prefix=virtio1 \
> +    --vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=./vhost-net1,queues=1,packed_vq=0,mrg_rxbuf=0,in_order=1 \
> +    -- -i --nb-cores=1 --txd=256 --rxd=256
> +    testpmd>set fwd rxonly
> +    testpmd>start
> +
> +3. Attach pdump secondary process to primary process by same file-prefix::
> +
> +    ./x86_64-native-linuxapp-gcc/app/dpdk-pdump -v --file-prefix=virtio1 -- --pdump 'device_id=net_virtio_user1,queue=*,rx-dev=/root/pdump-rx.pcap,mbuf-size=8000'
> +
> +4. Launch virtio-user0 and send 8k length packets::
> +
> +    ./testpmd -n 4 -l 5-6 --socket-mem 1024,1024 \
> +    --no-pci --file-prefix=virtio \
> +    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,queues=1,packed_vq=0,mrg_rxbuf=0,in_order=1 \
> +    -- -i --nb-cores=1 --txd=256 --rxd=256
> +    testpmd>set burst 1
> +    testpmd>start tx_first 27
> +    testpmd>stop
> +    testpmd>set burst 32
> +    testpmd>start tx_first 7
> +    testpmd>stop
> +    testpmd>set txpkts 2000,2000,2000,2000
> +    testpmd>start tx_first 1
> +
> +5. Start vhost, then quit pdump and all three testpmd instances; 251 packets should have been received by virtio-user1 in pdump-virtio-rx.pcap.
> +
> +6. Launch testpmd by below command::
> +
> +    ./x86_64-native-linuxapp-gcc/app/testpmd -l 1-2 -n 4 --socket-mem 1024,1024 --no-pci --file-prefix=vhost \
> +    --vdev 'eth_vhost1,iface=vhost-net1,queues=1' -- \
> +    -i --nb-cores=1 --no-flush-rx
> +    testpmd>set fwd rxonly
> +    testpmd>start
> +
> +7. Attach pdump secondary process to primary process by same file-prefix::
> +
> +    ./x86_64-native-linuxapp-gcc/app/dpdk-pdump -v --file-prefix=vhost -- --pdump 'port=0,queue=*,rx-dev=/root/pdump-vhost-rx.pcap,mbuf-size=8000'
> +
> +8. Launch virtio-user1 by below command::
> +
> +    ./x86_64-native-linuxapp-gcc/app/testpmd -n 4 -l 7-8 --socket-mem 1024,1024 \
> +    --no-pci \
> +    --vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=./vhost-net1,queues=1,packed_vq=0,mrg_rxbuf=0,in_order=1 \
> +    -- -i --nb-cores=1 --txd=256 --rxd=256
> +    testpmd>set burst 1
> +    testpmd>start tx_first 27
> +    testpmd>stop
> +    testpmd>set burst 32
> +    testpmd>start tx_first 7
> +
> +9. Quit pdump; the packets received by vhost are saved in pdump-vhost-rx.pcap. Check the headers and payloads of all packets in pdump-virtio-rx.pcap and pdump-vhost-rx.pcap and ensure the contents are identical.
> +
> +Test Case 9: split virtqueue vm2vm vector_rx path test
> +======================================================
> +
> +1. Launch testpmd by below command::
> +
> +    ./testpmd -l 1-2 -n 4 --socket-mem 1024,1024 --no-pci \
> +    --vdev 'eth_vhost0,iface=vhost-net,queues=1' --vdev 'eth_vhost1,iface=vhost-net1,queues=1' -- \
> +    -i --nb-cores=1 --no-flush-rx
> +
> +2. Launch virtio-user1 by below command::
> +
> +    ./testpmd -n 4 -l 7-8 --socket-mem 1024,1024 \
> +    --no-pci --file-prefix=virtio1 \
> +    --vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=./vhost-net1,queues=1,packed_vq=0,mrg_rxbuf=0,in_order=0 \
> +    -- -i --nb-cores=1 --txd=256 --rxd=256
> +
> +3. Attach pdump secondary process to primary process by same file-prefix::
> +
> +    ./x86_64-native-linuxapp-gcc/app/dpdk-pdump -v --file-prefix=virtio1 -- --pdump 'device_id=net_virtio_user1,queue=*,rx-dev=/root/pdump-rx.pcap,mbuf-size=8000'
> +
> +4. Launch virtio-user0 and send 8k length packets::
> +
> +    ./testpmd -n 4 -l 5-6 --socket-mem 1024,1024 \
> +    --no-pci --file-prefix=virtio \
> +    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,queues=1,packed_vq=0,mrg_rxbuf=0,in_order=0 \
> +    -- -i --nb-cores=1 --txd=256 --rxd=256
> +    testpmd>set burst 1
> +    testpmd>start tx_first 27
> +    testpmd>stop
> +    testpmd>set burst 32
> +    testpmd>start tx_first 7
> +    testpmd>stop
> +    testpmd>set txpkts 2000,2000,2000,2000
> +    testpmd>start tx_first 1
> +
> +5. Start vhost, then quit pdump and all three testpmd instances; 251 packets should have been received by virtio-user1 in pdump-virtio-rx.pcap.
> +
> +6. Launch testpmd by below command::
> +
> +    ./x86_64-native-linuxapp-gcc/app/testpmd -l 1-2 -n 4 --socket-mem 1024,1024 --no-pci --file-prefix=vhost \
> +    --vdev 'eth_vhost1,iface=vhost-net1,queues=1' -- \
> +    -i --nb-cores=1 --no-flush-rx
> +    testpmd>set fwd rxonly
> +    testpmd>start
> +
> +7. Attach pdump secondary process to primary process by same file-prefix::
> +
> +    ./x86_64-native-linuxapp-gcc/app/dpdk-pdump -v --file-prefix=vhost -- --pdump 'port=0,queue=*,rx-dev=/root/pdump-vhost-rx.pcap,mbuf-size=8000'
> +
> +8. Launch virtio-user1 by below command::
> +
> +    ./x86_64-native-linuxapp-gcc/app/testpmd -n 4 -l 7-8 --socket-mem 1024,1024 \
> +    --no-pci \
> +    --vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=./vhost-net1,queues=1,packed_vq=0,mrg_rxbuf=0,in_order=0 \
> +    -- -i --nb-cores=1 --txd=256 --rxd=256
> +    testpmd>set burst 1
> +    testpmd>start tx_first 27
> +    testpmd>stop
> +    testpmd>set burst 32
> +    testpmd>start tx_first 7
> +
> +9. Quit pdump; the packets received by vhost are saved in pdump-vhost-rx.pcap. Check the headers and payloads of all packets in pdump-virtio-rx.pcap and pdump-vhost-rx.pcap and ensure the contents are identical.
> \ No newline at end of file
> --
> 2.17.1


^ permalink raw reply	[flat|nested] 4+ messages in thread

* Re: [dts] [PATCH v1] test_plans: add packed virtqueue test in vm2vm topology
  2019-12-19 19:41 Yinan
@ 2019-12-20  8:13 ` Tu, Lijuan
  0 siblings, 0 replies; 4+ messages in thread
From: Tu, Lijuan @ 2019-12-20  8:13 UTC (permalink / raw)
  To: Wang, Yinan, dts; +Cc: Wang, Yinan

applied

> -----Original Message-----
> From: dts [mailto:dts-bounces@dpdk.org] On Behalf Of Yinan
> Sent: Friday, December 20, 2019 3:42 AM
> To: dts@dpdk.org
> Cc: Wang, Yinan <yinan.wang@intel.com>
> Subject: [dts] [PATCH v1] test_plans: add packed virtqueue test in vm2vm
> topology
> 
> From: Wang Yinan <yinan.wang@intel.com>
> 
> Add packed virtqueue test in vm2vm_virtio_net_perf_test_plan.rst
> 
> Signed-off-by: Wang Yinan <yinan.wang@intel.com>
> ---
>  .../vm2vm_virtio_net_perf_test_plan.rst       | 285 +++++++++++++++---
>  1 file changed, 242 insertions(+), 43 deletions(-)
> 
> diff --git a/test_plans/vm2vm_virtio_net_perf_test_plan.rst
> b/test_plans/vm2vm_virtio_net_perf_test_plan.rst
> index 96f0cc4..0fe8400 100644
> --- a/test_plans/vm2vm_virtio_net_perf_test_plan.rst
> +++ b/test_plans/vm2vm_virtio_net_perf_test_plan.rst
> @@ -8,7 +8,7 @@
>     - Redistributions of source code must retain the above copyright
>       notice, this list of conditions and the following disclaimer.
> 
> -   - Redistributions in binary form must reproduce the above copyright
> +   - Redistributions in binary forim must reproduce the above copyright
>       notice, this list of conditions and the following disclaimer in
>       the documentation and/or other materials provided with the
>       distribution.
> @@ -30,7 +30,6 @@
>     ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF
> ADVISED
>     OF THE POSSIBILITY OF SUCH DAMAGE.
> 
> -
>  =====================================
>  vm2vm vhost-user/virtio-net test plan
>  =====================================
> @@ -38,15 +37,15 @@ vm2vm vhost-user/virtio-net test plan  Description
> ===========
> 
> -The feature enabled the DPDK Vhost TX offload (TSO and UFO). The feature
> added the negotiation between DPDK user space vhost and virtio-net, so we
> will verify the TSO/cksum in the TCP/IP stack enabled environment and
> UFO/cksum in the UDP/IP stack enabled environment with vm2vm vhost-
> user/virtio-net normal path. Also add case to check the payload of large
> packet is valid by scp with vm2vm vhost-user/virtio-net mergeable path test.
> +This test plan test vhost tx offload (TSO and UFO) function by verifing the
> TSO/cksum in the TCP/IP stack enabled environment and UFO/cksum in the
> UDP/IP stack enabled environment with vm2vm split ring and packed ring
> vhost-user/virtio-net non-mergeable path. Also add case to check the
> payload of large packet is valid with vm2vm split ring and packed ring vhost-
> user/virtio-net mergeable and non-mergeable dequeue zero copy test. For
> packed virtqueue test, need using qemu version > 4.2.0.
> 
>  Test flow
>  =========
> 
>  Virtio-net <-> Vhost <-> Testpmd <-> Vhost <-> Virtio-net
> 
> -Test Case 1: VM2VM vhost-user/virtio-net test with tcp traffic -
> ==============================================================
> +Test Case 1: VM2VM split ring vhost-user/virtio-net test with tcp
> +traffic
> +===============================================================
> ========
> +==
> 
>  1. Launch the Vhost sample by below commands::
> 
> @@ -56,7 +55,6 @@ Test Case 1: VM2VM vhost-user/virtio-net test with tcp traffic
> 
>  2. Launch VM1 and VM2::
> 
> -    taskset -c 32-33 \
>      qemu-system-x86_64 -name us-vhost-vm1 \
>       -cpu host -enable-kvm -m 2048 -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on -numa node,memdev=mem -mem-prealloc \
>       -smp cores=2,sockets=1 -drive file=/home/osimg/ubuntu16-1.img  \
> @@ -65,7 +63,6 @@ Test Case 1: VM2VM vhost-user/virtio-net test with tcp traffic
>       -device virtio-net-pci,mac=52:54:00:00:00:01,netdev=mynet1,mrg_rxbuf=off,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on \
>       -vnc :12 -daemonize
> 
> -    taskset -c 34-35 \
>      qemu-system-x86_64 -name us-vhost-vm2 \
>       -cpu host -enable-kvm -m 2048 -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on -numa node,memdev=mem -mem-prealloc \
>       -smp cores=2,sockets=1 -drive file=/home/osimg/ubuntu16-2.img \
> @@ -74,12 +71,12 @@ Test Case 1: VM2VM vhost-user/virtio-net test with tcp traffic
>       -device virtio-net-pci,mac=52:54:00:00:00:02,netdev=mynet2,mrg_rxbuf=off,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on \
>       -vnc :11 -daemonize
> 
> -3. On VM1, set virtio device IP and run arp protocol::
> +3. On VM1, set virtio device IP and run arp protocol::
> 
>      ifconfig ens3 1.1.1.2
>      arp -s 1.1.1.8 52:54:00:00:00:02
> 
> -4. On VM2, set virtio device IP and run arp protocol::
> +4. On VM2, set virtio device IP and run arp protocol::
> 
>      ifconfig ens3 1.1.1.8
>      arp -s 1.1.1.2 52:54:00:00:00:01
> @@ -95,41 +92,206 @@ Test Case 1: VM2VM vhost-user/virtio-net test with tcp traffic
>      Port 0 should have tx packets above 1522
>      Port 1 should have rx packets above 1522
> 
> -Test Case 2: VM2VM vhost-user/virtio-net zero-copy test with tcp traffic
> -========================================================================
> +Test Case 2: VM2VM split ring vhost-user/virtio-net test with udp traffic
> +=========================================================================
> 
>  1. Launch the Vhost sample by below commands::
> 
>      rm -rf vhost-net*
> -    testpmd>./testpmd -c 0xF0000000 -n 4 --socket-mem 2048,2048 --legacy-mem --no-pci --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,queues=1,dequeue-zero-copy=1' --vdev 'net_vhost1,iface=vhost-net1,queues=1,dequeue-zero-copy=1'  -- -i --nb-cores=1 --txd=1024 --rxd=1024 --txfreet=992
> +    testpmd>./testpmd -c 0xF0000000 -n 4 --socket-mem 2048,2048 --legacy-mem --no-pci --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,queues=1' --vdev 'net_vhost1,iface=vhost-net1,queues=1'  -- -i --nb-cores=1 --txd=1024 --rxd=1024
>      testpmd>start
> 
>  2. Launch VM1 and VM2::
> 
> -    taskset -c 32-33 \
>      qemu-system-x86_64 -name us-vhost-vm1 \
>       -cpu host -enable-kvm -m 2048 -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on -numa node,memdev=mem -mem-prealloc \
>       -smp cores=2,sockets=1 -drive file=/home/osimg/ubuntu16-1.img  \
>       -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,vlan=2,macaddr=00:00:00:08:e8:aa,addr=1f -net user,vlan=2,hostfwd=tcp:127.0.0.1:6004-:22 \
>       -chardev socket,id=char0,path=./vhost-net0 -netdev type=vhost-user,id=mynet1,chardev=char0,vhostforce \
> -     -device virtio-net-pci,mac=52:54:00:00:00:01,netdev=mynet1,mrg_rxbuf=off,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on \
> -     -vnc :10 -daemonize
> +     -device virtio-net-pci,mac=52:54:00:00:00:01,netdev=mynet1,mrg_rxbuf=off,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on -vnc :10 -daemonize
> 
> -    taskset -c 34-35 \
>      qemu-system-x86_64 -name us-vhost-vm2 \
>       -cpu host -enable-kvm -m 2048 -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on -numa node,memdev=mem -mem-prealloc \
>       -smp cores=2,sockets=1 -drive file=/home/osimg/ubuntu16-2.img \
>       -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,vlan=2,macaddr=00:00:00:08:e8:aa,addr=1f -net user,vlan=2,hostfwd=tcp:127.0.0.1:6005-:22 \
>       -chardev socket,id=char1,path=./vhost-net1 -netdev type=vhost-user,id=mynet2,chardev=char1,vhostforce \
> -     -device virtio-net-pci,mac=52:54:00:00:00:02,netdev=mynet2,mrg_rxbuf=off,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on \
> +     -device virtio-net-pci,mac=52:54:00:00:00:02,netdev=mynet2,mrg_rxbuf=off,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on -vnc :11 -daemonize
> +
> +3. On VM1, set virtio device IP and run arp protocol::
> +
> +    ifconfig ens3 1.1.1.2
> +    arp -s 1.1.1.8 52:54:00:00:00:02
> +
> +4. On VM2, set virtio device IP and run arp protocol::
> +
> +    ifconfig ens3 1.1.1.8
> +    arp -s 1.1.1.2 52:54:00:00:00:01
> +
> +5. Check the iperf performance between two VMs by below commands::
> +
> +    Under VM1, run: `iperf -s -u -i 1`
> +    Under VM2, run: `iperf -c 1.1.1.2 -i 1 -t 30 -P 4 -u -b 1G -l 9000`
> +
> +6. Check both 2VMs can receive and send big packets to each other::
> +
> +    testpmd>show port xstats all
> +    Port 0 should have tx packets above 1522
> +    Port 1 should have rx packets above 1522
> +
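Step 6 above is checked by eye; the same check can be scripted over a saved copy of the `show port xstats all` output. A sketch under the assumption that the per-port counter lines have the form `tx_good_packets: <count>` (the sample value below is hypothetical; adjust the pattern to the actual testpmd output):

```shell
# Sketch of the step-6 check against saved xstats text for port 0.
# The sample line is an assumed format, not captured output.
xstats_port0="tx_good_packets: 2048"
tx=$(printf '%s\n' "$xstats_port0" | awk -F': ' '/tx_good_packets/ {print $2}')
if [ "$tx" -gt 1522 ]; then
    echo "port 0 tx check passed"
fi
```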
> +Test Case 3: Check split ring virtio-net device capability
> +==========================================================
> +
> +1. Launch the Vhost sample by below commands::
> +
> +    rm -rf vhost-net*
> +    testpmd>./testpmd -c 0xF0000000 -n 4 --socket-mem 2048,2048 --legacy-mem --no-pci --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,queues=1' --vdev 'net_vhost1,iface=vhost-net1,queues=1'  -- -i --nb-cores=1 --txd=1024 --rxd=1024
> +    testpmd>start
> +
> +2. Launch VM1 and VM2, set TSO and UFO on in the qemu command::
> +
> +    qemu-system-x86_64 -name us-vhost-vm1 \
> +     -cpu host -enable-kvm -m 2048 -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on -numa node,memdev=mem -mem-prealloc \
> +     -smp cores=2,sockets=1 -drive file=/home/osimg/ubuntu16-1.img  \
> +     -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,vlan=2,macaddr=00:00:00:08:e8:aa,addr=1f -net user,vlan=2,hostfwd=tcp:127.0.0.1:6004-:22 \
> +     -chardev socket,id=char0,path=./vhost-net0 -netdev type=vhost-user,id=mynet1,chardev=char0,vhostforce \
> +     -device virtio-net-pci,mac=52:54:00:00:00:01,netdev=mynet1,mrg_rxbuf=off,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on -vnc :10 -daemonize
> +
> +    qemu-system-x86_64 -name us-vhost-vm2 \
> +     -cpu host -enable-kvm -m 2048 -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on -numa node,memdev=mem -mem-prealloc \
> +     -smp cores=2,sockets=1 -drive file=/home/osimg/ubuntu16-2.img \
> +     -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,vlan=2,macaddr=00:00:00:08:e8:aa,addr=1f -net user,vlan=2,hostfwd=tcp:127.0.0.1:6005-:22 \
> +     -chardev socket,id=char1,path=./vhost-net1 -netdev type=vhost-user,id=mynet2,chardev=char1,vhostforce \
> +     -device virtio-net-pci,mac=52:54:00:00:00:02,netdev=mynet2,mrg_rxbuf=off,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on -vnc :11 -daemonize
> +
> +3. Check UFO and TSO offload status on for the Virtio-net driver on VM1 and VM2::
> +
> +    Under VM1, run: `ethtool -k ens3`
> +    udp-fragmentation-offload: on
> +    tx-tcp-segmentation: on
> +    tx-tcp-ecn-segmentation: on
> +    tx-tcp6-segmentation: on
> +
> +    Under VM2, run: `ethtool -k ens3`
> +    udp-fragmentation-offload: on
> +    tx-tcp-segmentation: on
> +    tx-tcp-ecn-segmentation: on
> +    tx-tcp6-segmentation: on
> +
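The four offload lines listed in step 3 lend themselves to an automated pass/fail check. A sketch that parses a captured `ethtool -k ens3` report (inlined here with the expected values from the step above, so the sample input is grounded in the plan itself):

```shell
# Verify every required offload from step 3 reports "on" in the captured report.
report="udp-fragmentation-offload: on
tx-tcp-segmentation: on
tx-tcp-ecn-segmentation: on
tx-tcp6-segmentation: on"
ok=1
for f in udp-fragmentation-offload tx-tcp-segmentation tx-tcp-ecn-segmentation tx-tcp6-segmentation; do
    printf '%s\n' "$report" | grep -q "^$f: on" || ok=0
done
echo "offloads_ok=$ok"
```

In a real run, replace the inlined `report` with `report=$(ethtool -k ens3)` inside the guest.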
> +Test Case 4: VM2VM virtio-net split ring mergeable zero copy test with large packet payload valid check
> +=======================================================================================================
> +
> +1. Launch the Vhost sample by below commands::
> +
> +    rm -rf vhost-net*
> +    ./testpmd -c 0xF0000000 -n 4 --socket-mem 2048,2048 --legacy-mem --no-pci --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,queues=1,dequeue-zero-copy=1' --vdev 'net_vhost1,iface=vhost-net1,queues=1,dequeue-zero-copy=1'  -- -i --nb-cores=1 --txd=1024 --rxd=1024
> +    testpmd>start
> +
> +2. Launch VM1 and VM2::
> +
> +    qemu-system-x86_64 -name us-vhost-vm1 \
> +     -cpu host -enable-kvm -m 2048 -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on -numa node,memdev=mem -mem-prealloc \
> +     -smp cores=2,sockets=1 -drive file=/home/osimg/ubuntu16-1.img  \
> +     -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,vlan=2,macaddr=00:00:00:08:e8:aa,addr=1f -net user,vlan=2,hostfwd=tcp:127.0.0.1:6004-:22 \
> +     -chardev socket,id=char0,path=./vhost-net0 -netdev type=vhost-user,id=mynet1,chardev=char0,vhostforce \
> +     -device virtio-net-pci,mac=52:54:00:00:00:01,netdev=mynet1,mrg_rxbuf=on \
> +     -vnc :12 -daemonize
> +
> +    qemu-system-x86_64 -name us-vhost-vm2 \
> +     -cpu host -enable-kvm -m 2048 -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on -numa node,memdev=mem -mem-prealloc \
> +     -smp cores=2,sockets=1 -drive file=/home/osimg/ubuntu16-2.img \
> +     -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,vlan=2,macaddr=00:00:00:08:e8:aa,addr=1f -net user,vlan=2,hostfwd=tcp:127.0.0.1:6005-:22 \
> +     -chardev socket,id=char1,path=./vhost-net1 -netdev type=vhost-user,id=mynet2,chardev=char1,vhostforce \
> +     -device virtio-net-pci,mac=52:54:00:00:00:02,netdev=mynet2,mrg_rxbuf=on  \
>       -vnc :11 -daemonize
> 
> -3. On VM1, set virtio device IP and run arp protocol::
> +3. On VM1, set virtio device IP and run arp protocol::
> 
>      ifconfig ens3 1.1.1.2
>      arp -s 1.1.1.8 52:54:00:00:00:02
> 
> -4. On VM2, set virtio device IP and run arp protocol::
> +4. On VM2, set virtio device IP and run arp protocol::
> +
> +    ifconfig ens3 1.1.1.8
> +    arp -s 1.1.1.2 52:54:00:00:00:01
> +
> +5. Scp 64KB file from VM1 to VM2::
> +
> +    Under VM1, run: `scp [xxx] root@1.1.1.8:/`   [xxx] is the file name
> +
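The "payload valid" part of this case is easiest to judge with a checksum, since a zero-copy bug typically corrupts the file body rather than breaking the transfer. A sketch for preparing the 64KB file on VM1 (`/tmp/payload.bin` is a hypothetical name standing in for `[xxx]`):

```shell
# Generate the 64KB test file and record its checksum before the scp, so the
# copy received on VM2 can be compared to confirm the payload arrived intact.
dd if=/dev/urandom of=/tmp/payload.bin bs=1024 count=64 2>/dev/null
md5sum /tmp/payload.bin | awk '{print $1}' > /tmp/payload.md5
wc -c < /tmp/payload.bin
# After `scp /tmp/payload.bin root@1.1.1.8:/tmp/`, run md5sum on VM2 and
# compare the result against /tmp/payload.md5.
```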
> +Test Case 5: VM2VM virtio-net split ring non-mergeable zero copy test with large packet payload valid check
> +===========================================================================================================
> +
> +1. Launch the Vhost sample by below commands::
> +
> +    rm -rf vhost-net*
> +    ./testpmd -c 0xF0000000 -n 4 --socket-mem 2048,2048 --legacy-mem --no-pci --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,queues=1,dequeue-zero-copy=1' --vdev 'net_vhost1,iface=vhost-net1,queues=1,dequeue-zero-copy=1'  -- -i --nb-cores=1 --txd=1024 --rxd=1024
> +    testpmd>start
> +
> +2. Launch VM1 and VM2::
> +
> +    qemu-system-x86_64 -name us-vhost-vm1 \
> +     -cpu host -enable-kvm -m 2048 -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on -numa node,memdev=mem -mem-prealloc \
> +     -smp cores=2,sockets=1 -drive file=/home/osimg/ubuntu16-1.img  \
> +     -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,vlan=2,macaddr=00:00:00:08:e8:aa,addr=1f -net user,vlan=2,hostfwd=tcp:127.0.0.1:6004-:22 \
> +     -chardev socket,id=char0,path=./vhost-net0 -netdev type=vhost-user,id=mynet1,chardev=char0,vhostforce \
> +     -device virtio-net-pci,mac=52:54:00:00:00:01,netdev=mynet1,mrg_rxbuf=off \
> +     -vnc :12 -daemonize
> +
> +    qemu-system-x86_64 -name us-vhost-vm2 \
> +     -cpu host -enable-kvm -m 2048 -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on -numa node,memdev=mem -mem-prealloc \
> +     -smp cores=2,sockets=1 -drive file=/home/osimg/ubuntu16-2.img \
> +     -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,vlan=2,macaddr=00:00:00:08:e8:aa,addr=1f -net user,vlan=2,hostfwd=tcp:127.0.0.1:6005-:22 \
> +     -chardev socket,id=char1,path=./vhost-net1 -netdev type=vhost-user,id=mynet2,chardev=char1,vhostforce \
> +     -device virtio-net-pci,mac=52:54:00:00:00:02,netdev=mynet2,mrg_rxbuf=off  \
> +     -vnc :11 -daemonize
> +
> +3. On VM1, set virtio device IP and run arp protocol::
> +
> +    ifconfig ens3 1.1.1.2
> +    arp -s 1.1.1.8 52:54:00:00:00:02
> +
> +4. On VM2, set virtio device IP and run arp protocol::
> +
> +    ifconfig ens3 1.1.1.8
> +    arp -s 1.1.1.2 52:54:00:00:00:01
> +
> +5. Scp 64KB file from VM1 to VM2::
> +
> +    Under VM1, run: `scp [xxx] root@1.1.1.8:/`   [xxx] is the file name
> +
> +Test Case 6: VM2VM packed ring vhost-user/virtio-net test with tcp traffic
> +==========================================================================
> +
> +1. Launch the Vhost sample by below commands::
> +
> +    rm -rf vhost-net*
> +    ./testpmd -c 0xF0000000 -n 4 --socket-mem 2048,2048 --legacy-mem --no-pci --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,queues=1' --vdev 'net_vhost1,iface=vhost-net1,queues=1'  -- -i --nb-cores=1 --txd=1024 --rxd=1024
> +    testpmd>start
> +
> +2. Launch VM1 and VM2::
> +
> +    qemu-system-x86_64 -name us-vhost-vm1 \
> +     -cpu host -enable-kvm -m 2048 -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on -numa node,memdev=mem -mem-prealloc \
> +     -smp cores=2,sockets=1 -drive file=/home/osimg/ubuntu16-1.img  \
> +     -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,vlan=2,macaddr=00:00:00:08:e8:aa,addr=1f -net user,vlan=2,hostfwd=tcp:127.0.0.1:6004-:22 \
> +     -chardev socket,id=char0,path=./vhost-net0 -netdev type=vhost-user,id=mynet1,chardev=char0,vhostforce \
> +     -device virtio-net-pci,mac=52:54:00:00:00:01,netdev=mynet1,mrg_rxbuf=off,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,packed=on \
> +     -vnc :12 -daemonize
> +
> +    qemu-system-x86_64 -name us-vhost-vm2 \
> +     -cpu host -enable-kvm -m 2048 -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on -numa node,memdev=mem -mem-prealloc \
> +     -smp cores=2,sockets=1 -drive file=/home/osimg/ubuntu16-2.img \
> +     -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,vlan=2,macaddr=00:00:00:08:e8:aa,addr=1f -net user,vlan=2,hostfwd=tcp:127.0.0.1:6005-:22 \
> +     -chardev socket,id=char1,path=./vhost-net1 -netdev type=vhost-user,id=mynet2,chardev=char1,vhostforce \
> +     -device virtio-net-pci,mac=52:54:00:00:00:02,netdev=mynet2,mrg_rxbuf=off,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,packed=on \
> +     -vnc :11 -daemonize
> +
> +3. On VM1, set virtio device IP and run arp protocol::
> +
> +    ifconfig ens3 1.1.1.2
> +    arp -s 1.1.1.8 52:54:00:00:00:02
> +
> +4. On VM2, set virtio device IP and run arp protocol::
> 
>      ifconfig ens3 1.1.1.8
>      arp -s 1.1.1.2 52:54:00:00:00:01
> @@ -139,14 +301,15 @@ Test Case 2: VM2VM vhost-user/virtio-net zero-copy test with tcp traffic
>      Under VM1, run: `iperf -s -i 1`
>      Under VM2, run: `iperf -c 1.1.1.2 -i 1 -t 30`
> 
> -6. Check both 2VM can receive and send big packets to each other::
> +6. Check that both VMs can receive and send big packets to each other::
> 
>      testpmd>show port xstats all
>      Port 0 should have tx packets above 1522
>      Port 1 should have rx packets above 1522
> 
> -Test Case 3: VM2VM vhost-user/virtio-net test with udp traffic
> -==============================================================
> +Test Case 7: VM2VM packed ring vhost-user/virtio-net test with udp traffic
> +==========================================================================
> +
>  1. Launch the Vhost sample by below commands::
> 
>      rm -rf vhost-net*
> @@ -155,28 +318,26 @@ Test Case 3: VM2VM vhost-user/virtio-net test with udp traffic
> 
>  2. Launch VM1 and VM2::
> 
> -    taskset -c 32-33 \
>      qemu-system-x86_64 -name us-vhost-vm1 \
>       -cpu host -enable-kvm -m 2048 -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on -numa node,memdev=mem -mem-prealloc \
>       -smp cores=2,sockets=1 -drive file=/home/osimg/ubuntu16-1.img  \
>       -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,vlan=2,macaddr=00:00:00:08:e8:aa,addr=1f -net user,vlan=2,hostfwd=tcp:127.0.0.1:6004-:22 \
>       -chardev socket,id=char0,path=./vhost-net0 -netdev type=vhost-user,id=mynet1,chardev=char0,vhostforce \
> -     -device virtio-net-pci,mac=52:54:00:00:00:01,netdev=mynet1,mrg_rxbuf=off,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on -vnc :10 -daemonize
> +     -device virtio-net-pci,mac=52:54:00:00:00:01,netdev=mynet1,mrg_rxbuf=off,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on,packed=on -vnc :10 -daemonize
> 
> -    taskset -c 34-35 \
>      qemu-system-x86_64 -name us-vhost-vm2 \
>       -cpu host -enable-kvm -m 2048 -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on -numa node,memdev=mem -mem-prealloc \
>       -smp cores=2,sockets=1 -drive file=/home/osimg/ubuntu16-2.img \
>       -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,vlan=2,macaddr=00:00:00:08:e8:aa,addr=1f -net user,vlan=2,hostfwd=tcp:127.0.0.1:6005-:22 \
>       -chardev socket,id=char1,path=./vhost-net1 -netdev type=vhost-user,id=mynet2,chardev=char1,vhostforce \
> -     -device virtio-net-pci,mac=52:54:00:00:00:02,netdev=mynet2,mrg_rxbuf=off,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on -vnc :11 -daemonize
> +     -device virtio-net-pci,mac=52:54:00:00:00:02,netdev=mynet2,mrg_rxbuf=off,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on,packed=on -vnc :11 -daemonize
> 
> -3. On VM1, set virtio device IP and run arp protocol::
> +3. On VM1, set virtio device IP and run arp protocol::
> 
>      ifconfig ens3 1.1.1.2
>      arp -s 1.1.1.8 52:54:00:00:00:02
> 
> -4. On VM2, set virtio device IP and run arp protocol::
> +4. On VM2, set virtio device IP and run arp protocol::
> 
>      ifconfig ens3 1.1.1.8
>      arp -s 1.1.1.2 52:54:00:00:00:01
> @@ -192,9 +353,8 @@ Test Case 3: VM2VM vhost-user/virtio-net test with udp traffic
>      Port 0 should have tx packets above 1522
>      Port 1 should have rx packets above 1522
> 
> -
> -Test Case 4: Check virtio-net device capability
> -===============================================
> +Test Case 8: Check packed ring virtio-net device capability
> +===========================================================
> 
>  1. Launch the Vhost sample by below commands::
> 
> @@ -204,21 +364,19 @@ Test Case 4: Check virtio-net device capability
> 
>  2. Launch VM1 and VM2,set TSO and UFO on in qemu command::
> 
> -    taskset -c 32-33 \
>      qemu-system-x86_64 -name us-vhost-vm1 \
>       -cpu host -enable-kvm -m 2048 -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on -numa node,memdev=mem -mem-prealloc \
>       -smp cores=2,sockets=1 -drive file=/home/osimg/ubuntu16-1.img  \
>       -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,vlan=2,macaddr=00:00:00:08:e8:aa,addr=1f -net user,vlan=2,hostfwd=tcp:127.0.0.1:6004-:22 \
>       -chardev socket,id=char0,path=./vhost-net0 -netdev type=vhost-user,id=mynet1,chardev=char0,vhostforce \
> -     -device virtio-net-pci,mac=52:54:00:00:00:01,netdev=mynet1,mrg_rxbuf=off,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on -vnc :10 -daemonize
> +     -device virtio-net-pci,mac=52:54:00:00:00:01,netdev=mynet1,mrg_rxbuf=off,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on,packed=on -vnc :10 -daemonize
> 
> -    taskset -c 34-35 \
>      qemu-system-x86_64 -name us-vhost-vm2 \
>       -cpu host -enable-kvm -m 2048 -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on -numa node,memdev=mem -mem-prealloc \
>       -smp cores=2,sockets=1 -drive file=/home/osimg/ubuntu16-2.img \
>       -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,vlan=2,macaddr=00:00:00:08:e8:aa,addr=1f -net user,vlan=2,hostfwd=tcp:127.0.0.1:6005-:22 \
>       -chardev socket,id=char1,path=./vhost-net1 -netdev type=vhost-user,id=mynet2,chardev=char1,vhostforce \
> -     -device virtio-net-pci,mac=52:54:00:00:00:02,netdev=mynet2,mrg_rxbuf=off,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on -vnc :11 -daemonize
> +     -device virtio-net-pci,mac=52:54:00:00:00:02,netdev=mynet2,mrg_rxbuf=off,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on,packed=on -vnc :11 -daemonize
> 
>  3. Check UFO and TSO offload status on for the Virtio-net driver on VM1 and VM2::
> 
> @@ -234,13 +392,13 @@ Test Case 4: Check virtio-net device capability
>      tx-tcp-ecn-segmentation: on
>      tx-tcp6-segmentation: on
> 
> -Test Case 5: VM2VM vhost-user/virtio-net test with large packet payload valid check
> -===================================================================================
> +Test Case 9: VM2VM packed ring virtio-net mergeable dequeue zero copy test with large packet payload valid check
> +================================================================================================================
> 
>  1. Launch the Vhost sample by below commands::
> 
>      rm -rf vhost-net*
> -    ./testpmd -c 0xF0000000 -n 4 --socket-mem 2048,2048 --legacy-mem --no-pci --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,queues=1' --vdev 'net_vhost1,iface=vhost-net1,queues=1'  -- -i --nb-cores=1 --txd=1024 --rxd=1024
> +    ./testpmd -c 0xF0000000 -n 4 --socket-mem 2048,2048 --legacy-mem --no-pci --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,queues=1,dequeue-zero-copy=1' --vdev 'net_vhost1,iface=vhost-net1,queues=1,dequeue-zero-copy=1'  -- -i --nb-cores=1 --txd=1024 --rxd=1024
>      testpmd>start
> 
>  2. Launch VM1 and VM2::
> @@ -250,7 +408,7 @@ Test Case 5: VM2VM vhost-user/virtio-net test with large packet payload valid ch
>       -smp cores=2,sockets=1 -drive file=/home/osimg/ubuntu16-1.img  \
>       -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net
> nic,vlan=2,macaddr=00:00:00:08:e8:aa,addr=1f -net
> user,vlan=2,hostfwd=tcp:127.0.0.1:6004-:22 \
>       -chardev socket,id=char0,path=./vhost-net0 -netdev type=vhost-
> user,id=mynet1,chardev=char0,vhostforce \
> -     -device virtio-net-pci,mac=52:54:00:00:00:01,netdev=mynet1,mrg_rxbuf=on \
> +     -device virtio-net-pci,mac=52:54:00:00:00:01,netdev=mynet1,mrg_rxbuf=on,packed=on \
>       -vnc :12 -daemonize
> 
>      qemu-system-x86_64 -name us-vhost-vm2 \
> @@ -258,19 +416,60 @@ Test Case 5: VM2VM vhost-user/virtio-net test with large packet payload valid ch
>       -smp cores=2,sockets=1 -drive file=/home/osimg/ubuntu16-2.img \
>       -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,vlan=2,macaddr=00:00:00:08:e8:aa,addr=1f -net user,vlan=2,hostfwd=tcp:127.0.0.1:6005-:22 \
>       -chardev socket,id=char1,path=./vhost-net1 -netdev type=vhost-user,id=mynet2,chardev=char1,vhostforce \
> -     -device virtio-net-pci,mac=52:54:00:00:00:02,netdev=mynet2,mrg_rxbuf=on  \
> +     -device virtio-net-pci,mac=52:54:00:00:00:02,netdev=mynet2,mrg_rxbuf=on,packed=on \
> +     -vnc :11 -daemonize
> +
> +3. On VM1, set virtio device IP and run arp protocol::
> +
> +    ifconfig ens3 1.1.1.2
> +    arp -s 1.1.1.8 52:54:00:00:00:02
> +
> +4. On VM2, set virtio device IP and run arp protocol::
> +
> +    ifconfig ens3 1.1.1.8
> +    arp -s 1.1.1.2 52:54:00:00:00:01
> +
> +5. Scp 64KB file from VM1 to VM2::
> +
> +    Under VM1, run: `scp [xxx] root@1.1.1.8:/`   [xxx] is the file name
> +
> +Test Case 10: VM2VM packed ring virtio-net non-mergeable dequeue zero copy test with large packet payload valid check
> +=====================================================================================================================
> +
> +1. Launch the Vhost sample by below commands::
> +
> +    rm -rf vhost-net*
> +    ./testpmd -c 0xF0000000 -n 4 --socket-mem 2048,2048 --legacy-mem --no-pci --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,queues=1,dequeue-zero-copy=1' --vdev 'net_vhost1,iface=vhost-net1,queues=1,dequeue-zero-copy=1'  -- -i --nb-cores=1 --txd=1024 --rxd=1024
> +    testpmd>start
> +
> +2. Launch VM1 and VM2::
> +
> +    qemu-system-x86_64 -name us-vhost-vm1 \
> +     -cpu host -enable-kvm -m 2048 -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on -numa node,memdev=mem -mem-prealloc \
> +     -smp cores=2,sockets=1 -drive file=/home/osimg/ubuntu16-1.img  \
> +     -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,vlan=2,macaddr=00:00:00:08:e8:aa,addr=1f -net user,vlan=2,hostfwd=tcp:127.0.0.1:6004-:22 \
> +     -chardev socket,id=char0,path=./vhost-net0 -netdev type=vhost-user,id=mynet1,chardev=char0,vhostforce \
> +     -device virtio-net-pci,mac=52:54:00:00:00:01,netdev=mynet1,mrg_rxbuf=off,packed=on \
> +     -vnc :12 -daemonize
> +
> +    qemu-system-x86_64 -name us-vhost-vm2 \
> +     -cpu host -enable-kvm -m 2048 -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on -numa node,memdev=mem -mem-prealloc \
> +     -smp cores=2,sockets=1 -drive file=/home/osimg/ubuntu16-2.img \
> +     -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,vlan=2,macaddr=00:00:00:08:e8:aa,addr=1f -net user,vlan=2,hostfwd=tcp:127.0.0.1:6005-:22 \
> +     -chardev socket,id=char1,path=./vhost-net1 -netdev type=vhost-user,id=mynet2,chardev=char1,vhostforce \
> +     -device virtio-net-pci,mac=52:54:00:00:00:02,netdev=mynet2,mrg_rxbuf=off,packed=on  \
>       -vnc :11 -daemonize
> 
> -3. On VM1, set virtio device IP and run arp protocol::
> +3. On VM1, set virtio device IP and run arp protocol::
> 
>      ifconfig ens3 1.1.1.2
>      arp -s 1.1.1.8 52:54:00:00:00:02
> 
> -4. On VM2, set virtio device IP and run arp protocol::
> +4. On VM2, set virtio device IP and run arp protocol::
> 
>      ifconfig ens3 1.1.1.8
>      arp -s 1.1.1.2 52:54:00:00:00:01
> 
>  5. Scp 64KB file from VM1 to VM2::
> 
> -    Under VM1, run: `scp [xxx] root@1.1.1.8:/`   [xxx] is the file name
> \ No newline at end of file
> +    Under VM1, run: `scp [xxx] root@1.1.1.8:/`   [xxx] is the file name
> --
> 2.17.1


^ permalink raw reply	[flat|nested] 4+ messages in thread

* [dts] [PATCH v1] test_plans: add packed virtqueue test in vm2vm topology
@ 2019-12-19 19:41 Yinan
  2019-12-20  8:13 ` Tu, Lijuan
  0 siblings, 1 reply; 4+ messages in thread
From: Yinan @ 2019-12-19 19:41 UTC (permalink / raw)
  To: dts; +Cc: Wang Yinan

From: Wang Yinan <yinan.wang@intel.com>

Add packed virtqueue test in vm2vm_virtio_net_perf_test_plan.rst

Signed-off-by: Wang Yinan <yinan.wang@intel.com>
---
 .../vm2vm_virtio_net_perf_test_plan.rst       | 285 +++++++++++++++---
 1 file changed, 242 insertions(+), 43 deletions(-)

diff --git a/test_plans/vm2vm_virtio_net_perf_test_plan.rst b/test_plans/vm2vm_virtio_net_perf_test_plan.rst
index 96f0cc4..0fe8400 100644
--- a/test_plans/vm2vm_virtio_net_perf_test_plan.rst
+++ b/test_plans/vm2vm_virtio_net_perf_test_plan.rst
@@ -8,7 +8,7 @@
    - Redistributions of source code must retain the above copyright
      notice, this list of conditions and the following disclaimer.
 
-   - Redistributions in binary form must reproduce the above copyright
+   - Redistributions in binary form must reproduce the above copyright
      notice, this list of conditions and the following disclaimer in
      the documentation and/or other materials provided with the
      distribution.
@@ -30,7 +30,6 @@
    ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED
    OF THE POSSIBILITY OF SUCH DAMAGE.
 
-
 =====================================
 vm2vm vhost-user/virtio-net test plan
 =====================================
@@ -38,15 +37,15 @@ vm2vm vhost-user/virtio-net test plan
 Description
 ===========
 
-The feature enabled the DPDK Vhost TX offload (TSO and UFO). The feature added the negotiation between DPDK user space vhost and virtio-net, so we will verify the TSO/cksum in the TCP/IP stack enabled environment and UFO/cksum in the UDP/IP stack enabled environment with vm2vm vhost-user/virtio-net normal path. Also add case to check the payload of large packet is valid by scp with vm2vm vhost-user/virtio-net mergeable path test.
+This test plan tests the vhost TX offload (TSO and UFO) function by verifying TSO/cksum in the TCP/IP stack enabled environment and UFO/cksum in the UDP/IP stack enabled environment with the vm2vm split ring and packed ring vhost-user/virtio-net non-mergeable paths. It also adds cases to check via scp that the payload of large packets is valid with the vm2vm split ring and packed ring vhost-user/virtio-net mergeable and non-mergeable dequeue zero copy tests. The packed virtqueue cases require QEMU version > 4.2.0.
 
 Test flow
 =========
 
 Virtio-net <-> Vhost <-> Testpmd <-> Vhost <-> Virtio-net
 
-Test Case 1: VM2VM vhost-user/virtio-net test with tcp traffic
-==============================================================
+Test Case 1: VM2VM split ring vhost-user/virtio-net test with tcp traffic
+=========================================================================
 
 1. Launch the Vhost sample by below commands::
 
@@ -56,7 +55,6 @@ Test Case 1: VM2VM vhost-user/virtio-net test with tcp traffic
 
 2. Launch VM1 and VM2::
 
-    taskset -c 32-33 \
     qemu-system-x86_64 -name us-vhost-vm1 \
      -cpu host -enable-kvm -m 2048 -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on -numa node,memdev=mem -mem-prealloc \
      -smp cores=2,sockets=1 -drive file=/home/osimg/ubuntu16-1.img  \
@@ -65,7 +63,6 @@ Test Case 1: VM2VM vhost-user/virtio-net test with tcp traffic
      -device virtio-net-pci,mac=52:54:00:00:00:01,netdev=mynet1,mrg_rxbuf=off,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on \
      -vnc :12 -daemonize
 
-    taskset -c 34-35 \
     qemu-system-x86_64 -name us-vhost-vm2 \
      -cpu host -enable-kvm -m 2048 -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on -numa node,memdev=mem -mem-prealloc \
      -smp cores=2,sockets=1 -drive file=/home/osimg/ubuntu16-2.img \
@@ -74,12 +71,12 @@ Test Case 1: VM2VM vhost-user/virtio-net test with tcp traffic
      -device virtio-net-pci,mac=52:54:00:00:00:02,netdev=mynet2,mrg_rxbuf=off,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on \
      -vnc :11 -daemonize
 
-3. On VM1, set virtio device IP and run arp protocol::
+3. On VM1, set virtio device IP and run arp protocol::
 
     ifconfig ens3 1.1.1.2
     arp -s 1.1.1.8 52:54:00:00:00:02
 
-4. On VM2, set virtio device IP and run arp protocol::
+4. On VM2, set virtio device IP and run arp protocol::
 
     ifconfig ens3 1.1.1.8
     arp -s 1.1.1.2 52:54:00:00:00:01
@@ -95,41 +92,206 @@ Test Case 1: VM2VM vhost-user/virtio-net test with tcp traffic
     Port 0 should have tx packets above 1522
     Port 1 should have rx packets above 1522
 
-Test Case 2: VM2VM vhost-user/virtio-net zero-copy test with tcp traffic
-========================================================================
+Test Case 2: VM2VM split ring vhost-user/virtio-net test with udp traffic
+=========================================================================
 
 1. Launch the Vhost sample by below commands::
 
     rm -rf vhost-net*
-    testpmd>./testpmd -c 0xF0000000 -n 4 --socket-mem 2048,2048 --legacy-mem --no-pci --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,queues=1,dequeue-zero-copy=1' --vdev 'net_vhost1,iface=vhost-net1,queues=1,dequeue-zero-copy=1'  -- -i --nb-cores=1 --txd=1024 --rxd=1024 --txfreet=992
+    testpmd>./testpmd -c 0xF0000000 -n 4 --socket-mem 2048,2048 --legacy-mem --no-pci --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,queues=1' --vdev 'net_vhost1,iface=vhost-net1,queues=1'  -- -i --nb-cores=1 --txd=1024 --rxd=1024
     testpmd>start
 
 2. Launch VM1 and VM2::
 
-    taskset -c 32-33 \
     qemu-system-x86_64 -name us-vhost-vm1 \
      -cpu host -enable-kvm -m 2048 -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on -numa node,memdev=mem -mem-prealloc \
      -smp cores=2,sockets=1 -drive file=/home/osimg/ubuntu16-1.img  \
      -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,vlan=2,macaddr=00:00:00:08:e8:aa,addr=1f -net user,vlan=2,hostfwd=tcp:127.0.0.1:6004-:22 \
      -chardev socket,id=char0,path=./vhost-net0 -netdev type=vhost-user,id=mynet1,chardev=char0,vhostforce \
-     -device virtio-net-pci,mac=52:54:00:00:00:01,netdev=mynet1,mrg_rxbuf=off,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on \
-     -vnc :10 -daemonize
+     -device virtio-net-pci,mac=52:54:00:00:00:01,netdev=mynet1,mrg_rxbuf=off,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on -vnc :10 -daemonize
 
-    taskset -c 34-35 \
     qemu-system-x86_64 -name us-vhost-vm2 \
      -cpu host -enable-kvm -m 2048 -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on -numa node,memdev=mem -mem-prealloc \
      -smp cores=2,sockets=1 -drive file=/home/osimg/ubuntu16-2.img \
      -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,vlan=2,macaddr=00:00:00:08:e8:aa,addr=1f -net user,vlan=2,hostfwd=tcp:127.0.0.1:6005-:22 \
      -chardev socket,id=char1,path=./vhost-net1 -netdev type=vhost-user,id=mynet2,chardev=char1,vhostforce \
-     -device virtio-net-pci,mac=52:54:00:00:00:02,netdev=mynet2,mrg_rxbuf=off,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on \
+     -device virtio-net-pci,mac=52:54:00:00:00:02,netdev=mynet2,mrg_rxbuf=off,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on -vnc :11 -daemonize
+
+3. On VM1, set virtio device IP and run arp protocol::
+
+    ifconfig ens3 1.1.1.2
+    arp -s 1.1.1.8 52:54:00:00:00:02
+
+4. On VM2, set virtio device IP and run arp protocol::
+
+    ifconfig ens3 1.1.1.8
+    arp -s 1.1.1.2 52:54:00:00:00:01
+
+5. Check the iperf performance between two VMs by below commands::
+
+    Under VM1, run: `iperf -s -u -i 1`
+    Under VM2, run: `iperf -c 1.1.1.2 -i 1 -t 30 -P 4 -u -b 1G -l 9000`
+
+6. Check that both VMs can receive and send big packets to each other::
+
+    testpmd>show port xstats all
+    Port 0 should have tx packets above 1522
+    Port 1 should have rx packets above 1522
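The xstats check in step 6 can be scripted rather than eyeballed. Below is a minimal sketch that greps a saved dump of `show port xstats all`; the counter names (`tx_size_1523_to_max_packets` and friends) are assumptions based on testpmd's usual extended-stats naming and should be confirmed against the actual output, which here is replaced by a canned sample:

```shell
# Verify that packets larger than 1522 bytes were exchanged, using a
# saved dump of `show port xstats all`. The counter names below are
# assumed from testpmd's typical extended-stats naming.
cat > xstats.txt <<'EOF'
tx_size_1523_to_max_packets: 2048
rx_size_1523_to_max_packets: 2048
EOF

check_counter() {
    # $1 = counter name; succeeds only if the counter exists and is non-zero
    local val
    val=$(awk -F': ' -v k="$1" '$1 == k {print $2}' xstats.txt)
    [ -n "$val" ] && [ "$val" -gt 0 ]
}

check_counter tx_size_1523_to_max_packets && echo "port 0 tx OK"
check_counter rx_size_1523_to_max_packets && echo "port 1 rx OK"
```

On a live run, the dump would be captured from the testpmd prompt instead of the heredoc.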
+
+Test Case 3: Check split ring virtio-net device capability
+==========================================================
+
+1. Launch the Vhost sample by below commands::
+
+    rm -rf vhost-net*
+    testpmd>./testpmd -c 0xF0000000 -n 4 --socket-mem 2048,2048 --legacy-mem --no-pci --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,queues=1' --vdev 'net_vhost1,iface=vhost-net1,queues=1'  -- -i --nb-cores=1 --txd=1024 --rxd=1024
+    testpmd>start
+
+2. Launch VM1 and VM2, setting TSO and UFO on in the qemu command::
+
+    qemu-system-x86_64 -name us-vhost-vm1 \
+     -cpu host -enable-kvm -m 2048 -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on -numa node,memdev=mem -mem-prealloc \
+     -smp cores=2,sockets=1 -drive file=/home/osimg/ubuntu16-1.img  \
+     -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,vlan=2,macaddr=00:00:00:08:e8:aa,addr=1f -net user,vlan=2,hostfwd=tcp:127.0.0.1:6004-:22 \
+     -chardev socket,id=char0,path=./vhost-net0 -netdev type=vhost-user,id=mynet1,chardev=char0,vhostforce \
+     -device virtio-net-pci,mac=52:54:00:00:00:01,netdev=mynet1,mrg_rxbuf=off,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on -vnc :10 -daemonize
+
+    qemu-system-x86_64 -name us-vhost-vm2 \
+     -cpu host -enable-kvm -m 2048 -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on -numa node,memdev=mem -mem-prealloc \
+     -smp cores=2,sockets=1 -drive file=/home/osimg/ubuntu16-2.img \
+     -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,vlan=2,macaddr=00:00:00:08:e8:aa,addr=1f -net user,vlan=2,hostfwd=tcp:127.0.0.1:6005-:22 \
+     -chardev socket,id=char1,path=./vhost-net1 -netdev type=vhost-user,id=mynet2,chardev=char1,vhostforce \
+     -device virtio-net-pci,mac=52:54:00:00:00:02,netdev=mynet2,mrg_rxbuf=off,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on -vnc :11 -daemonize
+
+3. Check that UFO and TSO offloads are on for the virtio-net driver on VM1 and VM2::
+
+    Under VM1, run: `ethtool -k ens3`
+    udp-fragmentation-offload: on
+    tx-tcp-segmentation: on
+    tx-tcp-ecn-segmentation: on
+    tx-tcp6-segmentation: on
+
+    Under VM2, run: `ethtool -k ens3`
+    udp-fragmentation-offload: on
+    tx-tcp-segmentation: on
+    tx-tcp-ecn-segmentation: on
+    tx-tcp6-segmentation: on
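The per-flag inspection above lends itself to a small loop. A sketch, assuming `ethtool -k ens3` has been redirected to a file on the VM first (a canned sample stands in for it here):

```shell
# Assert that each expected offload reports "on" in `ethtool -k` output.
# On the VM, replace the heredoc with:  ethtool -k ens3 > offloads.txt
cat > offloads.txt <<'EOF'
udp-fragmentation-offload: on
tx-tcp-segmentation: on
tx-tcp-ecn-segmentation: on
tx-tcp6-segmentation: on
EOF

status=0
for f in udp-fragmentation-offload tx-tcp-segmentation \
         tx-tcp-ecn-segmentation tx-tcp6-segmentation; do
    if grep -q "^$f: on" offloads.txt; then
        echo "$f: OK"
    else
        echo "$f: MISSING" >&2
        status=1
    fi
done
```

A non-zero `status` at the end marks the capability check as failed.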
+
+Test Case 4: VM2VM virtio-net split ring mergeable zero copy test with large packet payload valid check
+=======================================================================================================
+
+1. Launch the Vhost sample by below commands::
+
+    rm -rf vhost-net*
+    ./testpmd -c 0xF0000000 -n 4 --socket-mem 2048,2048 --legacy-mem --no-pci --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,queues=1,dequeue-zero-copy=1' --vdev 'net_vhost1,iface=vhost-net1,queues=1,dequeue-zero-copy=1'  -- -i --nb-cores=1 --txd=1024 --rxd=1024
+    testpmd>start
+
+2. Launch VM1 and VM2::
+
+    qemu-system-x86_64 -name us-vhost-vm1 \
+     -cpu host -enable-kvm -m 2048 -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on -numa node,memdev=mem -mem-prealloc \
+     -smp cores=2,sockets=1 -drive file=/home/osimg/ubuntu16-1.img  \
+     -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,vlan=2,macaddr=00:00:00:08:e8:aa,addr=1f -net user,vlan=2,hostfwd=tcp:127.0.0.1:6004-:22 \
+     -chardev socket,id=char0,path=./vhost-net0 -netdev type=vhost-user,id=mynet1,chardev=char0,vhostforce \
+     -device virtio-net-pci,mac=52:54:00:00:00:01,netdev=mynet1,mrg_rxbuf=on \
+     -vnc :12 -daemonize
+
+    qemu-system-x86_64 -name us-vhost-vm2 \
+     -cpu host -enable-kvm -m 2048 -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on -numa node,memdev=mem -mem-prealloc \
+     -smp cores=2,sockets=1 -drive file=/home/osimg/ubuntu16-2.img \
+     -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,vlan=2,macaddr=00:00:00:08:e8:aa,addr=1f -net user,vlan=2,hostfwd=tcp:127.0.0.1:6005-:22 \
+     -chardev socket,id=char1,path=./vhost-net1 -netdev type=vhost-user,id=mynet2,chardev=char1,vhostforce \
+     -device virtio-net-pci,mac=52:54:00:00:00:02,netdev=mynet2,mrg_rxbuf=on  \
      -vnc :11 -daemonize
 
 3. On VM1, set virtio device IP and run arp protocol::
 
     ifconfig ens3 1.1.1.2
     arp -s 1.1.1.8 52:54:00:00:00:02
 
 4. On VM2, set virtio device IP and run arp protocol::
+
+    ifconfig ens3 1.1.1.8
+    arp -s 1.1.1.2 52:54:00:00:00:01
+
+5. Scp a 64KB file from VM1 to VM2::
+
+    Under VM1, run: `scp [xxx] root@1.1.1.8:/`   [xxx] is the file name
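Step 5 only proves the transfer completes; to make the "payload valid" part of this test case explicit, a checksum comparison can be added. A sketch, in which a local `cp` stands in for the scp hop and the file names are illustrative:

```shell
# Create a 64KB random payload, "transfer" it, and verify integrity.
# On a real run the copy would be:  scp payload.bin root@1.1.1.8:/root/
dd if=/dev/urandom of=payload.bin bs=1024 count=64 2>/dev/null
cp payload.bin payload.received.bin   # stand-in for the scp hop

src=$(md5sum payload.bin | awk '{print $1}')
dst=$(md5sum payload.received.bin | awk '{print $1}')
if [ "$src" = "$dst" ]; then
    echo "payload valid"
else
    echo "payload corrupted" >&2
fi
```

Running `md5sum` on the received copy inside VM2 and comparing against the sender's digest catches any corruption introduced by the zero-copy dequeue path.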
+
+Test Case 5: VM2VM virtio-net split ring non-mergeable zero copy test with large packet payload valid check
+===========================================================================================================
+
+1. Launch the Vhost sample by below commands::
+
+    rm -rf vhost-net*
+    ./testpmd -c 0xF0000000 -n 4 --socket-mem 2048,2048 --legacy-mem --no-pci --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,queues=1,dequeue-zero-copy=1' --vdev 'net_vhost1,iface=vhost-net1,queues=1,dequeue-zero-copy=1'  -- -i --nb-cores=1 --txd=1024 --rxd=1024
+    testpmd>start
+
+2. Launch VM1 and VM2::
+
+    qemu-system-x86_64 -name us-vhost-vm1 \
+     -cpu host -enable-kvm -m 2048 -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on -numa node,memdev=mem -mem-prealloc \
+     -smp cores=2,sockets=1 -drive file=/home/osimg/ubuntu16-1.img  \
+     -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,vlan=2,macaddr=00:00:00:08:e8:aa,addr=1f -net user,vlan=2,hostfwd=tcp:127.0.0.1:6004-:22 \
+     -chardev socket,id=char0,path=./vhost-net0 -netdev type=vhost-user,id=mynet1,chardev=char0,vhostforce \
+     -device virtio-net-pci,mac=52:54:00:00:00:01,netdev=mynet1,mrg_rxbuf=off \
+     -vnc :12 -daemonize
+
+    qemu-system-x86_64 -name us-vhost-vm2 \
+     -cpu host -enable-kvm -m 2048 -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on -numa node,memdev=mem -mem-prealloc \
+     -smp cores=2,sockets=1 -drive file=/home/osimg/ubuntu16-2.img \
+     -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,vlan=2,macaddr=00:00:00:08:e8:aa,addr=1f -net user,vlan=2,hostfwd=tcp:127.0.0.1:6005-:22 \
+     -chardev socket,id=char1,path=./vhost-net1 -netdev type=vhost-user,id=mynet2,chardev=char1,vhostforce \
+     -device virtio-net-pci,mac=52:54:00:00:00:02,netdev=mynet2,mrg_rxbuf=off  \
+     -vnc :11 -daemonize
+
+3. On VM1, set virtio device IP and run arp protocol::
+
+    ifconfig ens3 1.1.1.2
+    arp -s 1.1.1.8 52:54:00:00:00:02
+
+4. On VM2, set virtio device IP and run arp protocol::
+
+    ifconfig ens3 1.1.1.8
+    arp -s 1.1.1.2 52:54:00:00:00:01
+
+5. Scp a 64KB file from VM1 to VM2::
+
+    Under VM1, run: `scp [xxx] root@1.1.1.8:/`   [xxx] is the file name
+
+Test Case 6: VM2VM packed ring vhost-user/virtio-net test with tcp traffic
+==========================================================================
+
+1. Launch the Vhost sample by below commands::
+
+    rm -rf vhost-net*
+    ./testpmd -c 0xF0000000 -n 4 --socket-mem 2048,2048 --legacy-mem --no-pci --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,queues=1' --vdev 'net_vhost1,iface=vhost-net1,queues=1'  -- -i --nb-cores=1 --txd=1024 --rxd=1024
+    testpmd>start
+
+2. Launch VM1 and VM2::
+
+    qemu-system-x86_64 -name us-vhost-vm1 \
+     -cpu host -enable-kvm -m 2048 -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on -numa node,memdev=mem -mem-prealloc \
+     -smp cores=2,sockets=1 -drive file=/home/osimg/ubuntu16-1.img  \
+     -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,vlan=2,macaddr=00:00:00:08:e8:aa,addr=1f -net user,vlan=2,hostfwd=tcp:127.0.0.1:6004-:22 \
+     -chardev socket,id=char0,path=./vhost-net0 -netdev type=vhost-user,id=mynet1,chardev=char0,vhostforce \
+     -device virtio-net-pci,mac=52:54:00:00:00:01,netdev=mynet1,mrg_rxbuf=off,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,packed=on \
+     -vnc :12 -daemonize
+
+    qemu-system-x86_64 -name us-vhost-vm2 \
+     -cpu host -enable-kvm -m 2048 -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on -numa node,memdev=mem -mem-prealloc \
+     -smp cores=2,sockets=1 -drive file=/home/osimg/ubuntu16-2.img \
+     -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,vlan=2,macaddr=00:00:00:08:e8:aa,addr=1f -net user,vlan=2,hostfwd=tcp:127.0.0.1:6005-:22 \
+     -chardev socket,id=char1,path=./vhost-net1 -netdev type=vhost-user,id=mynet2,chardev=char1,vhostforce \
+     -device virtio-net-pci,mac=52:54:00:00:00:02,netdev=mynet2,mrg_rxbuf=off,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,packed=on \
+     -vnc :11 -daemonize
+
+3. On VM1, set virtio device IP and run arp protocol::
+
+    ifconfig ens3 1.1.1.2
+    arp -s 1.1.1.8 52:54:00:00:00:02
+
+4. On VM2, set virtio device IP and run arp protocol::
 
     ifconfig ens3 1.1.1.8
     arp -s 1.1.1.2 52:54:00:00:00:01
@@ -139,14 +301,15 @@ Test Case 2: VM2VM vhost-user/virtio-net zero-copy test with tcp traffic
     Under VM1, run: `iperf -s -i 1`
     Under VM2, run: `iperf -c 1.1.1.2 -i 1 -t 30`
 
-6. Check both 2VM can receive and send big packets to each other::
+6. Check that both VMs can receive and send big packets to each other::
 
     testpmd>show port xstats all
     Port 0 should have tx packets above 1522
     Port 1 should have rx packets above 1522
 
-Test Case 3: VM2VM vhost-user/virtio-net test with udp traffic
-==============================================================
+Test Case 7: VM2VM packed ring vhost-user/virtio-net test with udp traffic
+==========================================================================
+
 1. Launch the Vhost sample by below commands::
 
     rm -rf vhost-net*
@@ -155,28 +318,26 @@ Test Case 3: VM2VM vhost-user/virtio-net test with udp traffic
 
 2. Launch VM1 and VM2::
 
-    taskset -c 32-33 \
     qemu-system-x86_64 -name us-vhost-vm1 \
      -cpu host -enable-kvm -m 2048 -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on -numa node,memdev=mem -mem-prealloc \
      -smp cores=2,sockets=1 -drive file=/home/osimg/ubuntu16-1.img  \
      -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,vlan=2,macaddr=00:00:00:08:e8:aa,addr=1f -net user,vlan=2,hostfwd=tcp:127.0.0.1:6004-:22 \
      -chardev socket,id=char0,path=./vhost-net0 -netdev type=vhost-user,id=mynet1,chardev=char0,vhostforce \
-     -device virtio-net-pci,mac=52:54:00:00:00:01,netdev=mynet1,mrg_rxbuf=off,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on -vnc :10 -daemonize
+     -device virtio-net-pci,mac=52:54:00:00:00:01,netdev=mynet1,mrg_rxbuf=off,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on,packed=on -vnc :10 -daemonize
 
-    taskset -c 34-35 \
     qemu-system-x86_64 -name us-vhost-vm2 \
      -cpu host -enable-kvm -m 2048 -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on -numa node,memdev=mem -mem-prealloc \
      -smp cores=2,sockets=1 -drive file=/home/osimg/ubuntu16-2.img \
      -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,vlan=2,macaddr=00:00:00:08:e8:aa,addr=1f -net user,vlan=2,hostfwd=tcp:127.0.0.1:6005-:22 \
      -chardev socket,id=char1,path=./vhost-net1 -netdev type=vhost-user,id=mynet2,chardev=char1,vhostforce \
-     -device virtio-net-pci,mac=52:54:00:00:00:02,netdev=mynet2,mrg_rxbuf=off,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on -vnc :11 -daemonize
+     -device virtio-net-pci,mac=52:54:00:00:00:02,netdev=mynet2,mrg_rxbuf=off,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on,packed=on -vnc :11 -daemonize
 
 3. On VM1, set virtio device IP and run arp protocol::
 
     ifconfig ens3 1.1.1.2
     arp -s 1.1.1.8 52:54:00:00:00:02
 
 4. On VM2, set virtio device IP and run arp protocol::
 
     ifconfig ens3 1.1.1.8
     arp -s 1.1.1.2 52:54:00:00:00:01
@@ -192,9 +353,8 @@ Test Case 3: VM2VM vhost-user/virtio-net test with udp traffic
     Port 0 should have tx packets above 1522
     Port 1 should have rx packets above 1522
 
-
-Test Case 4: Check virtio-net device capability
-===============================================
+Test Case 8: Check packed ring virtio-net device capability
+===========================================================
 
 1. Launch the Vhost sample by below commands::
 
@@ -204,21 +364,19 @@ Test Case 4: Check virtio-net device capability
 
 2. Launch VM1 and VM2, setting TSO and UFO on in the qemu command::
 
-    taskset -c 32-33 \
     qemu-system-x86_64 -name us-vhost-vm1 \
      -cpu host -enable-kvm -m 2048 -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on -numa node,memdev=mem -mem-prealloc \
      -smp cores=2,sockets=1 -drive file=/home/osimg/ubuntu16-1.img  \
      -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,vlan=2,macaddr=00:00:00:08:e8:aa,addr=1f -net user,vlan=2,hostfwd=tcp:127.0.0.1:6004-:22 \
      -chardev socket,id=char0,path=./vhost-net0 -netdev type=vhost-user,id=mynet1,chardev=char0,vhostforce \
-     -device virtio-net-pci,mac=52:54:00:00:00:01,netdev=mynet1,mrg_rxbuf=off,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on -vnc :10 -daemonize
+     -device virtio-net-pci,mac=52:54:00:00:00:01,netdev=mynet1,mrg_rxbuf=off,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on,packed=on -vnc :10 -daemonize
 
-    taskset -c 34-35 \
     qemu-system-x86_64 -name us-vhost-vm2 \
      -cpu host -enable-kvm -m 2048 -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on -numa node,memdev=mem -mem-prealloc \
      -smp cores=2,sockets=1 -drive file=/home/osimg/ubuntu16-2.img \
      -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,vlan=2,macaddr=00:00:00:08:e8:aa,addr=1f -net user,vlan=2,hostfwd=tcp:127.0.0.1:6005-:22 \
      -chardev socket,id=char1,path=./vhost-net1 -netdev type=vhost-user,id=mynet2,chardev=char1,vhostforce \
-     -device virtio-net-pci,mac=52:54:00:00:00:02,netdev=mynet2,mrg_rxbuf=off,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on -vnc :11 -daemonize
+     -device virtio-net-pci,mac=52:54:00:00:00:02,netdev=mynet2,mrg_rxbuf=off,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on,packed=on -vnc :11 -daemonize
 
 3. Check that UFO and TSO offloads are on for the virtio-net driver on VM1 and VM2::
 
@@ -234,13 +392,13 @@ Test Case 4: Check virtio-net device capability
     tx-tcp-ecn-segmentation: on
     tx-tcp6-segmentation: on
 
-Test Case 5: VM2VM vhost-user/virtio-net test with large packet payload valid check
-===================================================================================
+Test Case 9: VM2VM packed ring virtio-net mergeable dequeue zero copy test with large packet payload valid check
+================================================================================================================
 
 1. Launch the Vhost sample by below commands::
 
     rm -rf vhost-net*
-    ./testpmd -c 0xF0000000 -n 4 --socket-mem 2048,2048 --legacy-mem --no-pci --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,queues=1' --vdev 'net_vhost1,iface=vhost-net1,queues=1'  -- -i --nb-cores=1 --txd=1024 --rxd=1024
+    ./testpmd -c 0xF0000000 -n 4 --socket-mem 2048,2048 --legacy-mem --no-pci --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,queues=1,dequeue-zero-copy=1' --vdev 'net_vhost1,iface=vhost-net1,queues=1,dequeue-zero-copy=1'  -- -i --nb-cores=1 --txd=1024 --rxd=1024
     testpmd>start
 
 2. Launch VM1 and VM2::
@@ -250,7 +408,7 @@ Test Case 5: VM2VM vhost-user/virtio-net test with large packet payload valid ch
      -smp cores=2,sockets=1 -drive file=/home/osimg/ubuntu16-1.img  \
      -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,vlan=2,macaddr=00:00:00:08:e8:aa,addr=1f -net user,vlan=2,hostfwd=tcp:127.0.0.1:6004-:22 \
      -chardev socket,id=char0,path=./vhost-net0 -netdev type=vhost-user,id=mynet1,chardev=char0,vhostforce \
-     -device virtio-net-pci,mac=52:54:00:00:00:01,netdev=mynet1,mrg_rxbuf=on \
+     -device virtio-net-pci,mac=52:54:00:00:00:01,netdev=mynet1,mrg_rxbuf=on,packed=on \
      -vnc :12 -daemonize
 
     qemu-system-x86_64 -name us-vhost-vm2 \
@@ -258,19 +416,60 @@ Test Case 5: VM2VM vhost-user/virtio-net test with large packet payload valid ch
      -smp cores=2,sockets=1 -drive file=/home/osimg/ubuntu16-2.img \
      -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,vlan=2,macaddr=00:00:00:08:e8:aa,addr=1f -net user,vlan=2,hostfwd=tcp:127.0.0.1:6005-:22 \
      -chardev socket,id=char1,path=./vhost-net1 -netdev type=vhost-user,id=mynet2,chardev=char1,vhostforce \
-     -device virtio-net-pci,mac=52:54:00:00:00:02,netdev=mynet2,mrg_rxbuf=on  \
+     -device virtio-net-pci,mac=52:54:00:00:00:02,netdev=mynet2,mrg_rxbuf=on,packed=on \
+     -vnc :11 -daemonize
+
+3. On VM1, set virtio device IP and run arp protocol::
+
+    ifconfig ens3 1.1.1.2
+    arp -s 1.1.1.8 52:54:00:00:00:02
+
+4. On VM2, set virtio device IP and run arp protocol::
+
+    ifconfig ens3 1.1.1.8
+    arp -s 1.1.1.2 52:54:00:00:00:01
+
+5. Scp a 64KB file from VM1 to VM2::
+
+    Under VM1, run: `scp [xxx] root@1.1.1.8:/`   [xxx] is the file name
+
+Test Case 10: VM2VM packed ring virtio-net non-mergeable dequeue zero copy test with large packet payload valid check
+=====================================================================================================================
+
+1. Launch the Vhost sample by below commands::
+
+    rm -rf vhost-net*
+    ./testpmd -c 0xF0000000 -n 4 --socket-mem 2048,2048 --legacy-mem --no-pci --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,queues=1,dequeue-zero-copy=1' --vdev 'net_vhost1,iface=vhost-net1,queues=1,dequeue-zero-copy=1'  -- -i --nb-cores=1 --txd=1024 --rxd=1024
+    testpmd>start
+
+2. Launch VM1 and VM2::
+
+    qemu-system-x86_64 -name us-vhost-vm1 \
+     -cpu host -enable-kvm -m 2048 -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on -numa node,memdev=mem -mem-prealloc \
+     -smp cores=2,sockets=1 -drive file=/home/osimg/ubuntu16-1.img  \
+     -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,vlan=2,macaddr=00:00:00:08:e8:aa,addr=1f -net user,vlan=2,hostfwd=tcp:127.0.0.1:6004-:22 \
+     -chardev socket,id=char0,path=./vhost-net0 -netdev type=vhost-user,id=mynet1,chardev=char0,vhostforce \
+     -device virtio-net-pci,mac=52:54:00:00:00:01,netdev=mynet1,mrg_rxbuf=off,packed=on \
+     -vnc :12 -daemonize
+
+    qemu-system-x86_64 -name us-vhost-vm2 \
+     -cpu host -enable-kvm -m 2048 -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on -numa node,memdev=mem -mem-prealloc \
+     -smp cores=2,sockets=1 -drive file=/home/osimg/ubuntu16-2.img \
+     -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,vlan=2,macaddr=00:00:00:08:e8:aa,addr=1f -net user,vlan=2,hostfwd=tcp:127.0.0.1:6005-:22 \
+     -chardev socket,id=char1,path=./vhost-net1 -netdev type=vhost-user,id=mynet2,chardev=char1,vhostforce \
+     -device virtio-net-pci,mac=52:54:00:00:00:02,netdev=mynet2,mrg_rxbuf=off,packed=on  \
      -vnc :11 -daemonize
 
 3. On VM1, set virtio device IP and run arp protocol::
 
     ifconfig ens3 1.1.1.2
     arp -s 1.1.1.8 52:54:00:00:00:02
 
 4. On VM2, set virtio device IP and run arp protocol::
 
     ifconfig ens3 1.1.1.8
     arp -s 1.1.1.2 52:54:00:00:00:01
 
 5. Scp a 64KB file from VM1 to VM2::
 
-    Under VM1, run: `scp [xxx] root@1.1.1.8:/`   [xxx] is the file name
\ No newline at end of file
+    Under VM1, run: `scp [xxx] root@1.1.1.8:/`   [xxx] is the file name
-- 
2.17.1

