From: Yinan <yinan.wang@intel.com>
To: dts@dpdk.org
Cc: Wang Yinan <yinan.wang@intel.com>
Subject: [dts] [PATCH v1] test_plans: add packed virtqueue test in vm2vm topology
Date: Thu, 19 Dec 2019 19:41:30 +0000	[thread overview]
Message-ID: <20191219194130.77500-1-yinan.wang@intel.com> (raw)

From: Wang Yinan <yinan.wang@intel.com>

Add packed virtqueue test in vm2vm_virtio_net_perf_test_plan.rst

Signed-off-by: Wang Yinan <yinan.wang@intel.com>
---
 .../vm2vm_virtio_net_perf_test_plan.rst       | 285 +++++++++++++++---
 1 file changed, 242 insertions(+), 43 deletions(-)

diff --git a/test_plans/vm2vm_virtio_net_perf_test_plan.rst b/test_plans/vm2vm_virtio_net_perf_test_plan.rst
index 96f0cc4..0fe8400 100644
--- a/test_plans/vm2vm_virtio_net_perf_test_plan.rst
+++ b/test_plans/vm2vm_virtio_net_perf_test_plan.rst
@@ -8,7 +8,7 @@
    - Redistributions of source code must retain the above copyright
      notice, this list of conditions and the following disclaimer.
 
-   - Redistributions in binary form must reproduce the above copyright
+   - Redistributions in binary form must reproduce the above copyright
      notice, this list of conditions and the following disclaimer in
      the documentation and/or other materials provided with the
      distribution.
@@ -30,7 +30,6 @@
    ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED
    OF THE POSSIBILITY OF SUCH DAMAGE.
 
-
 =====================================
 vm2vm vhost-user/virtio-net test plan
 =====================================
@@ -38,15 +37,15 @@ vm2vm vhost-user/virtio-net test plan
 Description
 ===========
 
-The feature enabled the DPDK Vhost TX offload (TSO and UFO). The feature added the negotiation between DPDK user space vhost and virtio-net, so we will verify the TSO/cksum in the TCP/IP stack enabled environment and UFO/cksum in the UDP/IP stack enabled environment with vm2vm vhost-user/virtio-net normal path. Also add case to check the payload of large packet is valid by scp with vm2vm vhost-user/virtio-net mergeable path test.
+This test plan tests the vhost tx offload (TSO and UFO) function by verifying TSO/cksum in a TCP/IP stack enabled environment and UFO/cksum in a UDP/IP stack enabled environment with the vm2vm split ring and packed ring vhost-user/virtio-net non-mergeable paths. Cases are also added to check that the payload of large packets is valid with the vm2vm split ring and packed ring vhost-user/virtio-net mergeable and non-mergeable dequeue zero copy tests. The packed virtqueue tests require qemu version > 4.2.0.
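+
+Since packed virtqueue support depends on the qemu build, it can help to confirm the qemu version before running the packed ring cases; a minimal check, assuming qemu-system-x86_64 is the binary used to launch the VMs::
+
+    qemu-system-x86_64 --version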
 
 Test flow
 =========
 
 Virtio-net <-> Vhost <-> Testpmd <-> Vhost <-> Virtio-net
 
-Test Case 1: VM2VM vhost-user/virtio-net test with tcp traffic
-==============================================================
+Test Case 1: VM2VM split ring vhost-user/virtio-net test with tcp traffic
+=========================================================================
 
 1. Launch the Vhost sample by below commands::
 
@@ -56,7 +55,6 @@ Test Case 1: VM2VM vhost-user/virtio-net test with tcp traffic
 
 2. Launch VM1 and VM2::
 
-    taskset -c 32-33 \
     qemu-system-x86_64 -name us-vhost-vm1 \
      -cpu host -enable-kvm -m 2048 -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on -numa node,memdev=mem -mem-prealloc \
      -smp cores=2,sockets=1 -drive file=/home/osimg/ubuntu16-1.img  \
@@ -65,7 +63,6 @@ Test Case 1: VM2VM vhost-user/virtio-net test with tcp traffic
      -device virtio-net-pci,mac=52:54:00:00:00:01,netdev=mynet1,mrg_rxbuf=off,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on \
      -vnc :12 -daemonize
 
-    taskset -c 34-35 \
     qemu-system-x86_64 -name us-vhost-vm2 \
      -cpu host -enable-kvm -m 2048 -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on -numa node,memdev=mem -mem-prealloc \
      -smp cores=2,sockets=1 -drive file=/home/osimg/ubuntu16-2.img \
@@ -74,12 +71,12 @@ Test Case 1: VM2VM vhost-user/virtio-net test with tcp traffic
      -device virtio-net-pci,mac=52:54:00:00:00:02,netdev=mynet2,mrg_rxbuf=off,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on \
      -vnc :11 -daemonize
 
-3. On VM1, set virtio device IP and run arp protocol::
+3. On VM1, set virtio device IP and run arp protocol::
 
     ifconfig ens3 1.1.1.2
     arp -s 1.1.1.8 52:54:00:00:00:02
 
-4. On VM2, set virtio device IP and run arp protocol::
+4. On VM2, set virtio device IP and run arp protocol::
 
     ifconfig ens3 1.1.1.8
     arp -s 1.1.1.2 52:54:00:00:00:01
@@ -95,41 +92,206 @@ Test Case 1: VM2VM vhost-user/virtio-net test with tcp traffic
     Port 0 should have tx packets above 1522
     Port 1 should have rx packets above 1522
 
-Test Case 2: VM2VM vhost-user/virtio-net zero-copy test with tcp traffic
-========================================================================
+Test Case 2: VM2VM split ring vhost-user/virtio-net test with udp traffic
+=========================================================================
 
 1. Launch the Vhost sample by below commands::
 
     rm -rf vhost-net*
-    testpmd>./testpmd -c 0xF0000000 -n 4 --socket-mem 2048,2048 --legacy-mem --no-pci --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,queues=1,dequeue-zero-copy=1' --vdev 'net_vhost1,iface=vhost-net1,queues=1,dequeue-zero-copy=1'  -- -i --nb-cores=1 --txd=1024 --rxd=1024 --txfreet=992
+    testpmd>./testpmd -c 0xF0000000 -n 4 --socket-mem 2048,2048 --legacy-mem --no-pci --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,queues=1' --vdev 'net_vhost1,iface=vhost-net1,queues=1'  -- -i --nb-cores=1 --txd=1024 --rxd=1024
     testpmd>start
 
 2. Launch VM1 and VM2::
 
-    taskset -c 32-33 \
     qemu-system-x86_64 -name us-vhost-vm1 \
      -cpu host -enable-kvm -m 2048 -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on -numa node,memdev=mem -mem-prealloc \
      -smp cores=2,sockets=1 -drive file=/home/osimg/ubuntu16-1.img  \
      -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,vlan=2,macaddr=00:00:00:08:e8:aa,addr=1f -net user,vlan=2,hostfwd=tcp:127.0.0.1:6004-:22 \
      -chardev socket,id=char0,path=./vhost-net0 -netdev type=vhost-user,id=mynet1,chardev=char0,vhostforce \
-     -device virtio-net-pci,mac=52:54:00:00:00:01,netdev=mynet1,mrg_rxbuf=off,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on \
-     -vnc :10 -daemonize
+     -device virtio-net-pci,mac=52:54:00:00:00:01,netdev=mynet1,mrg_rxbuf=off,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on -vnc :10 -daemonize
 
-    taskset -c 34-35 \
     qemu-system-x86_64 -name us-vhost-vm2 \
      -cpu host -enable-kvm -m 2048 -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on -numa node,memdev=mem -mem-prealloc \
      -smp cores=2,sockets=1 -drive file=/home/osimg/ubuntu16-2.img \
      -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,vlan=2,macaddr=00:00:00:08:e8:aa,addr=1f -net user,vlan=2,hostfwd=tcp:127.0.0.1:6005-:22 \
      -chardev socket,id=char1,path=./vhost-net1 -netdev type=vhost-user,id=mynet2,chardev=char1,vhostforce \
-     -device virtio-net-pci,mac=52:54:00:00:00:02,netdev=mynet2,mrg_rxbuf=off,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on \
+     -device virtio-net-pci,mac=52:54:00:00:00:02,netdev=mynet2,mrg_rxbuf=off,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on -vnc :11 -daemonize
+
+3. On VM1, set virtio device IP and run arp protocol::
+
+    ifconfig ens3 1.1.1.2
+    arp -s 1.1.1.8 52:54:00:00:00:02
+
+4. On VM2, set virtio device IP and run arp protocol::
+
+    ifconfig ens3 1.1.1.8
+    arp -s 1.1.1.2 52:54:00:00:00:01
+
+5. Check the iperf performance between two VMs by below commands::
+
+    Under VM1, run: `iperf -s -u -i 1`
+    Under VM2, run: `iperf -c 1.1.1.2 -i 1 -t 30 -P 4 -u -b 1G -l 9000`
+
+6. Check that both VMs can receive and send big packets to each other::
+
+    testpmd>show port xstats all
+    Port 0 should have tx packets above 1522
+    Port 1 should have rx packets above 1522
+
+Test Case 3: Check split ring virtio-net device capability
+==========================================================
+
+1. Launch the Vhost sample by below commands::
+
+    rm -rf vhost-net*
+    testpmd>./testpmd -c 0xF0000000 -n 4 --socket-mem 2048,2048 --legacy-mem --no-pci --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,queues=1' --vdev 'net_vhost1,iface=vhost-net1,queues=1'  -- -i --nb-cores=1 --txd=1024 --rxd=1024
+    testpmd>start
+
+2. Launch VM1 and VM2, set TSO and UFO on in the qemu command::
+
+    qemu-system-x86_64 -name us-vhost-vm1 \
+     -cpu host -enable-kvm -m 2048 -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on -numa node,memdev=mem -mem-prealloc \
+     -smp cores=2,sockets=1 -drive file=/home/osimg/ubuntu16-1.img  \
+     -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,vlan=2,macaddr=00:00:00:08:e8:aa,addr=1f -net user,vlan=2,hostfwd=tcp:127.0.0.1:6004-:22 \
+     -chardev socket,id=char0,path=./vhost-net0 -netdev type=vhost-user,id=mynet1,chardev=char0,vhostforce \
+     -device virtio-net-pci,mac=52:54:00:00:00:01,netdev=mynet1,mrg_rxbuf=off,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on -vnc :10 -daemonize
+
+    qemu-system-x86_64 -name us-vhost-vm2 \
+     -cpu host -enable-kvm -m 2048 -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on -numa node,memdev=mem -mem-prealloc \
+     -smp cores=2,sockets=1 -drive file=/home/osimg/ubuntu16-2.img \
+     -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,vlan=2,macaddr=00:00:00:08:e8:aa,addr=1f -net user,vlan=2,hostfwd=tcp:127.0.0.1:6005-:22 \
+     -chardev socket,id=char1,path=./vhost-net1 -netdev type=vhost-user,id=mynet2,chardev=char1,vhostforce \
+     -device virtio-net-pci,mac=52:54:00:00:00:02,netdev=mynet2,mrg_rxbuf=off,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on -vnc :11 -daemonize
+
+3. Check UFO and TSO offload status on for the Virtio-net driver on VM1 and VM2::
+
+    Under VM1, run: `ethtool -k ens3`
+    udp-fragmentation-offload: on
+    tx-tcp-segmentation: on
+    tx-tcp-ecn-segmentation: on
+    tx-tcp6-segmentation: on
+
+    Under VM2, run: `ethtool -k ens3`
+    udp-fragmentation-offload: on
+    tx-tcp-segmentation: on
+    tx-tcp-ecn-segmentation: on
+    tx-tcp6-segmentation: on
+
+Test Case 4: VM2VM virtio-net split ring mergeable zero copy test with large packet payload valid check
+=======================================================================================================
+
+1. Launch the Vhost sample by below commands::
+
+    rm -rf vhost-net*
+    ./testpmd -c 0xF0000000 -n 4 --socket-mem 2048,2048 --legacy-mem --no-pci --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,queues=1,dequeue-zero-copy=1' --vdev 'net_vhost1,iface=vhost-net1,queues=1,dequeue-zero-copy=1'  -- -i --nb-cores=1 --txd=1024 --rxd=1024
+    testpmd>start
+
+2. Launch VM1 and VM2::
+
+    qemu-system-x86_64 -name us-vhost-vm1 \
+     -cpu host -enable-kvm -m 2048 -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on -numa node,memdev=mem -mem-prealloc \
+     -smp cores=2,sockets=1 -drive file=/home/osimg/ubuntu16-1.img  \
+     -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,vlan=2,macaddr=00:00:00:08:e8:aa,addr=1f -net user,vlan=2,hostfwd=tcp:127.0.0.1:6004-:22 \
+     -chardev socket,id=char0,path=./vhost-net0 -netdev type=vhost-user,id=mynet1,chardev=char0,vhostforce \
+     -device virtio-net-pci,mac=52:54:00:00:00:01,netdev=mynet1,mrg_rxbuf=on \
+     -vnc :12 -daemonize
+
+    qemu-system-x86_64 -name us-vhost-vm2 \
+     -cpu host -enable-kvm -m 2048 -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on -numa node,memdev=mem -mem-prealloc \
+     -smp cores=2,sockets=1 -drive file=/home/osimg/ubuntu16-2.img \
+     -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,vlan=2,macaddr=00:00:00:08:e8:aa,addr=1f -net user,vlan=2,hostfwd=tcp:127.0.0.1:6005-:22 \
+     -chardev socket,id=char1,path=./vhost-net1 -netdev type=vhost-user,id=mynet2,chardev=char1,vhostforce \
+     -device virtio-net-pci,mac=52:54:00:00:00:02,netdev=mynet2,mrg_rxbuf=on  \
      -vnc :11 -daemonize
 
-3. On VM1, set virtio device IP and run arp protocol::
+3. On VM1, set virtio device IP and run arp protocol::
 
     ifconfig ens3 1.1.1.2
     arp -s 1.1.1.8 52:54:00:00:00:02
 
-4. On VM2, set virtio device IP and run arp protocol::
+4. On VM2, set virtio device IP and run arp protocol::
+
+    ifconfig ens3 1.1.1.8
+    arp -s 1.1.1.2 52:54:00:00:00:01
+
+5. Scp 64KB file from VM1 to VM2::
+
+    Under VM1, run: `scp [xxx] root@1.1.1.8:/`   [xxx] is the file name
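+
+    To verify the payload, a checksum comparison can be used (a sketch only; /root/test64k is an assumed file name):
+    Under VM1, before the scp, run: `dd if=/dev/urandom of=/root/test64k bs=1K count=64` and `md5sum /root/test64k`
+    Under VM2, after the scp, run: `md5sum /test64k` and check that it matches the checksum from VM1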
+
+Test Case 5: VM2VM virtio-net split ring non-mergeable zero copy test with large packet payload valid check
+===========================================================================================================
+
+1. Launch the Vhost sample by below commands::
+
+    rm -rf vhost-net*
+    ./testpmd -c 0xF0000000 -n 4 --socket-mem 2048,2048 --legacy-mem --no-pci --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,queues=1,dequeue-zero-copy=1' --vdev 'net_vhost1,iface=vhost-net1,queues=1,dequeue-zero-copy=1'  -- -i --nb-cores=1 --txd=1024 --rxd=1024
+    testpmd>start
+
+2. Launch VM1 and VM2::
+
+    qemu-system-x86_64 -name us-vhost-vm1 \
+     -cpu host -enable-kvm -m 2048 -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on -numa node,memdev=mem -mem-prealloc \
+     -smp cores=2,sockets=1 -drive file=/home/osimg/ubuntu16-1.img  \
+     -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,vlan=2,macaddr=00:00:00:08:e8:aa,addr=1f -net user,vlan=2,hostfwd=tcp:127.0.0.1:6004-:22 \
+     -chardev socket,id=char0,path=./vhost-net0 -netdev type=vhost-user,id=mynet1,chardev=char0,vhostforce \
+     -device virtio-net-pci,mac=52:54:00:00:00:01,netdev=mynet1,mrg_rxbuf=off \
+     -vnc :12 -daemonize
+
+    qemu-system-x86_64 -name us-vhost-vm2 \
+     -cpu host -enable-kvm -m 2048 -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on -numa node,memdev=mem -mem-prealloc \
+     -smp cores=2,sockets=1 -drive file=/home/osimg/ubuntu16-2.img \
+     -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,vlan=2,macaddr=00:00:00:08:e8:aa,addr=1f -net user,vlan=2,hostfwd=tcp:127.0.0.1:6005-:22 \
+     -chardev socket,id=char1,path=./vhost-net1 -netdev type=vhost-user,id=mynet2,chardev=char1,vhostforce \
+     -device virtio-net-pci,mac=52:54:00:00:00:02,netdev=mynet2,mrg_rxbuf=off  \
+     -vnc :11 -daemonize
+
+3. On VM1, set virtio device IP and run arp protocol::
+
+    ifconfig ens3 1.1.1.2
+    arp -s 1.1.1.8 52:54:00:00:00:02
+
+4. On VM2, set virtio device IP and run arp protocol::
+
+    ifconfig ens3 1.1.1.8
+    arp -s 1.1.1.2 52:54:00:00:00:01
+
+5. Scp 64KB file from VM1 to VM2::
+
+    Under VM1, run: `scp [xxx] root@1.1.1.8:/`   [xxx] is the file name
+
+Test Case 6: VM2VM packed ring vhost-user/virtio-net test with tcp traffic
+==========================================================================
+
+1. Launch the Vhost sample by below commands::
+
+    rm -rf vhost-net*
+    ./testpmd -c 0xF0000000 -n 4 --socket-mem 2048,2048 --legacy-mem --no-pci --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,queues=1' --vdev 'net_vhost1,iface=vhost-net1,queues=1'  -- -i --nb-cores=1 --txd=1024 --rxd=1024
+    testpmd>start
+
+2. Launch VM1 and VM2::
+
+    qemu-system-x86_64 -name us-vhost-vm1 \
+     -cpu host -enable-kvm -m 2048 -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on -numa node,memdev=mem -mem-prealloc \
+     -smp cores=2,sockets=1 -drive file=/home/osimg/ubuntu16-1.img  \
+     -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,vlan=2,macaddr=00:00:00:08:e8:aa,addr=1f -net user,vlan=2,hostfwd=tcp:127.0.0.1:6004-:22 \
+     -chardev socket,id=char0,path=./vhost-net0 -netdev type=vhost-user,id=mynet1,chardev=char0,vhostforce \
+     -device virtio-net-pci,mac=52:54:00:00:00:01,netdev=mynet1,mrg_rxbuf=off,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,packed=on \
+     -vnc :12 -daemonize
+
+    qemu-system-x86_64 -name us-vhost-vm2 \
+     -cpu host -enable-kvm -m 2048 -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on -numa node,memdev=mem -mem-prealloc \
+     -smp cores=2,sockets=1 -drive file=/home/osimg/ubuntu16-2.img \
+     -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,vlan=2,macaddr=00:00:00:08:e8:aa,addr=1f -net user,vlan=2,hostfwd=tcp:127.0.0.1:6005-:22 \
+     -chardev socket,id=char1,path=./vhost-net1 -netdev type=vhost-user,id=mynet2,chardev=char1,vhostforce \
+     -device virtio-net-pci,mac=52:54:00:00:00:02,netdev=mynet2,mrg_rxbuf=off,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,packed=on \
+     -vnc :11 -daemonize
+
+3. On VM1, set virtio device IP and run arp protocol::
+
+    ifconfig ens3 1.1.1.2
+    arp -s 1.1.1.8 52:54:00:00:00:02
+
+4. On VM2, set virtio device IP and run arp protocol::
 
     ifconfig ens3 1.1.1.8
     arp -s 1.1.1.2 52:54:00:00:00:01
@@ -139,14 +301,15 @@ Test Case 2: VM2VM vhost-user/virtio-net zero-copy test with tcp traffic
     Under VM1, run: `iperf -s -i 1`
     Under VM2, run: `iperf -c 1.1.1.2 -i 1 -t 30`
 
-6. Check both 2VM can receive and send big packets to each other::
+6. Check that both VMs can receive and send big packets to each other::
 
     testpmd>show port xstats all
     Port 0 should have tx packets above 1522
     Port 1 should have rx packets above 1522
 
-Test Case 3: VM2VM vhost-user/virtio-net test with udp traffic
-==============================================================
+Test Case 7: VM2VM packed ring vhost-user/virtio-net test with udp traffic
+==========================================================================
+
 1. Launch the Vhost sample by below commands::
 
     rm -rf vhost-net*
@@ -155,28 +318,26 @@ Test Case 3: VM2VM vhost-user/virtio-net test with udp traffic
 
 2. Launch VM1 and VM2::
 
-    taskset -c 32-33 \
     qemu-system-x86_64 -name us-vhost-vm1 \
      -cpu host -enable-kvm -m 2048 -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on -numa node,memdev=mem -mem-prealloc \
      -smp cores=2,sockets=1 -drive file=/home/osimg/ubuntu16-1.img  \
      -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,vlan=2,macaddr=00:00:00:08:e8:aa,addr=1f -net user,vlan=2,hostfwd=tcp:127.0.0.1:6004-:22 \
      -chardev socket,id=char0,path=./vhost-net0 -netdev type=vhost-user,id=mynet1,chardev=char0,vhostforce \
-     -device virtio-net-pci,mac=52:54:00:00:00:01,netdev=mynet1,mrg_rxbuf=off,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on -vnc :10 -daemonize
+     -device virtio-net-pci,mac=52:54:00:00:00:01,netdev=mynet1,mrg_rxbuf=off,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on,packed=on -vnc :10 -daemonize
 
-    taskset -c 34-35 \
     qemu-system-x86_64 -name us-vhost-vm2 \
      -cpu host -enable-kvm -m 2048 -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on -numa node,memdev=mem -mem-prealloc \
      -smp cores=2,sockets=1 -drive file=/home/osimg/ubuntu16-2.img \
      -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,vlan=2,macaddr=00:00:00:08:e8:aa,addr=1f -net user,vlan=2,hostfwd=tcp:127.0.0.1:6005-:22 \
      -chardev socket,id=char1,path=./vhost-net1 -netdev type=vhost-user,id=mynet2,chardev=char1,vhostforce \
-     -device virtio-net-pci,mac=52:54:00:00:00:02,netdev=mynet2,mrg_rxbuf=off,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on -vnc :11 -daemonize
+     -device virtio-net-pci,mac=52:54:00:00:00:02,netdev=mynet2,mrg_rxbuf=off,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on,packed=on -vnc :11 -daemonize
 
-3. On VM1, set virtio device IP and run arp protocol::
+3. On VM1, set virtio device IP and run arp protocol::
 
     ifconfig ens3 1.1.1.2
     arp -s 1.1.1.8 52:54:00:00:00:02
 
-4. On VM2, set virtio device IP and run arp protocol::
+4. On VM2, set virtio device IP and run arp protocol::
 
     ifconfig ens3 1.1.1.8
     arp -s 1.1.1.2 52:54:00:00:00:01
@@ -192,9 +353,8 @@ Test Case 3: VM2VM vhost-user/virtio-net test with udp traffic
     Port 0 should have tx packets above 1522
     Port 1 should have rx packets above 1522
 
-
-Test Case 4: Check virtio-net device capability
-===============================================
+Test Case 8: Check packed ring virtio-net device capability
+===========================================================
 
 1. Launch the Vhost sample by below commands::
 
@@ -204,21 +364,19 @@ Test Case 4: Check virtio-net device capability
 
 2. Launch VM1 and VM2,set TSO and UFO on in qemu command::
 
-    taskset -c 32-33 \
     qemu-system-x86_64 -name us-vhost-vm1 \
      -cpu host -enable-kvm -m 2048 -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on -numa node,memdev=mem -mem-prealloc \
      -smp cores=2,sockets=1 -drive file=/home/osimg/ubuntu16-1.img  \
      -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,vlan=2,macaddr=00:00:00:08:e8:aa,addr=1f -net user,vlan=2,hostfwd=tcp:127.0.0.1:6004-:22 \
      -chardev socket,id=char0,path=./vhost-net0 -netdev type=vhost-user,id=mynet1,chardev=char0,vhostforce \
-     -device virtio-net-pci,mac=52:54:00:00:00:01,netdev=mynet1,mrg_rxbuf=off,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on -vnc :10 -daemonize
+     -device virtio-net-pci,mac=52:54:00:00:00:01,netdev=mynet1,mrg_rxbuf=off,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on,packed=on -vnc :10 -daemonize
 
-    taskset -c 34-35 \
     qemu-system-x86_64 -name us-vhost-vm2 \
      -cpu host -enable-kvm -m 2048 -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on -numa node,memdev=mem -mem-prealloc \
      -smp cores=2,sockets=1 -drive file=/home/osimg/ubuntu16-2.img \
      -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,vlan=2,macaddr=00:00:00:08:e8:aa,addr=1f -net user,vlan=2,hostfwd=tcp:127.0.0.1:6005-:22 \
      -chardev socket,id=char1,path=./vhost-net1 -netdev type=vhost-user,id=mynet2,chardev=char1,vhostforce \
-     -device virtio-net-pci,mac=52:54:00:00:00:02,netdev=mynet2,mrg_rxbuf=off,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on -vnc :11 -daemonize
+     -device virtio-net-pci,mac=52:54:00:00:00:02,netdev=mynet2,mrg_rxbuf=off,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on,packed=on -vnc :11 -daemonize
 
 3. Check UFO and TSO offload status on for the Virtio-net driver on VM1 and VM2::
 
@@ -234,13 +392,13 @@ Test Case 4: Check virtio-net device capability
     tx-tcp-ecn-segmentation: on
     tx-tcp6-segmentation: on
 
-Test Case 5: VM2VM vhost-user/virtio-net test with large packet payload valid check
-===================================================================================
+Test Case 9: VM2VM packed ring virtio-net mergeable dequeue zero copy test with large packet payload valid check
+================================================================================================================
 
 1. Launch the Vhost sample by below commands::
 
     rm -rf vhost-net*
-    ./testpmd -c 0xF0000000 -n 4 --socket-mem 2048,2048 --legacy-mem --no-pci --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,queues=1' --vdev 'net_vhost1,iface=vhost-net1,queues=1'  -- -i --nb-cores=1 --txd=1024 --rxd=1024
+    ./testpmd -c 0xF0000000 -n 4 --socket-mem 2048,2048 --legacy-mem --no-pci --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,queues=1,dequeue-zero-copy=1' --vdev 'net_vhost1,iface=vhost-net1,queues=1,dequeue-zero-copy=1'  -- -i --nb-cores=1 --txd=1024 --rxd=1024
     testpmd>start
 
 2. Launch VM1 and VM2::
@@ -250,7 +408,7 @@ Test Case 5: VM2VM vhost-user/virtio-net test with large packet payload valid ch
      -smp cores=2,sockets=1 -drive file=/home/osimg/ubuntu16-1.img  \
      -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,vlan=2,macaddr=00:00:00:08:e8:aa,addr=1f -net user,vlan=2,hostfwd=tcp:127.0.0.1:6004-:22 \
      -chardev socket,id=char0,path=./vhost-net0 -netdev type=vhost-user,id=mynet1,chardev=char0,vhostforce \
-     -device virtio-net-pci,mac=52:54:00:00:00:01,netdev=mynet1,mrg_rxbuf=on \
+     -device virtio-net-pci,mac=52:54:00:00:00:01,netdev=mynet1,mrg_rxbuf=on,packed=on \
      -vnc :12 -daemonize
 
     qemu-system-x86_64 -name us-vhost-vm2 \
@@ -258,19 +416,60 @@ Test Case 5: VM2VM vhost-user/virtio-net test with large packet payload valid ch
      -smp cores=2,sockets=1 -drive file=/home/osimg/ubuntu16-2.img \
      -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,vlan=2,macaddr=00:00:00:08:e8:aa,addr=1f -net user,vlan=2,hostfwd=tcp:127.0.0.1:6005-:22 \
      -chardev socket,id=char1,path=./vhost-net1 -netdev type=vhost-user,id=mynet2,chardev=char1,vhostforce \
-     -device virtio-net-pci,mac=52:54:00:00:00:02,netdev=mynet2,mrg_rxbuf=on  \
+     -device virtio-net-pci,mac=52:54:00:00:00:02,netdev=mynet2,mrg_rxbuf=on,packed=on \
+     -vnc :11 -daemonize
+
+3. On VM1, set virtio device IP and run arp protocol::
+
+    ifconfig ens3 1.1.1.2
+    arp -s 1.1.1.8 52:54:00:00:00:02
+
+4. On VM2, set virtio device IP and run arp protocol::
+
+    ifconfig ens3 1.1.1.8
+    arp -s 1.1.1.2 52:54:00:00:00:01
+
+5. Scp 64KB file from VM1 to VM2::
+
+    Under VM1, run: `scp [xxx] root@1.1.1.8:/`   [xxx] is the file name
+
+Test Case 10: VM2VM packed ring virtio-net non-mergeable dequeue zero copy test with large packet payload valid check
+=====================================================================================================================
+
+1. Launch the Vhost sample by below commands::
+
+    rm -rf vhost-net*
+    ./testpmd -c 0xF0000000 -n 4 --socket-mem 2048,2048 --legacy-mem --no-pci --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,queues=1,dequeue-zero-copy=1' --vdev 'net_vhost1,iface=vhost-net1,queues=1,dequeue-zero-copy=1'  -- -i --nb-cores=1 --txd=1024 --rxd=1024
+    testpmd>start
+
+2. Launch VM1 and VM2::
+
+    qemu-system-x86_64 -name us-vhost-vm1 \
+     -cpu host -enable-kvm -m 2048 -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on -numa node,memdev=mem -mem-prealloc \
+     -smp cores=2,sockets=1 -drive file=/home/osimg/ubuntu16-1.img  \
+     -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,vlan=2,macaddr=00:00:00:08:e8:aa,addr=1f -net user,vlan=2,hostfwd=tcp:127.0.0.1:6004-:22 \
+     -chardev socket,id=char0,path=./vhost-net0 -netdev type=vhost-user,id=mynet1,chardev=char0,vhostforce \
+     -device virtio-net-pci,mac=52:54:00:00:00:01,netdev=mynet1,mrg_rxbuf=off,packed=on \
+     -vnc :12 -daemonize
+
+    qemu-system-x86_64 -name us-vhost-vm2 \
+     -cpu host -enable-kvm -m 2048 -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on -numa node,memdev=mem -mem-prealloc \
+     -smp cores=2,sockets=1 -drive file=/home/osimg/ubuntu16-2.img \
+     -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,vlan=2,macaddr=00:00:00:08:e8:aa,addr=1f -net user,vlan=2,hostfwd=tcp:127.0.0.1:6005-:22 \
+     -chardev socket,id=char1,path=./vhost-net1 -netdev type=vhost-user,id=mynet2,chardev=char1,vhostforce \
+     -device virtio-net-pci,mac=52:54:00:00:00:02,netdev=mynet2,mrg_rxbuf=off,packed=on  \
      -vnc :11 -daemonize
 
-3. On VM1, set virtio device IP and run arp protocol::
+3. On VM1, set virtio device IP and run arp protocol::
 
     ifconfig ens3 1.1.1.2
     arp -s 1.1.1.8 52:54:00:00:00:02
 
-4. On VM2, set virtio device IP and run arp protocol::
+4. On VM2, set virtio device IP and run arp protocol::
 
     ifconfig ens3 1.1.1.8
     arp -s 1.1.1.2 52:54:00:00:00:01
 
 5. Scp 64KB file form VM1 to VM2::
 
-    Under VM1, run: `scp [xxx] root@1.1.1.8:/`   [xxx] is the file name
\ No newline at end of file
+    Under VM1, run: `scp [xxx] root@1.1.1.8:/`   [xxx] is the file name
-- 
2.17.1

