From: Yinan Wang <yinan.wang@intel.com>
To: dts@dpdk.org
Cc: Yinan Wang <yinan.wang@intel.com>
Subject: [dts] [PATCH v2] test_plans/vm2vm_virtio_pmd: add three cbdma cases
Date: Fri,  2 Jul 2021 13:28:01 -0400	[thread overview]
Message-ID: <20210702172801.839789-1-yinan.wang@intel.com> (raw)

Signed-off-by: Yinan Wang <yinan.wang@intel.com>
---
 test_plans/vm2vm_virtio_pmd_test_plan.rst | 246 ++++++++++++++++++++--
 1 file changed, 230 insertions(+), 16 deletions(-)

diff --git a/test_plans/vm2vm_virtio_pmd_test_plan.rst b/test_plans/vm2vm_virtio_pmd_test_plan.rst
index db410e48..0b1d4a7f 100644
--- a/test_plans/vm2vm_virtio_pmd_test_plan.rst
+++ b/test_plans/vm2vm_virtio_pmd_test_plan.rst
@@ -1,4 +1,4 @@
-.. Copyright (c) <2019>, Intel Corporation
+.. Copyright (c) <2021>, Intel Corporation
    All rights reserved.
 
    Redistribution and use in source and binary forms, with or without
@@ -38,20 +38,6 @@ This test plan includes vm2vm mergeable, normal and vector_rx path test with vir
 also add mergeable and normal path test with virtio 1.1. Specially, three mergeable path cases check the
 payload of each packets are valid by using pdump.
 
-Prerequisites
-=============
-
-Enable pcap lib in dpdk code and recompile::
-
-    --- a/config/common_base
-    +++ b/config/common_base
-    @@ -492,7 +492,7 @@ CONFIG_RTE_LIBRTE_PMD_NULL=y
-     #
-     # Compile software PMD backed by PCAP files
-     #
-    -CONFIG_RTE_LIBRTE_PMD_PCAP=n
-    +CONFIG_RTE_LIBRTE_PMD_PCAP=y
-
 Test flow
 =========
 Virtio-pmd <-> Vhost-user <-> Testpmd <-> Vhost-user <-> Virtio-pmd
@@ -593,4 +579,232 @@ Test Case 8: VM2VM vhost-user/virtio1.1-pmd with normal path
     xxxxx
     Throughput (since last show)
     RX-pps:            xxx
-    TX-pps:            xxx
\ No newline at end of file
+    TX-pps:            xxx
+
+Test Case 9: VM2VM virtio-pmd split ring mergeable path 8 queues CBDMA enable with server mode stable test
+==========================================================================================================
+
+1. Bind 16 cbdma channels to the igb_uio driver (a bind sketch follows this block), then launch testpmd with 2 vhost ports and 8 queues using the commands below::
+
+    ./dpdk-testpmd -l 1-5 -n 4 --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,client=1,queues=8,dmas=[txq0@00:04.0;txq1@00:04.1;txq2@00:04.2;txq3@00:04.3;txq4@00:04.4;txq5@00:04.5;txq6@00:04.6;txq7@00:04.7],dmathr=512' \
+    --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=8,dmas=[txq0@80:04.0;txq1@80:04.1;txq2@80:04.2;txq3@80:04.3;txq4@80:04.4;txq5@80:04.5;txq6@80:04.6;txq7@80:04.7],dmathr=512'  -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=8 --txq=8
+    testpmd>vhost enable tx all
+    testpmd>start
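+
+   The cbdma channels above are addressed by PCI BDF (00:04.0-00:04.7 and 80:04.0-80:04.7). As a
+   sketch, assuming those BDFs match the CBDMA devices on the DUT and the igb_uio module has been
+   built alongside DPDK (the build path is illustrative), the bind step can be done with::
+
+    modprobe uio
+    insmod ./<build_dir>/kernel/linux/igb_uio/igb_uio.ko
+    ./usertools/dpdk-devbind.py --force --bind=igb_uio 0000:00:04.0 0000:00:04.1 0000:00:04.2 0000:00:04.3 \
+        0000:00:04.4 0000:00:04.5 0000:00:04.6 0000:00:04.7 0000:80:04.0 0000:80:04.1 0000:80:04.2 \
+        0000:80:04.3 0000:80:04.4 0000:80:04.5 0000:80:04.6 0000:80:04.7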
+
+2. Launch VM1 and VM2 using QEMU 5.2.0 (a reachability check sketch follows the two commands)::
+
+    taskset -c 6-16 /home/qemu-install/qemu-5.2/bin/qemu-system-x86_64 -name vm1 -enable-kvm -cpu host -smp 9 -m 4096 \
+    -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \
+    -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu20-04.img  \
+    -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \
+    -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
+    -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
+    -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6002-:22 \
+    -chardev socket,id=char0,path=./vhost-net0,server \
+    -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues=8 \
+    -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=false,mrg_rxbuf=on,mq=on,vectors=40,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on -vnc :10
+
+    taskset -c 17-27 /home/qemu-install/qemu-5.2/bin/qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 9 -m 4096 \
+    -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \
+    -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu20-04-2.img  \
+    -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \
+    -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
+    -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
+    -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6003-:22 \
+    -chardev socket,id=char0,path=./vhost-net1,server \
+    -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues=8 \
+    -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,disable-modern=false,mrg_rxbuf=on,mq=on,vectors=40,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on -vnc :12
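+
+   Before continuing, it can help to confirm both guests are up and reachable through the
+   user-mode hostfwd rules above (a sketch; adjust the login user to whatever the guest image uses)::
+
+    ssh -p 6002 root@127.0.0.1   # VM1
+    ssh -p 6003 root@127.0.0.1   # VM2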
+
+3. On VM1 and VM2, bind the virtio device to the vfio-pci driver (a hugepage setup sketch follows this block)::
+
+     modprobe vfio
+     modprobe vfio-pci
+     echo 1 > /sys/module/vfio/parameters/enable_unsafe_noiommu_mode
+     ./usertools/dpdk-devbind.py --force --bind=vfio-pci 0000:00:05.0
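+
+   Testpmd inside the guest also needs hugepages; a minimal sketch, assuming 2MB hugepages are
+   available in the VM::
+
+     echo 1024 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
+     mkdir -p /mnt/huge
+     mount -t hugetlbfs nodev /mnt/huge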
+
+4. Launch testpmd in VM1::
+
+    ./dpdk-testpmd -c 0x3 -n 4 -- -i --tx-offloads=0x00 --enable-hw-vlan-strip --txq=8 --rxq=8 --txd=1024 --rxd=1024 --max-pkt-len=9600
+    testpmd>set fwd mac
+    testpmd>start
+
+5. Launch testpmd in VM2 and send imix pkts from VM2::
+
+    ./dpdk-testpmd -c 0x3 -n 4 -- -i --tx-offloads=0x00 --enable-hw-vlan-strip --txq=8 --rxq=8 --txd=1024 --rxd=1024 --max-pkt-len=9600
+    testpmd>set fwd mac
+    testpmd>set txpkts 64,256,512,1024,2000,64,256,512,1024,2000
+    testpmd>start tx_first 1
+
+6. Check that imix packets can be looped between the two VMs for 1 minute and that all 8 queues have rx/tx packets (see the per-queue sketch after this block)::
+
+   testpmd>show port stats all
+   testpmd>stop
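+
+   ``show port stats all`` reports per-port totals only; to confirm that each of the 8 queues is
+   carrying traffic, inspect the per-stream counters printed by ``stop``, or query them while
+   forwarding is still running::
+
+   testpmd>show fwd stats all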
+
+7. Relaunch and start the vhost-side testpmd with the command below, changing the cbdma threshold for one vhost port's cbdma channels::
+
+    ./dpdk-testpmd -l 1-5 -n 4 --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,client=1,queues=8,dmas=[txq0@00:04.0;txq1@00:04.1;txq2@00:04.2;txq3@00:04.3;txq4@00:04.4;txq5@00:04.5;txq6@00:04.6;txq7@00:04.7],dmathr=512' \
+    --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=8,dmas=[txq0@80:04.0;txq1@80:04.1;txq2@80:04.2;txq3@80:04.3;txq4@80:04.4;txq5@80:04.5;txq6@80:04.6;txq7@80:04.7],dmathr=64'  -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=8 --txq=8
+    testpmd>start
+
+8. Send pkts from testpmd in VM2, then check that imix packets can be looped between the two VMs for 1 minute and that all 8 queues have rx/tx packets::
+
+   testpmd>stop
+   testpmd>start tx_first 1
+   testpmd>show port stats all
+   testpmd>stop
+
+9. Rerun steps 7 and 8 ten times, as sketched below.
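+
+   A hypothetical harness sketch for the 10 iterations (``relaunch_vhost`` and ``send_and_check``
+   are placeholder helpers standing in for steps 7 and 8, not DTS functions)::
+
+    for i in $(seq 1 10); do
+        relaunch_vhost    # step 7: restart the vhost testpmd with dmathr=64 on one port
+        send_and_check    # step 8: tx_first from VM2, then verify all 8 queues have rx/tx
+    done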
+
+Test Case 10: VM2VM virtio-pmd split ring mergeable path dynamic queue size CBDMA enable with server mode test
+==============================================================================================================
+
+1. Bind 16 cbdma channels to the igb_uio driver, then launch testpmd with 2 vhost ports using the commands below (see the queue-size note after this block)::
+
+    ./dpdk-testpmd -l 1-5 -n 4 --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,client=1,queues=8,dmas=[txq0@00:04.0;txq1@00:04.1;txq2@00:04.2;txq3@00:04.3;txq4@00:04.4;txq5@00:04.5;txq6@00:04.6;txq7@00:04.7],dmathr=512' \
+    --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=8,dmas=[txq0@80:04.0;txq1@80:04.1;txq2@80:04.2;txq3@80:04.3;txq4@80:04.4;txq5@80:04.5;txq6@80:04.6;txq7@80:04.7],dmathr=512'  -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=4 --txq=4
+    testpmd>vhost enable tx all
+    testpmd>start
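+
+   Note that the vhost side starts with only 4 queue pairs (``--rxq=4 --txq=4``) while each guest
+   negotiates 8; step 6 later raises the vhost side to 8 queues, which is what exercises the
+   dynamic queue size path. As an aside, testpmd can also resize queues at runtime instead of
+   relaunching (a sketch, not what this case mandates)::
+
+    testpmd>port stop all
+    testpmd>port config all rxq 8
+    testpmd>port config all txq 8
+    testpmd>port start all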
+
+2. Launch VM1 and VM2 using QEMU 5.2.0::
+
+    taskset -c 6-16 /home/qemu-install/qemu-5.2/bin/qemu-system-x86_64 -name vm1 -enable-kvm -cpu host -smp 9 -m 4096 \
+    -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \
+    -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu20-04.img  \
+    -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \
+    -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
+    -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
+    -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6002-:22 \
+    -chardev socket,id=char0,path=./vhost-net0,server \
+    -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues=8 \
+    -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=false,mrg_rxbuf=on,mq=on,vectors=40,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on -vnc :10
+
+    taskset -c 17-27 /home/qemu-install/qemu-5.2/bin/qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 9 -m 4096 \
+    -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \
+    -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu20-04-2.img  \
+    -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \
+    -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
+    -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
+    -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6003-:22 \
+    -chardev socket,id=char0,path=./vhost-net1,server \
+    -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues=8 \
+    -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,disable-modern=false,mrg_rxbuf=on,mq=on,vectors=40,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on -vnc :12
+
+3. On VM1 and VM2, bind the virtio device to the vfio-pci driver::
+
+     modprobe vfio
+     modprobe vfio-pci
+     echo 1 > /sys/module/vfio/parameters/enable_unsafe_noiommu_mode
+     ./usertools/dpdk-devbind.py --force --bind=vfio-pci 0000:00:05.0
+
+4. Launch testpmd in VM1::
+
+    ./dpdk-testpmd -c 0x3 -n 4 -- -i --tx-offloads=0x00 --enable-hw-vlan-strip --txq=8 --rxq=8 --txd=1024 --rxd=1024 --max-pkt-len=9600
+    testpmd>set fwd mac
+    testpmd>start
+
+5. Launch testpmd in VM2 and send imix pkts; check that imix packets can be looped between the two VMs for 1 minute and that 4 queues (queue0 to queue3) have rx/tx packets::
+
+    ./dpdk-testpmd -c 0x3 -n 4 -- -i --tx-offloads=0x00 --enable-hw-vlan-strip --txq=8 --rxq=8 --txd=1024 --rxd=1024 --max-pkt-len=9600
+    testpmd>set fwd mac
+    testpmd>set txpkts 64,256,512,1024,2000,64,256,512,1024,2000
+    testpmd>start tx_first 32
+    testpmd>show port stats all
+    testpmd>stop
+
+6. Relaunch and start the vhost-side testpmd with eight queues, changing the cbdma threshold for one vhost port's cbdma channels::
+
+    ./dpdk-testpmd -l 1-5 -n 4 --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,client=1,queues=8,dmas=[txq0@00:04.0;txq1@00:04.1;txq2@00:04.2;txq3@00:04.3;txq4@00:04.4;txq5@00:04.5;txq6@00:04.6;txq7@00:04.7],dmathr=512' \
+    --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=8,dmas=[txq0@80:04.0;txq1@80:04.1;txq2@80:04.2;txq3@80:04.3;txq4@80:04.4;txq5@80:04.5;txq6@80:04.6;txq7@80:04.7],dmathr=64'  -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=8 --txq=8
+    testpmd>start
+
+7. Send pkts from testpmd in VM2, then check that imix packets can be looped between the two VMs for 1 minute and that all 8 queues have rx/tx packets::
+
+   testpmd>stop
+   testpmd>start tx_first 32
+   testpmd>show port stats all
+   testpmd>stop
+
+8. Rerun steps 6 and 7 ten times.
+
+Test Case 11: VM2VM virtio-pmd packed ring mergeable path 8 queues CBDMA enable test
+=====================================================================================
+
+1. Bind 16 cbdma channels to the igb_uio driver, then launch testpmd with 2 vhost ports and 8 queues using the commands below::
+
+    rm -rf vhost-net*
+    ./dpdk-testpmd -l 1-5 -n 4 --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,queues=8,dmas=[txq0@00:04.0;txq1@00:04.1;txq2@00:04.2;txq3@00:04.3;txq4@00:04.4;txq5@00:04.5;txq6@00:04.6;txq7@00:04.7],dmathr=512' \
+    --vdev 'net_vhost1,iface=vhost-net1,queues=8,dmas=[txq0@80:04.0;txq1@80:04.1;txq2@80:04.2;txq3@80:04.3;txq4@80:04.4;txq5@80:04.5;txq6@80:04.6;txq7@80:04.7],dmathr=512'  -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=8 --txq=8
+    testpmd>vhost enable tx all
+    testpmd>start
+
+2. Launch VM1 and VM2 with QEMU 5.2.0::
+
+    taskset -c 6-16 qemu-system-x86_64 -name vm1 -enable-kvm -cpu host -smp 9 -m 4096 \
+    -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \
+    -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu20-04.img  \
+    -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \
+    -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
+    -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
+    -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6002-:22 \
+    -chardev socket,id=char0,path=./vhost-net0 \
+    -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues=8 \
+    -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=false,mrg_rxbuf=on,mq=on,vectors=40,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on,packed=on -vnc :10
+
+    taskset -c 17-27 qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 9 -m 4096 \
+    -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \
+    -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu20-04-2.img  \
+    -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \
+    -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
+    -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
+    -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6003-:22 \
+    -chardev socket,id=char0,path=./vhost-net1 \
+    -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues=8 \
+    -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,disable-modern=false,mrg_rxbuf=on,mq=on,vectors=40,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on,packed=on -vnc :12
+
+3. On VM1 and VM2, bind the virtio device to the vfio-pci driver::
+
+     modprobe vfio
+     modprobe vfio-pci
+     echo 1 > /sys/module/vfio/parameters/enable_unsafe_noiommu_mode
+     ./usertools/dpdk-devbind.py --force --bind=vfio-pci 0000:00:05.0
+
+4. Launch testpmd in VM1::
+
+    ./dpdk-testpmd -c 0x3 -n 4 -- -i --tx-offloads=0x00 --enable-hw-vlan-strip --txq=8 --rxq=8 --txd=1024 --rxd=1024 --max-pkt-len=9600
+    testpmd>set fwd mac
+    testpmd>start
+
+5. Launch testpmd in VM2 and send imix pkts; check that imix packets can be looped between the two VMs for 1 minute and that all 8 queues have rx/tx packets (a pdump capture sketch follows this block)::
+
+    ./dpdk-testpmd -c 0x3 -n 4 -- -i --tx-offloads=0x00 --enable-hw-vlan-strip --txq=8 --rxq=8 --txd=1024 --rxd=1024 --max-pkt-len=9600
+    testpmd>set fwd mac
+    testpmd>set txpkts 64,256,512,1024,2000,64,256,512,1024,2000
+    testpmd>start tx_first 32
+    testpmd>show port stats all
+    testpmd>stop
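+
+   The overview of this plan mentions validating packet payloads with pdump. A minimal capture
+   sketch on the vhost side (assuming testpmd was built with pcap/pdump support; the output path
+   is illustrative)::
+
+    ./dpdk-pdump --file-prefix=vhost -- --pdump 'port=0,queue=*,rx-dev=/tmp/vhost-rx.pcap'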
+
+6. Quit VM2 and relaunch VM2 with split ring::
+
+    taskset -c 17-27 qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 9 -m 4096 \
+    -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \
+    -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu20-04-2.img  \
+    -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \
+    -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
+    -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
+    -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6003-:22 \
+    -chardev socket,id=char0,path=./vhost-net1 \
+    -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues=8 \
+    -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,disable-modern=false,mrg_rxbuf=on,mq=on,vectors=40,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on -vnc :12
+
+7. Bind the virtio device to the vfio-pci driver, launch testpmd in VM2 and send imix pkts, then check that imix packets can be looped between the two VMs for 1 minute and that all 8 queues have rx/tx packets::
+
+   modprobe vfio
+   modprobe vfio-pci
+   echo 1 > /sys/module/vfio/parameters/enable_unsafe_noiommu_mode
+   ./usertools/dpdk-devbind.py --force --bind=vfio-pci 0000:00:05.0
+   ./dpdk-testpmd -c 0x3 -n 4 -- -i --tx-offloads=0x00 --enable-hw-vlan-strip --txq=8 --rxq=8 --txd=1024 --rxd=1024 --max-pkt-len=9600
+   testpmd>set fwd mac
+   testpmd>set txpkts 64,256,512,1024,2000,64,256,512,1024,2000
+   testpmd>start tx_first 32
+   testpmd>show port stats all
+   testpmd>stop
-- 
2.25.1

