* [dts] [PATCH v1] test_plans/vm2vm_virtio_pmd_test_plan.rst
@ 2021-07-29 18:06 Yinan Wang
From: Yinan Wang @ 2021-07-29 18:06 UTC (permalink / raw)
To: dts; +Cc: Yinan Wang
1. Correct the test app name.
2. Add the whole BDF of the cbdma devices.
3. Add a tip that the cbdma cases need special dpdk code.
Signed-off-by: Yinan Wang <yinan.wang@intel.com>
---
test_plans/vm2vm_virtio_pmd_test_plan.rst | 111 +++++++++++-----------
1 file changed, 56 insertions(+), 55 deletions(-)
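Reviewer note on the `--rx-offloads=0x00002000` value added to the guest testpmd commands in the hunks below: this is believed to be the Rx scatter offload bit (bit 13, `DEV_RX_OFFLOAD_SCATTER` in rte_ethdev.h), which the mergeable-path cases need so the guest can receive the 8k chained packets. A quick shell check of the assumed bit position:

```shell
# Assumption: DEV_RX_OFFLOAD_SCATTER is bit 13 of the ethdev Rx offload
# flags; if so, the mask passed via --rx-offloads prints as 0x00002000.
printf '0x%08x\n' $((1 << 13))
```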
diff --git a/test_plans/vm2vm_virtio_pmd_test_plan.rst b/test_plans/vm2vm_virtio_pmd_test_plan.rst
index 0b1d4a7f..6b826f81 100644
--- a/test_plans/vm2vm_virtio_pmd_test_plan.rst
+++ b/test_plans/vm2vm_virtio_pmd_test_plan.rst
@@ -37,6 +37,7 @@ vm2vm vhost-user/virtio-pmd test plan
This test plan includes vm2vm mergeable, normal and vector_rx path tests with virtio 0.95 and virtio 1.0,
and also adds mergeable and normal path tests with virtio 1.1. Specially, three mergeable path cases check that the
payload of each packet is valid by using pdump.
+Note: Cases 9-11 below, which use cbdma, require a local dpdk patch to support the async vhost pmd.
Test flow
=========
@@ -48,7 +49,7 @@ Test Case 1: VM2VM vhost-user/virtio-pmd with vector_rx path
1. Bind one physical nic port to igb_uio, then launch the testpmd by below commands::
rm -rf vhost-net*
- ./testpmd -c 0xc0000 -n 4 --no-pci --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,queues=1' --vdev 'net_vhost1,iface=vhost-net1,queues=1' -- -i --nb-cores=1 --txd=1024 --rxd=1024
+ ./dpdk-testpmd -c 0xc0000 -n 4 --no-pci --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,queues=1' --vdev 'net_vhost1,iface=vhost-net1,queues=1' -- -i --nb-cores=1 --txd=1024 --rxd=1024
testpmd>set fwd mac
testpmd>start
@@ -78,13 +79,13 @@ Test Case 1: VM2VM vhost-user/virtio-pmd with vector_rx path
3. On VM1, bind vdev with igb_uio driver,then run testpmd, set rxonly for virtio1, [0000:xx.00] is [Bus,Device,Function] of virtio-net::
- ./testpmd -c 0x3 -n 4 -w 0000:xx.00,vectorized=1 -- -i --txd=1024 --rxd=1024
+ ./dpdk-testpmd -c 0x3 -n 4 -w 0000:xx.00,vectorized=1 -- -i --txd=1024 --rxd=1024
testpmd>set fwd rxonly
testpmd>start
4. On VM2, bind vdev with igb_uio driver,then run testpmd, set txonly for virtio2 and send 64B packets, [0000:xx.00] is [Bus,Device,Function] of virtio-net::
- ./testpmd -c 0x3 -n 4 -w 0000:xx.00,vectorized=1 -- -i --txd=1024 --rxd=1024
+ ./dpdk-testpmd -c 0x3 -n 4 -w 0000:xx.00,vectorized=1 -- -i --txd=1024 --rxd=1024
testpmd>set fwd txonly
testpmd>set txpkts 64
testpmd>start tx_first 32
@@ -103,7 +104,7 @@ Test Case 2: VM2VM vhost-user/virtio-pmd with normal path
1. Bind one physical nic port to igb_uio, then launch the testpmd by below commands::
rm -rf vhost-net*
- ./testpmd -c 0xc0000 -n 4 --no-pci --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,queues=1' --vdev 'net_vhost1,iface=vhost-net1,queues=1' -- -i --nb-cores=1 --txd=1024 --rxd=1024
+ ./dpdk-testpmd -c 0xc0000 -n 4 --no-pci --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,queues=1' --vdev 'net_vhost1,iface=vhost-net1,queues=1' -- -i --nb-cores=1 --txd=1024 --rxd=1024
testpmd>set fwd mac
testpmd>start
@@ -133,13 +134,13 @@ Test Case 2: VM2VM vhost-user/virtio-pmd with normal path
3. On VM1, bind vdev with igb_uio driver,then run testpmd, set rxonly for virtio1 ::
- ./testpmd -c 0x3 -n 4 -- -i --tx-offloads=0x00 --enable-hw-vlan-strip --txd=1024 --rxd=1024
+ ./dpdk-testpmd -c 0x3 -n 4 -- -i --tx-offloads=0x00 --enable-hw-vlan-strip --txd=1024 --rxd=1024
testpmd>set fwd rxonly
testpmd>start
4. On VM2, bind vdev with igb_uio driver, then run testpmd, set txonly for virtio2 and send 64B packets ::
- ./testpmd -c 0x3 -n 4 -- -i --tx-offloads=0x00 --enable-hw-vlan-strip --txd=1024 --rxd=1024
+ ./dpdk-testpmd -c 0x3 -n 4 -- -i --tx-offloads=0x00 --enable-hw-vlan-strip --txd=1024 --rxd=1024
testpmd>set fwd txonly
testpmd>set txpkts 64
testpmd>start tx_first 32
@@ -158,7 +159,7 @@ Test Case 3: VM2VM vhost-user/virtio1.0-pmd with vector_rx path
1. Bind one physical nic port to igb_uio, then launch the testpmd by below commands::
rm -rf vhost-net*
- ./testpmd -c 0xc0000 -n 4 --no-pci --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,queues=1' --vdev 'net_vhost1,iface=vhost-net1,queues=1' -- -i --nb-cores=1 --txd=1024 --rxd=1024
+ ./dpdk-testpmd -c 0xc0000 -n 4 --no-pci --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,queues=1' --vdev 'net_vhost1,iface=vhost-net1,queues=1' -- -i --nb-cores=1 --txd=1024 --rxd=1024
testpmd>set fwd mac
testpmd>start
@@ -188,13 +189,13 @@ Test Case 3: VM2VM vhost-user/virtio1.0-pmd with vector_rx path
3. On VM1, bind vdev with igb_uio driver,then run testpmd, set rxonly for virtio1, [0000:xx.00] is [Bus,Device,Function] of virtio-net::
- ./testpmd -c 0x3 -n 4 -w 0000:xx.00,vectorized=1 -- -i --txd=1024 --rxd=1024
+ ./dpdk-testpmd -c 0x3 -n 4 -w 0000:xx.00,vectorized=1 -- -i --txd=1024 --rxd=1024
testpmd>set fwd rxonly
testpmd>start
4. On VM2, bind vdev with igb_uio driver,then run testpmd, set txonly for virtio2, [0000:xx.00] is [Bus,Device,Function] of virtio-net::
- ./testpmd -c 0x3 -n 4 -w 0000:xx.00,vectorized=1 -- -i --txd=1024 --rxd=1024
+ ./dpdk-testpmd -c 0x3 -n 4 -w 0000:xx.00,vectorized=1 -- -i --txd=1024 --rxd=1024
testpmd>set fwd txonly
testpmd>set txpkts 64
testpmd>start tx_first 32
@@ -213,7 +214,7 @@ Test Case 4: VM2VM vhost-user/virtio1.0-pmd with normal path
1. Bind one physical nic port to igb_uio, then launch the testpmd by below commands::
rm -rf vhost-net*
- ./testpmd -c 0xc0000 -n 4 --no-pci --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,queues=1' --vdev 'net_vhost1,iface=vhost-net1,queues=1' -- -i --nb-cores=1 --txd=1024 --rxd=1024
+ ./dpdk-testpmd -c 0xc0000 -n 4 --no-pci --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,queues=1' --vdev 'net_vhost1,iface=vhost-net1,queues=1' -- -i --nb-cores=1 --txd=1024 --rxd=1024
testpmd>set fwd mac
testpmd>start
@@ -243,13 +244,13 @@ Test Case 4: VM2VM vhost-user/virtio1.0-pmd with normal path
3. On VM1, bind vdev with igb_uio driver,then run testpmd, set rxonly for virtio1 ::
- ./testpmd -c 0x3 -n 4 -- -i --tx-offloads=0x00 --enable-hw-vlan-strip --txd=1024 --rxd=1024
+ ./dpdk-testpmd -c 0x3 -n 4 -- -i --tx-offloads=0x00 --enable-hw-vlan-strip --txd=1024 --rxd=1024
testpmd>set fwd rxonly
testpmd>start
4. On VM2, bind vdev with igb_uio driver,then run testpmd, set txonly for virtio2 ::
- ./testpmd -c 0x3 -n 4 -- -i --tx-offloads=0x00 --enable-hw-vlan-strip --txd=1024 --rxd=1024
+ ./dpdk-testpmd -c 0x3 -n 4 -- -i --tx-offloads=0x00 --enable-hw-vlan-strip --txd=1024 --rxd=1024
testpmd>set fwd txonly
testpmd>set txpkts 64
testpmd>start tx_first 32
@@ -267,7 +268,7 @@ Test Case 5: VM2VM vhost-user/virtio-pmd mergeable path with payload valid check
1. Bind virtio with igb_uio driver, launch the testpmd by below commands::
- ./testpmd -c 0xc0000 -n 4 --no-pci --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,queues=1' --vdev 'net_vhost1,iface=vhost-net1,queues=1' -- -i --nb-cores=1 --txd=1024 --rxd=1024
+ ./dpdk-testpmd -c 0xc0000 -n 4 --no-pci --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,queues=1' --vdev 'net_vhost1,iface=vhost-net1,queues=1' -- -i --nb-cores=1 --txd=1024 --rxd=1024
testpmd>set fwd mac
testpmd>start
@@ -310,17 +311,18 @@ Test Case 5: VM2VM vhost-user/virtio-pmd mergeable path with payload valid check
4. Bind virtio with igb_uio driver,then run testpmd, set rxonly mode for virtio-pmd on VM1::
- ./testpmd -c 0x3 -n 4 --file-prefix=test -- -i --txd=1024 --rxd=1024 --max-pkt-len=9600
+ ./dpdk-testpmd -c 0x3 -n 4 --file-prefix=test -- -i --txd=1024 --rxd=1024 --max-pkt-len=9600 --rx-offloads=0x00002000
testpmd>set fwd rxonly
testpmd>start
5. Bootup pdump in VM1::
- ./x86_64-native-linuxapp-gcc/app/dpdk-pdump -v --file-prefix=test -- --pdump 'port=0,queue=*,rx-dev=/root/pdump-rx.pcap,mbuf-size=8000'
+ ./dpdk-pdump -v --file-prefix=test -- --pdump 'port=0,queue=*,rx-dev=/root/pdump-rx.pcap,mbuf-size=8000'
6. On VM2, bind virtio with igb_uio driver,then run testpmd, config tx_packets to 8k length with chain mode::
- ./testpmd -c 0x3 -n 4 -- -i --txd=1024 --rxd=1024 --max-pkt-len=9600
+ ./dpdk-testpmd -c 0x3 -n 4 -- -i --txd=1024 --rxd=1024 --max-pkt-len=9600 --rx-offloads=0x00002000
+
testpmd>set fwd mac
testpmd>set txpkts 2000,2000,2000,2000
@@ -333,17 +335,17 @@ Test Case 5: VM2VM vhost-user/virtio-pmd mergeable path with payload valid check
9. Relaunch testpmd in VM1::
- ./testpmd -c 0x3 -n 4 --file-prefix=test -- -i --txd=1024 --rxd=1024
+ ./dpdk-testpmd -c 0x3 -n 4 --file-prefix=test -- -i --txd=1024 --rxd=1024
testpmd>set fwd rxonly
testpmd>start
10. Bootup pdump in VM1::
- ./x86_64-native-linuxapp-gcc/app/dpdk-pdump -v --file-prefix=test -- --pdump 'port=0,queue=*,rx-dev=/root/pdump-rx-small.pcap,mbuf-size=8000'
+ ./dpdk-pdump -v --file-prefix=test -- --pdump 'port=0,queue=*,rx-dev=/root/pdump-rx-small.pcap,mbuf-size=8000'
11. Relaunch testpmd on VM2, send ten 64B packets from virtio-pmd on VM2::
- ./testpmd -c 0x3 -n 4 -- -i --txd=1024 --rxd=1024
+ ./dpdk-testpmd -c 0x3 -n 4 -- -i --txd=1024 --rxd=1024
testpmd>set fwd mac
testpmd>set burst 1
testpmd>start tx_first 10
@@ -355,7 +357,7 @@ Test Case 6: VM2VM vhost-user/virtio1.0-pmd mergeable path with payload valid ch
1. Bind virtio with igb_uio driver, launch the testpmd by below commands::
- ./testpmd -c 0xc0000 -n 4 --no-pci --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,queues=1' --vdev 'net_vhost1,iface=vhost-net1,queues=1' -- -i --nb-cores=1 --txd=1024 --rxd=1024
+ ./dpdk-testpmd -c 0xc0000 -n 4 --no-pci --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,queues=1' --vdev 'net_vhost1,iface=vhost-net1,queues=1' -- -i --nb-cores=1 --txd=1024 --rxd=1024
testpmd>set fwd mac
testpmd>start
@@ -398,17 +400,17 @@ Test Case 6: VM2VM vhost-user/virtio1.0-pmd mergeable path with payload valid ch
4. Bind virtio with igb_uio driver,then run testpmd, set rxonly mode for virtio-pmd on VM1::
- ./testpmd -c 0x3 -n 4 --file-prefix=test -- -i --txd=1024 --rxd=1024 --max-pkt-len=9600
- testpmd>set fwd rxonly
+ ./dpdk-testpmd -c 0x3 -n 4 --file-prefix=test -- -i --txd=1024 --rxd=1024 --max-pkt-len=9600 --rx-offloads=0x00002000
+ testpmd>set fwd rxonly
testpmd>start
5. Bootup pdump in VM1::
- ./x86_64-native-linuxapp-gcc/app/dpdk-pdump -v --file-prefix=test -- --pdump 'port=0,queue=*,rx-dev=/root/pdump-rx.pcap,mbuf-size=8000'
+ ./dpdk-pdump -v --file-prefix=test -- --pdump 'port=0,queue=*,rx-dev=/root/pdump-rx.pcap,mbuf-size=8000'
6. On VM2, bind virtio with igb_uio driver,then run testpmd, config tx_packets to 8k length with chain mode::
- ./testpmd -c 0x3 -n 4 -- -i --txd=1024 --rxd=1024 --max-pkt-len=9600
+ ./dpdk-testpmd -c 0x3 -n 4 -- -i --txd=1024 --rxd=1024 --max-pkt-len=9600 --rx-offloads=0x00002000
testpmd>set fwd mac
testpmd>set txpkts 2000,2000,2000,2000
@@ -421,17 +422,17 @@ Test Case 6: VM2VM vhost-user/virtio1.0-pmd mergeable path with payload valid ch
9. Relaunch testpmd in VM1::
- ./testpmd -c 0x3 -n 4 --file-prefix=test -- -i --txd=1024 --rxd=1024
+ ./dpdk-testpmd -c 0x3 -n 4 --file-prefix=test -- -i --txd=1024 --rxd=1024
testpmd>set fwd rxonly
testpmd>start
10. Bootup pdump in VM1::
- ./x86_64-native-linuxapp-gcc/app/dpdk-pdump -v --file-prefix=test -- --pdump 'port=0,queue=*,rx-dev=/root/pdump-rx-small.pcap'
+ ./dpdk-pdump -v --file-prefix=test -- --pdump 'port=0,queue=*,rx-dev=/root/pdump-rx-small.pcap'
11. Relaunch testpmd On VM2, send ten 64B packets from virtio-pmd on VM2::
- ./testpmd -c 0x3 -n 4 -- -i --txd=1024 --rxd=1024 --max-pkt-len=9600
+ ./dpdk-testpmd -c 0x3 -n 4 -- -i --txd=1024 --rxd=1024 --max-pkt-len=9600 --rx-offloads=0x00002000
testpmd>set fwd mac
testpmd>set burst 1
testpmd>start tx_first 10
@@ -443,7 +444,7 @@ Test Case 7: VM2VM vhost-user/virtio1.1-pmd mergeable path with payload valid ch
1. Bind virtio with igb_uio driver, launch the testpmd by below commands::
- ./testpmd -c 0xc0000 -n 4 --no-pci --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,queues=1' --vdev 'net_vhost1,iface=vhost-net1,queues=1' -- -i --nb-cores=1 --txd=1024 --rxd=1024
+ ./dpdk-testpmd -c 0xc0000 -n 4 --no-pci --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,queues=1' --vdev 'net_vhost1,iface=vhost-net1,queues=1' -- -i --nb-cores=1 --txd=1024 --rxd=1024
testpmd>set fwd mac
testpmd>start
@@ -486,17 +487,17 @@ Test Case 7: VM2VM vhost-user/virtio1.1-pmd mergeable path with payload valid ch
4. Bind virtio with igb_uio driver,then run testpmd, set rxonly mode for virtio-pmd on VM1::
- ./testpmd -c 0x3 -n 4 --file-prefix=test -- -i --txd=1024 --rxd=1024 --max-pkt-len=9600
+ ./dpdk-testpmd -c 0x3 -n 4 --file-prefix=test -- -i --txd=1024 --rxd=1024 --max-pkt-len=9600 --rx-offloads=0x00002000
testpmd>set fwd rxonly
testpmd>start
5. Bootup pdump in VM1::
- ./x86_64-native-linuxapp-gcc/app/dpdk-pdump -v --file-prefix=test -- --pdump 'port=0,queue=*,rx-dev=/root/pdump-rx.pcap,mbuf-size=8000'
+ ./dpdk-pdump -v --file-prefix=test -- --pdump 'port=0,queue=*,rx-dev=/root/pdump-rx.pcap,mbuf-size=8000'
6. On VM2, bind virtio with igb_uio driver,then run testpmd, config tx_packets to 8k length with chain mode::
- ./testpmd -c 0x3 -n 4 -- -i --txd=1024 --rxd=1024 --max-pkt-len=9600
+ ./dpdk-testpmd -c 0x3 -n 4 -- -i --txd=1024 --rxd=1024 --max-pkt-len=9600 --rx-offloads=0x00002000
testpmd>set fwd mac
testpmd>set txpkts 2000,2000,2000,2000
@@ -509,17 +510,17 @@ Test Case 7: VM2VM vhost-user/virtio1.1-pmd mergeable path with payload valid ch
9. Relaunch testpmd in VM1::
- ./testpmd -c 0x3 -n 4 --file-prefix=test -- -i --txd=1024 --rxd=1024
+ ./dpdk-testpmd -c 0x3 -n 4 --file-prefix=test -- -i --txd=1024 --rxd=1024
testpmd>set fwd rxonly
testpmd>start
10. Bootup pdump in VM1::
- ./x86_64-native-linuxapp-gcc/app/dpdk-pdump -v --file-prefix=test -- --pdump 'port=0,queue=*,rx-dev=/root/pdump-rx-small.pcap'
+ ./dpdk-pdump -v --file-prefix=test -- --pdump 'port=0,queue=*,rx-dev=/root/pdump-rx-small.pcap'
11. Relaunch testpmd On VM2, send ten 64B packets from virtio-pmd on VM2::
- ./testpmd -c 0x3 -n 4 -- -i --txd=1024 --rxd=1024 --max-pkt-len=9600
+ ./dpdk-testpmd -c 0x3 -n 4 -- -i --txd=1024 --rxd=1024 --max-pkt-len=9600 --rx-offloads=0x00002000
testpmd>set fwd mac
testpmd>set burst 1
testpmd>start tx_first 10
@@ -532,7 +533,7 @@ Test Case 8: VM2VM vhost-user/virtio1.1-pmd with normal path
1. Bind one physical nic port to igb_uio, then launch the testpmd by below commands::
rm -rf vhost-net*
- ./testpmd -c 0xc0000 -n 4 --no-pci --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,queues=1' --vdev 'net_vhost1,iface=vhost-net1,queues=1' -- -i --nb-cores=1 --txd=1024 --rxd=1024
+ ./dpdk-testpmd -c 0xc0000 -n 4 --no-pci --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,queues=1' --vdev 'net_vhost1,iface=vhost-net1,queues=1' -- -i --nb-cores=1 --txd=1024 --rxd=1024
testpmd>set fwd mac
testpmd>start
@@ -562,13 +563,13 @@ Test Case 8: VM2VM vhost-user/virtio1.1-pmd with normal path
3. On VM1, bind vdev with igb_uio driver,then run testpmd, set rxonly for virtio1 ::
- ./testpmd -c 0x3 -n 4 -- -i --tx-offloads=0x00 --enable-hw-vlan-strip --txd=1024 --rxd=1024
+ ./dpdk-testpmd -c 0x3 -n 4 -- -i --tx-offloads=0x00 --enable-hw-vlan-strip --txd=1024 --rxd=1024
testpmd>set fwd rxonly
testpmd>start
4. On VM2, bind vdev with igb_uio driver,then run testpmd, set txonly for virtio2 ::
- ./testpmd -c 0x3 -n 4 -- -i --tx-offloads=0x00 --enable-hw-vlan-strip --txd=1024 --rxd=1024
+ ./dpdk-testpmd -c 0x3 -n 4 -- -i --tx-offloads=0x00 --enable-hw-vlan-strip --txd=1024 --rxd=1024
testpmd>set fwd txonly
testpmd>set txpkts 64
testpmd>start tx_first 32
@@ -586,8 +587,8 @@ Test Case 9: VM2VM virtio-pmd split ring mergeable path 8 queues CBDMA enable wi
1. Bind 16 cbdma channels to igb_uio driver, then launch the testpmd with 2 vhost port and 8 queues by below commands::
- ./dpdk-testpmd -l 1-5 -n 4 --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,client=1,queues=8,dmas=[txq0@00:04.0;txq1@00:04.1;txq2@00:04.2;txq3@00:04.3;txq4@00:04.4;txq5@00:04.5;txq6@00:04.6;txq7@00:04.7],dmathr=512' \
- --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=8,dmas=[txq0@80:04.0;txq1@80:04.1;txq2@80:04.2;txq3@80:04.3;txq4@80:04.4;txq5@80:04.5;txq6@80:04.6;txq7@80:04.7],dmathr=512' -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=8 --txq=8
+ ./dpdk-testpmd -l 1-5 -n 4 --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,client=1,queues=8,dmas=[txq0@0000:00:04.0;txq1@0000:00:04.1;txq2@0000:00:04.2;txq3@0000:00:04.3;txq4@0000:00:04.4;txq5@0000:00:04.5;txq6@0000:00:04.6;txq7@0000:00:04.7],dmathr=512' \
+ --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=8,dmas=[txq0@0000:80:04.0;txq1@0000:80:04.1;txq2@0000:80:04.2;txq3@0000:80:04.3;txq4@0000:80:04.4;txq5@0000:80:04.5;txq6@0000:80:04.6;txq7@0000:80:04.7],dmathr=512' -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=8 --txq=8
testpmd>vhost enable tx all
testpmd>start
@@ -624,13 +625,13 @@ Test Case 9: VM2VM virtio-pmd split ring mergeable path 8 queues CBDMA enable wi
4. Launch testpmd in VM1::
- ./dpdk-testpmd -c 0x3 -n 4 -- -i --tx-offloads=0x00 --enable-hw-vlan-strip --txq=8 --rxq=8 --txd=1024 --rxd=1024 --max-pkt-len=9600
+ ./dpdk-testpmd -c 0x3 -n 4 -- -i --enable-hw-vlan-strip --txq=8 --rxq=8 --txd=1024 --rxd=1024 --max-pkt-len=9600 --rx-offloads=0x00002000
testpmd>set mac fwd
testpmd>start
5. Launch testpmd in VM2, send imix pkts from VM2::
- ./testpmd -c 0x3 -n 4 -- -i --tx-offloads=0x00 --enable-hw-vlan-strip --txq=8 --rxq=8 --txd=1024 --rxd=1024 --max-pkt-len=9600
+ ./dpdk-testpmd -c 0x3 -n 4 -- -i --enable-hw-vlan-strip --txq=8 --rxq=8 --txd=1024 --rxd=1024 --max-pkt-len=9600 --rx-offloads=0x00002000
testpmd>set mac fwd
testpmd>set txpkts 64,256,512,1024,2000,64,256,512,1024,2000
testpmd>start tx_first 1
@@ -642,8 +643,8 @@ Test Case 9: VM2VM virtio-pmd split ring mergeable path 8 queues CBDMA enable wi
7. Relaunch and start vhost side testpmd with below cmd, change cbdma threshold for one vhost port's cbdma channels::
- ./dpdk-testpmd -l 1-5 -n 4 --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,client=1,queues=8,dmas=[txq0@00:04.0;txq1@00:04.1;txq2@00:04.2;txq3@00:04.3;txq4@00:04.4;txq5@00:04.5;txq6@00:04.6;txq7@00:04.7],dmathr=512' \
- --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=8,dmas=[txq0@80:04.0;txq1@80:04.1;txq2@80:04.2;txq3@80:04.3;txq4@80:04.4;txq5@80:04.5;txq6@80:04.6;txq7@80:04.7],dmathr=64' -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=8 --txq=8
+ ./dpdk-testpmd -l 1-5 -n 4 --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,client=1,queues=8,dmas=[txq0@0000:00:04.0;txq1@0000:00:04.1;txq2@0000:00:04.2;txq3@0000:00:04.3;txq4@0000:00:04.4;txq5@0000:00:04.5;txq6@0000:00:04.6;txq7@0000:00:04.7],dmathr=512' \
+ --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=8,dmas=[txq0@0000:80:04.0;txq1@0000:80:04.1;txq2@0000:80:04.2;txq3@0000:80:04.3;txq4@0000:80:04.4;txq5@0000:80:04.5;txq6@0000:80:04.6;txq7@0000:80:04.7],dmathr=64' -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=8 --txq=8
testpmd>start
8. Send pkts by testpmd in VM2, check imix packets can looped between two VMs for 1 mins and 8 queues all have packets rx/tx::
@@ -660,8 +661,8 @@ Test Case 10: VM2VM virtio-pmd split ring mergeable path dynamic queue size CBDM
1. Bind 16 cbdma channels to igb_uio driver, then launch the testpmd with 2 vhost ports below commands::
- ./dpdk-testpmd -l 1-5 -n 4 --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,client=1,queues=8,dmas=[txq0@00:04.0;txq1@00:04.1;txq2@00:04.2;txq3@00:04.3;txq4@00:04.4;txq5@00:04.5;txq6@00:04.6;txq7@00:04.7],dmathr=512' \
- --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=8,dmas=[txq0@80:04.0;txq1@80:04.1;txq2@80:04.2;txq3@80:04.3;txq4@80:04.4;txq5@80:04.5;txq6@80:04.6;txq7@80:04.7],dmathr=512' -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=4 --txq=4
+ ./dpdk-testpmd -l 1-5 -n 4 --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,client=1,queues=8,dmas=[txq0@0000:00:04.0;txq1@0000:00:04.1;txq2@0000:00:04.2;txq3@0000:00:04.3;txq4@0000:00:04.4;txq5@0000:00:04.5;txq6@0000:00:04.6;txq7@0000:00:04.7],dmathr=512' \
+ --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=8,dmas=[txq0@0000:80:04.0;txq1@0000:80:04.1;txq2@0000:80:04.2;txq3@0000:80:04.3;txq4@0000:80:04.4;txq5@0000:80:04.5;txq6@0000:80:04.6;txq7@0000:80:04.7],dmathr=512' -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=4 --txq=4
testpmd>vhost enable tx all
testpmd>start
@@ -698,14 +699,14 @@ Test Case 10: VM2VM virtio-pmd split ring mergeable path dynamic queue size CBDM
4. Launch testpmd in VM1::
- ./testpmd -c 0x3 -n 4 -- -i --tx-offloads=0x00 --enable-hw-vlan-strip --txq=8 --rxq=8 --txd=1024 --rxd=1024 --max-pkt-len=9600
+ ./dpdk-testpmd -c 0x3 -n 4 -- -i --tx-offloads=0x00 --enable-hw-vlan-strip --txq=8 --rxq=8 --txd=1024 --rxd=1024 --max-pkt-len=9600 --rx-offloads=0x00002000
testpmd>set mac fwd
testpmd>start
5. Launch testpmd in VM2 and send imix pkts, check imix packets can looped between two VMs for 1 mins and 4 queues (queue0 to queue3) have packets rx/tx::
- ./testpmd -c 0x3 -n 4 -- -i --tx-offloads=0x00 --enable-hw-vlan-strip --txq=8 --rxq=8 --txd=1024 --rxd=1024 --max-pkt-len=9600
- testpmd>set mac fwd
+ ./dpdk-testpmd -c 0x3 -n 4 -- -i --tx-offloads=0x00 --enable-hw-vlan-strip --txq=8 --rxq=8 --txd=1024 --rxd=1024 --max-pkt-len=9600 --rx-offloads=0x00002000
+ testpmd>set mac fwd
testpmd>set txpkts 64,256,512,1024,2000,64,256,512,1024,2000
testpmd>start tx_first 32
testpmd>show port stats all
@@ -713,8 +714,8 @@ Test Case 10: VM2VM virtio-pmd split ring mergeable path dynamic queue size CBDM
6. Relaunch and start vhost side testpmd with eight queues, change cbdma threshold for one vhost port's cbdma channels::
- ./dpdk-testpmd -l 1-5 -n 4 --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,client=1,queues=8,dmas=[txq0@00:04.0;txq1@00:04.1;txq2@00:04.2;txq3@00:04.3;txq4@00:04.4;txq5@00:04.5;txq6@00:04.6;txq7@00:04.7],dmathr=512' \
- --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=8,dmas=[txq0@80:04.0;txq1@80:04.1;txq2@80:04.2;txq3@80:04.3;txq4@80:04.4;txq5@80:04.5;txq6@80:04.6;txq7@80:04.7],dmathr=64' -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=8 --txq=8
+ ./dpdk-testpmd -l 1-5 -n 4 --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,client=1,queues=8,dmas=[txq0@0000:00:04.0;txq1@0000:00:04.1;txq2@0000:00:04.2;txq3@0000:00:04.3;txq4@0000:00:04.4;txq5@0000:00:04.5;txq6@0000:00:04.6;txq7@0000:00:04.7],dmathr=512' \
+ --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=8,dmas=[txq0@0000:80:04.0;txq1@0000:80:04.1;txq2@0000:80:04.2;txq3@0000:80:04.3;txq4@0000:80:04.4;txq5@0000:80:04.5;txq6@0000:80:04.6;txq7@0000:80:04.7],dmathr=64' -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=8 --txq=8
testpmd>start
7. Send pkts by testpmd in VM2, check imix packets can looped between two VMs for 1 mins and 8 queues all have packets rx/tx::
@@ -732,8 +733,8 @@ Test Case 11: VM2VM virtio-pmd packed ring mergeable path 8 queues CBDMA enable
1. Bind 16 cbdma channels to igb_uio driver, then launch the testpmd with 2 vhost port and 8 queues by below commands::
rm -rf vhost-net*
- ./dpdk-testpmd -l 1-5 -n 4 --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,queues=8,dmas=[txq0@00:04.0;txq1@00:04.1;txq2@00:04.2;txq3@00:04.3;txq4@00:04.4;txq5@00:04.5;txq6@00:04.6;txq7@00:04.7],dmathr=512' \
- --vdev 'net_vhost1,iface=vhost-net1,queues=8,dmas=[txq0@80:04.0;txq1@80:04.1;txq2@80:04.2;txq3@80:04.3;txq4@80:04.4;txq5@80:04.5;txq6@80:04.6;txq7@80:04.7],dmathr=512' -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=8 --txq=8
+ ./dpdk-testpmd -l 1-5 -n 4 --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,queues=8,dmas=[txq0@0000:00:04.0;txq1@0000:00:04.1;txq2@0000:00:04.2;txq3@0000:00:04.3;txq4@0000:00:04.4;txq5@0000:00:04.5;txq6@0000:00:04.6;txq7@0000:00:04.7],dmathr=512' \
+ --vdev 'net_vhost1,iface=vhost-net1,queues=8,dmas=[txq0@0000:80:04.0;txq1@0000:80:04.1;txq2@0000:80:04.2;txq3@0000:80:04.3;txq4@0000:80:04.4;txq5@0000:80:04.5;txq6@0000:80:04.6;txq7@0000:80:04.7],dmathr=512' -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=8 --txq=8
testpmd>vhost enable tx all
testpmd>start
@@ -770,13 +771,13 @@ Test Case 11: VM2VM virtio-pmd packed ring mergeable path 8 queues CBDMA enable
4. Launch testpmd in VM1::
- ./testpmd -c 0x3 -n 4 -- -i --tx-offloads=0x00 --enable-hw-vlan-strip --txq=8 --rxq=8 --txd=1024 --rxd=1024 --max-pkt-len=9600
+ ./dpdk-testpmd -c 0x3 -n 4 -- -i --tx-offloads=0x00 --enable-hw-vlan-strip --txq=8 --rxq=8 --txd=1024 --rxd=1024 --max-pkt-len=9600 --rx-offloads=0x00002000
testpmd>set mac fwd
testpmd>start
5. Launch testpmd in VM2 and send imix pkts, check imix packets can looped between two VMs for 1 mins and 8 queues all have packets rx/tx::
- ./testpmd -c 0x3 -n 4 -- -i --tx-offloads=0x00 --enable-hw-vlan-strip --txq=8 --rxq=8 --txd=1024 --rxd=1024 --max-pkt-len=9600
+ ./dpdk-testpmd -c 0x3 -n 4 -- -i --tx-offloads=0x00 --enable-hw-vlan-strip --txq=8 --rxq=8 --txd=1024 --rxd=1024 --max-pkt-len=9600 --rx-offloads=0x00002000
testpmd>set mac fwd
testpmd>set txpkts 64,256,512,1024,20000,64,256,512,1024,20000
testpmd>start tx_first 32
@@ -802,7 +803,7 @@ Test Case 11: VM2VM virtio-pmd packed ring mergeable path 8 queues CBDMA enable
modprobe vfio-pci
echo 1 > /sys/module/vfio/parameters/enable_unsafe_noiommu_mode
./usertools/dpdk-devbind.py --force --bind=vfio-pci 0000:00:05.0
- ./testpmd -c 0x3 -n 4 -- -i --tx-offloads=0x00 --enable-hw-vlan-strip --txq=8 --rxq=8 --txd=1024 --rxd=1024 --max-pkt-len=9600
+ ./dpdk-testpmd -c 0x3 -n 4 -- -i --tx-offloads=0x00 --enable-hw-vlan-strip --txq=8 --rxq=8 --txd=1024 --rxd=1024 --max-pkt-len=9600 --rx-offloads=0x00002000
testpmd>set mac fwd
testpmd>set txpkts 64,256,512,1024,20000,64,256,512,1024,20000
testpmd>start tx_first 32
--
2.25.1
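For reference, the long `dmas=[...]` vdev arguments in cases 9-11 follow a simple pattern: txqN maps to CBDMA channel N on the same bus. A small shell sketch (illustrative only; the 0000:00:04.x BDFs are the ones used in this test plan and will differ per host) that reproduces the vhost-net0 list:

```shell
# Build the dmas=[...] argument for net_vhost0 from 8 CBDMA channel BDFs.
# BDFs assumed per this test plan (0000:00:04.0-7); adjust for your host.
dmas=""
for q in 0 1 2 3 4 5 6 7; do
    dmas="${dmas}txq${q}@0000:00:04.${q};"
done
dmas="[${dmas%;}]"   # strip the trailing ';' and wrap in brackets
echo "$dmas"         # the bracketed list used after 'dmas=' above
```

The same loop with `0000:80:04.${q}` yields the vhost-net1 list.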