test suite reviews and discussions
* Re: [dts] [PATCH V1 2/2] tests/loopback_virtio_user_server_mode: add two new cases
  2021-11-05 15:07 ` [dts] [PATCH V1 2/2] tests/loopback_virtio_user_server_mode: " Lingli Chen
@ 2021-11-05  7:17   ` Chen, LingliX
  2021-11-05  9:10     ` Wang, Yinan
  2021-11-22  8:32   ` [dts][PATCH " Wang, Yinan
  1 sibling, 1 reply; 10+ messages in thread
From: Chen, LingliX @ 2021-11-05  7:17 UTC (permalink / raw)
  To: dts; +Cc: Wang, Yinan

[-- Attachment #1: Type: text/plain, Size: 619 bytes --]


> -----Original Message-----
> From: Chen, LingliX <linglix.chen@intel.com>
> Sent: Friday, November 5, 2021 11:07 PM
> To: dts@dpdk.org
> Cc: Wang, Yinan <yinan.wang@intel.com>; Chen, LingliX
> <linglix.chen@intel.com>
> Subject: [dts][PATCH V1 2/2] tests/loopback_virtio_user_server_mode: add
> two new cases
> 
> 1. Add 2 new cases: cases 13 and 14.
> 2. Modify cases 3, 4, 8 and 10 to sync with the test plan.
> 
> Signed-off-by: Lingli Chen <linglix.chen@intel.com>

The new cases 13 and 14 fail because of a DPDK bug: the payloads differ in the received packets.

Tested-by: Lingli Chen <linglix.chen@intel.com>

[-- Attachment #2: TestLoopbackVirtioUserServerMode.log --]
[-- Type: application/octet-stream, Size: 46914 bytes --]

^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: [dts] [PATCH V1 2/2] tests/loopback_virtio_user_server_mode: add two new cases
  2021-11-05  7:17   ` Chen, LingliX
@ 2021-11-05  9:10     ` Wang, Yinan
  0 siblings, 0 replies; 10+ messages in thread
From: Wang, Yinan @ 2021-11-05  9:10 UTC (permalink / raw)
  To: Chen, LingliX, dts

Acked-by: Yinan Wang <yinan.wang@intel.com>

> -----Original Message-----
> From: Chen, LingliX <linglix.chen@intel.com>
> Sent: November 5, 2021 15:18
> To: dts@dpdk.org
> Cc: Wang, Yinan <yinan.wang@intel.com>
> Subject: RE: [dts][PATCH V1 2/2] tests/loopback_virtio_user_server_mode:
> add two new cases
> 
> 
> > -----Original Message-----
> > From: Chen, LingliX <linglix.chen@intel.com>
> > Sent: Friday, November 5, 2021 11:07 PM
> > To: dts@dpdk.org
> > Cc: Wang, Yinan <yinan.wang@intel.com>; Chen, LingliX
> > <linglix.chen@intel.com>
> > Subject: [dts][PATCH V1 2/2] tests/loopback_virtio_user_server_mode: add
> > two new cases
> >
> > 1. Add 2 new cases: cases 13 and 14.
> > 2. Modify cases 3, 4, 8 and 10 to sync with the test plan.
> >
> > Signed-off-by: Lingli Chen <linglix.chen@intel.com>
> 
> The new cases 13 and 14 fail because of a DPDK bug: the payloads differ in
> the received packets.
> 
> Tested-by: Lingli Chen <linglix.chen@intel.com>


* Re: [dts] [PATCH V1 1/2] test_plans/loopback_virtio_user_server_mode: add two new cases
  2021-11-05 15:07 ` [dts] [PATCH V1 1/2] test_plans/loopback_virtio_user_server_mode: " Lingli Chen
@ 2021-11-05 14:38   ` Tu, Lijuan
  2021-11-08  6:31     ` Chen, LingliX
  0 siblings, 1 reply; 10+ messages in thread
From: Tu, Lijuan @ 2021-11-05 14:38 UTC (permalink / raw)
  To: Chen, LingliX, dts; +Cc: Wang, Yinan, Chen, LingliX

> -----Original Message-----
> From: dts <dts-bounces@dpdk.org> On Behalf Of Lingli Chen
> Sent: November 5, 2021 23:07
> To: dts@dpdk.org
> Cc: Wang, Yinan <yinan.wang@intel.com>; Chen, LingliX
> <linglix.chen@intel.com>
> Subject: [dts] [PATCH V1 1/2] test_plans/loopback_virtio_user_server_mode:
> add two new cases
> 
> 1. Cases 3 and 10 change to 8 queues.
> 2. Cases 4 and 8 add 8K packets.
> 3. Add 2 new cases: cases 13 and 14.

Could you please explain why these changes were made?


* [dts] [PATCH V1 0/2] loopback_virtio_user_server_mode: add two new cases
@ 2021-11-05 15:07 Lingli Chen
  2021-11-05 15:07 ` [dts] [PATCH V1 1/2] test_plans/loopback_virtio_user_server_mode: " Lingli Chen
  2021-11-05 15:07 ` [dts] [PATCH V1 2/2] tests/loopback_virtio_user_server_mode: " Lingli Chen
  0 siblings, 2 replies; 10+ messages in thread
From: Lingli Chen @ 2021-11-05 15:07 UTC (permalink / raw)
  To: dts; +Cc: yinan.wang, Lingli Chen

Add two new cases to the loopback_virtio_user_server_mode test plan and test suite.

Lingli Chen (2):
  test_plans/loopback_virtio_user_server_mode: add two new cases
  tests/loopback_virtio_user_server_mode: add two new cases

 ...back_virtio_user_server_mode_test_plan.rst | 242 +++++++++---
 ...tSuite_loopback_virtio_user_server_mode.py | 353 +++++++++++++++---
 2 files changed, 482 insertions(+), 113 deletions(-)

-- 
2.33.1



* [dts] [PATCH V1 1/2] test_plans/loopback_virtio_user_server_mode: add two new cases
  2021-11-05 15:07 [dts] [PATCH V1 0/2] loopback_virtio_user_server_mode: add two new cases Lingli Chen
@ 2021-11-05 15:07 ` Lingli Chen
  2021-11-05 14:38   ` Tu, Lijuan
  2021-11-05 15:07 ` [dts] [PATCH V1 2/2] tests/loopback_virtio_user_server_mode: " Lingli Chen
  1 sibling, 1 reply; 10+ messages in thread
From: Lingli Chen @ 2021-11-05 15:07 UTC (permalink / raw)
  To: dts; +Cc: yinan.wang, Lingli Chen

1. Cases 3 and 10 change to 8 queues.
2. Cases 4 and 8 add 8K packets.
3. Add 2 new cases: cases 13 and 14.

Signed-off-by: Lingli Chen <linglix.chen@intel.com>
---
 ...back_virtio_user_server_mode_test_plan.rst | 242 ++++++++++++++----
 1 file changed, 187 insertions(+), 55 deletions(-)

diff --git a/test_plans/loopback_virtio_user_server_mode_test_plan.rst b/test_plans/loopback_virtio_user_server_mode_test_plan.rst
index 18d580f4..810db9ea 100644
--- a/test_plans/loopback_virtio_user_server_mode_test_plan.rst
+++ b/test_plans/loopback_virtio_user_server_mode_test_plan.rst
@@ -84,35 +84,38 @@ Test Case 2:  Basic test for split ring server mode
 Test Case 3: loopback reconnect test with split ring mergeable path and server mode
 ===================================================================================
 
-1. launch vhost as client mode with 2 queues::
+1. launch vhost as client mode with 8 queues::
 
     rm -rf vhost-net*
     ./testpmd -c 0xe -n 4 --no-pci --file-prefix=vhost \
-    --vdev 'eth_vhost0,iface=vhost-net,client=1,queues=2' -- -i --nb-cores=2 --rxq=2 --txq=2
+    --vdev 'eth_vhost0,iface=vhost-net,client=1,queues=8' -- -i --nb-cores=2 --rxq=8 --txq=8
     >set fwd mac
     >start
 
-2. Launch virtio-user as server mode with 2 queues::
+2. Launch virtio-user as server mode with 8 queues and check that the expected throughput is achieved::
 
     ./testpmd -n 4 -l 5-7 --no-pci --file-prefix=virtio \
-    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,server=1,queues=2,mrg_rxbuf=1,in_order=0 \
-    -- -i --tx-offloads=0x0 --enable-hw-vlan-strip --rss-ip --nb-cores=2 --rxq=2 --txq=2
+    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,server=1,queues=8,mrg_rxbuf=1,in_order=0 \
+    -- -i --tx-offloads=0x0 --enable-hw-vlan-strip --rss-ip --nb-cores=2 --rxq=8 --txq=8
     >set fwd mac
+    >set txpkts 2000,2000,2000,2000
     >start tx_first 32
+    >show port stats all
 
 3. Quit vhost side testpmd, check the virtio-user side link status::
 
     testpmd> show port info 0
     #it should show "down"
 
-4. Relaunch vhost and send packets::
+4. Relaunch vhost and send chained packets::
 
     ./testpmd -c 0xe -n 4 --no-pci --file-prefix=vhost \
-    --vdev 'eth_vhost0,iface=vhost-net,client=1,queues=2' -- -i --nb-cores=2 --rxq=2 --txq=2
+    --vdev 'eth_vhost0,iface=vhost-net,client=1,queues=8' -- -i --nb-cores=2 --rxq=8 --txq=8
     >set fwd mac
+    >set txpkts 2000,2000,2000,2000
     >start tx_first 32
 
-5. Check the virtio-user side link status and run below command to get throughput,verify the loopback throughput is not zero::
+5. Check the virtio-user side link status and run the below command to get the throughput, checking that the expected throughput is achieved::
 
     testpmd> show port info 0
     #it should show up"
@@ -130,22 +133,24 @@ Test Case 3: loopback reconnect test with split ring mergeable path and server m
 8. Relaunch virtio-user and send packets::
 
     ./testpmd -n 4 -l 5-7 --no-pci --file-prefix=virtio \
-    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,server=1,queues=2,mrg_rxbuf=1,in_order=0 \
-    -- -i --tx-offloads=0x0 --enable-hw-vlan-strip --rss-ip --nb-cores=2 --rxq=2 --txq=2
+    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,server=1,queues=8,mrg_rxbuf=1,in_order=0 \
+    -- -i --tx-offloads=0x0 --enable-hw-vlan-strip --rss-ip --nb-cores=2 --rxq=8 --txq=8
     >set fwd mac
+    >set txpkts 2000,2000,2000,2000
     >start tx_first 32
 
-9. Check the vhost side link status and run below command to get throughput, verify the loopback throughput is not zero::
+9. Check the vhost side link status and run the below command to get the throughput, checking that the expected throughput is achieved::
 
     testpmd> show port info 0
     #it should show up"
     testpmd>show port stats all
 
-10. Port restart at vhost side by below command and re-calculate the average throughput::
+10. Restart the port at the vhost side with the below commands and check that the expected throughput is achieved::
 
       testpmd>stop
       testpmd>port stop 0
       testpmd>port start 0
+      testpmd>set txpkts 2000,2000,2000,2000
       testpmd>start tx_first 32
       testpmd>show port stats all
 
@@ -164,13 +169,15 @@ Test Case 4: loopback reconnect test with split ring inorder mergeable path and
     >set fwd mac
     >start
 
-2. Launch virtio-user as server mode with 2 queues::
+2. Launch virtio-user as server mode with 2 queues and check that the expected throughput is achieved::
 
     ./testpmd -n 4 -l 5-7 --no-pci --file-prefix=virtio \
     --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,server=1,queues=2,mrg_rxbuf=1,in_order=1 \
     -- -i --tx-offloads=0x0 --enable-hw-vlan-strip --rss-ip --nb-cores=2 --rxq=2 --txq=2
     >set fwd mac
+    >set txpkts 2000,2000,2000,2000
     >start tx_first 32
+    >show port stats all
 
 3. Quit vhost side testpmd, check the virtio-user side link status::
 
@@ -182,9 +189,10 @@ Test Case 4: loopback reconnect test with split ring inorder mergeable path and
     ./testpmd -c 0xe -n 4 --no-pci --file-prefix=vhost \
     --vdev 'eth_vhost0,iface=vhost-net,client=1,queues=2' -- -i --nb-cores=2 --rxq=2 --txq=2
     >set fwd mac
+    >set txpkts 2000,2000,2000,2000
     >start tx_first 32
 
-5. Check the virtio-user side link status and run below command to get throughput,verify the loopback throughput is not zero::
+5. Check the virtio-user side link status and run the below command to get the throughput, checking that the expected throughput is achieved::
 
     testpmd> show port info 0
     #it should show up"
@@ -205,19 +213,21 @@ Test Case 4: loopback reconnect test with split ring inorder mergeable path and
     --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,server=1,queues=2,mrg_rxbuf=1,in_order=1\
     -- -i --tx-offloads=0x0 --enable-hw-vlan-strip --rss-ip --nb-cores=2 --rxq=2 --txq=2
     >set fwd mac
+    >set txpkts 2000,2000,2000,2000
     >start tx_first 32
 
-9. Check the vhost side link status and run below command to get throughput, verify the loopback throughput is not zero::
+9. Check the vhost side link status and run the below command to get the throughput, checking that the expected throughput is achieved::
 
     testpmd> show port info 0
     #it should show up"
     testpmd>show port stats all
 
-10. Port restart at vhost side by below command and re-calculate the average throughput::
+10. Restart the port at the vhost side with the below commands and check that the expected throughput is achieved::
 
       testpmd>stop
       testpmd>port stop 0
       testpmd>port start 0
+      testpmd>set txpkts 2000,2000,2000,2000
       testpmd>start tx_first 32
       testpmd>show port stats all
 
@@ -236,13 +246,14 @@ Test Case 5: loopback reconnect test with split ring inorder non-mergeable path
     >set fwd mac
     >start
 
-2. Launch virtio-user as server mode with 2 queues::
+2. Launch virtio-user as server mode with 2 queues and check that the expected throughput is achieved::
 
     ./testpmd -n 4 -l 5-7 --no-pci --file-prefix=virtio \
     --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,server=1,queues=2,mrg_rxbuf=0,in_order=1 \
     -- -i --tx-offloads=0x0 --enable-hw-vlan-strip --rss-ip --nb-cores=2 --rxq=2 --txq=2
     >set fwd mac
     >start tx_first 32
+    >show port stats all
 
 3. Quit vhost side testpmd, check the virtio-user side link status::
 
@@ -256,7 +267,7 @@ Test Case 5: loopback reconnect test with split ring inorder non-mergeable path
     >set fwd mac
     >start tx_first 32
 
-5. Check the virtio-user side link status and run below command to get throughput,verify the loopback throughput is not zero::
+5. Check the virtio-user side link status and run the below command to get the throughput, checking that the expected throughput is achieved::
 
     testpmd> show port info 0
     #it should show up"
@@ -279,13 +290,13 @@ Test Case 5: loopback reconnect test with split ring inorder non-mergeable path
     >set fwd mac
     >start tx_first 32
 
-9. Check the vhost side link status and run below command to get throughput, verify the loopback throughput is not zero::
+9. Check the vhost side link status and run the below command to get the throughput, checking that the expected throughput is achieved::
 
     testpmd> show port info 0
     #it should show up"
     testpmd>show port stats all
 
-10. Port restart at vhost side by below command and re-calculate the average throughput::
+10. Restart the port at the vhost side with the below commands and check that the expected throughput is achieved::
 
       testpmd>stop
       testpmd>port stop 0
@@ -308,13 +319,14 @@ Test Case 6: loopback reconnect test with split ring non-mergeable path and serv
     >set fwd mac
     >start
 
-2. Launch virtio-user as server mode with 2 queues::
+2. Launch virtio-user as server mode with 2 queues and check that the expected throughput is achieved::
 
     ./testpmd -n 4 -l 5-7 --no-pci --file-prefix=virtio \
     --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,server=1,queues=2,mrg_rxbuf=0,in_order=0,vectorized=1 \
     -- -i --tx-offloads=0x0 --enable-hw-vlan-strip --rss-ip --nb-cores=2 --rxq=2 --txq=2
     >set fwd mac
     >start tx_first 32
+    >show port stats all
 
 3. Quit vhost side testpmd, check the virtio-user side link status::
 
@@ -328,7 +340,7 @@ Test Case 6: loopback reconnect test with split ring non-mergeable path and serv
     >set fwd mac
     >start tx_first 32
 
-5. Check the virtio-user side link status and run below command to get throughput,verify the loopback throughput is not zero::
+5. Check the virtio-user side link status and run the below command to get the throughput, checking that the expected throughput is achieved::
 
     testpmd> show port info 0
     #it should show up"
@@ -351,13 +363,13 @@ Test Case 6: loopback reconnect test with split ring non-mergeable path and serv
     >set fwd mac
     >start tx_first 32
 
-9. Check the vhost side link status and run below command to get throughput, verify the loopback throughput is not zero::
+9. Check the vhost side link status and run the below command to get the throughput, checking that the expected throughput is achieved::
 
     testpmd> show port info 0
     #it should show up"
     testpmd>show port stats all
 
-10. Port restart at vhost side by below command and re-calculate the average throughput::
+10. Restart the port at the vhost side with the below commands and check that the expected throughput is achieved::
 
       testpmd>stop
       testpmd>port stop 0
@@ -380,13 +392,14 @@ Test Case 7: loopback reconnect test with split ring vector_rx path and server m
     >set fwd mac
     >start
 
-2. Launch virtio-user as server mode with 2 queues::
+2. Launch virtio-user as server mode with 2 queues and check that the expected throughput is achieved::
 
     ./testpmd -n 4 -l 5-7 --no-pci --file-prefix=virtio \
     --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,server=1,queues=2,mrg_rxbuf=0,in_order=0,vectorized=1 \
     -- -i --nb-cores=2 --rxq=2 --txq=2
     >set fwd mac
     >start tx_first 32
+    >show port stats all
 
 3. Quit vhost side testpmd, check the virtio-user side link status::
 
@@ -400,7 +413,7 @@ Test Case 7: loopback reconnect test with split ring vector_rx path and server m
     >set fwd mac
     >start tx_first 32
 
-5. Check the virtio-user side link status and run below command to get throughput,verify the loopback throughput is not zero::
+5. Check the virtio-user side link status and run the below command to get the throughput, checking that the expected throughput is achieved::
 
     testpmd> show port info 0
     #it should show up"
@@ -423,13 +436,13 @@ Test Case 7: loopback reconnect test with split ring vector_rx path and server m
     >set fwd mac
     >start tx_first 32
 
-9. Check the vhost side link status and run below command to get throughput, verify the loopback throughput is not zero::
+9. Check the vhost side link status and run the below command to get the throughput, checking that the expected throughput is achieved::
 
     testpmd> show port info 0
     #it should show up"
     testpmd>show port stats all
 
-10. Port restart at vhost side by below command and re-calculate the average throughput::
+10. Restart the port at the vhost side with the below commands and check that the expected throughput is achieved::
 
       testpmd>stop
       testpmd>port stop 0
@@ -452,13 +465,15 @@ Test Case 8: loopback reconnect test with packed ring mergeable path and server
     >set fwd mac
     >start
 
-2. Launch virtio-user as server mode with 2 queues::
+2. Launch virtio-user as server mode with 2 queues and check that the expected throughput is achieved::
 
     ./testpmd -n 4 -l 5-7 --no-pci --file-prefix=virtio \
     --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,server=1,queues=2,packed_vq=1,mrg_rxbuf=1,in_order=0 \
     -- -i --tx-offloads=0x0 --enable-hw-vlan-strip --rss-ip --nb-cores=2 --rxq=2 --txq=2
     >set fwd mac
+    >set txpkts 2000,2000,2000,2000
     >start tx_first 32
+    >show port stats all
 
 3. Quit vhost side testpmd, check the virtio-user side link status::
 
@@ -470,9 +485,10 @@ Test Case 8: loopback reconnect test with packed ring mergeable path and server
     ./testpmd -c 0xe -n 4 --no-pci --file-prefix=vhost \
     --vdev 'eth_vhost0,iface=vhost-net,client=1,queues=2' -- -i --nb-cores=2 --rxq=2 --txq=2
     >set fwd mac
+    >set txpkts 2000,2000,2000,2000
     >start tx_first 32
 
-5. Check the virtio-user side link status and run below command to get throughput,verify the loopback throughput is not zero::
+5. Check the virtio-user side link status and run the below command to get the throughput, checking that the expected throughput is achieved::
 
     testpmd> show port info 0
     #it should show up"
@@ -493,19 +509,21 @@ Test Case 8: loopback reconnect test with packed ring mergeable path and server
     --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,server=1,queues=2,packed_vq=1,mrg_rxbuf=1,in_order=0 \
     -- -i --tx-offloads=0x0 --enable-hw-vlan-strip --rss-ip --nb-cores=2 --rxq=2 --txq=2
     >set fwd mac
+    >set txpkts 2000,2000,2000,2000
     >start tx_first 32
 
-9. Check the vhost side link status and run below command to get throughput, verify the loopback throughput is not zero::
+9. Check the vhost side link status and run the below command to get the throughput, checking that the expected throughput is achieved::
 
     testpmd> show port info 0
     #it should show up"
     testpmd>show port stats all
 
-10. Port restart at vhost side by below command and re-calculate the average throughput::
+10. Restart the port at the vhost side with the below commands and check that the expected throughput is achieved::
 
      testpmd>stop
      testpmd>port stop 0
      testpmd>port start 0
+     testpmd>set txpkts 2000,2000,2000,2000
      testpmd>start tx_first 32
      testpmd>show port stats all
 
@@ -524,13 +542,14 @@ Test Case 9: loopback reconnect test with packed ring non-mergeable path and ser
     >set fwd mac
     >start
 
-2. Launch virtio-user as server mode with 2 queues::
+2. Launch virtio-user as server mode with 2 queues and check that the expected throughput is achieved::
 
     ./testpmd -n 4 -l 5-7 --no-pci --file-prefix=virtio \
     --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,server=1,queues=2,packed_vq=1,mrg_rxbuf=0,in_order=0 \
     -- -i --tx-offloads=0x0 --enable-hw-vlan-strip --rss-ip --nb-cores=2 --rxq=2 --txq=2
     >set fwd mac
     >start tx_first 32
+    >show port stats all
 
 3. Quit vhost side testpmd, check the virtio-user side link status::
 
@@ -544,7 +563,7 @@ Test Case 9: loopback reconnect test with packed ring non-mergeable path and ser
     >set fwd mac
     >start tx_first 32
 
-5. Check the virtio-user side link status and run below command to get throughput,verify the loopback throughput is not zero::
+5. Check the virtio-user side link status and run the below command to get the throughput, checking that the expected throughput is achieved::
 
     testpmd> show port info 0
     #it should show up"
@@ -567,13 +586,13 @@ Test Case 9: loopback reconnect test with packed ring non-mergeable path and ser
     >set fwd mac
     >start tx_first 32
 
-9. Check the vhost side link status and run below command to get throughput, verify the loopback throughput is not zero::
+9. Check the vhost side link status and run the below command to get the throughput, checking that the expected throughput is achieved::
 
     testpmd> show port info 0
     #it should show up"
     testpmd>show port stats all
 
-10. Port restart at vhost side by below command and re-calculate the average throughput::
+10. Restart the port at the vhost side with the below commands and check that the expected throughput is achieved::
 
      testpmd>stop
      testpmd>port stop 0
@@ -588,21 +607,23 @@ Test Case 9: loopback reconnect test with packed ring non-mergeable path and ser
 Test Case 10: loopback reconnect test with packed ring inorder mergeable path and server mode
 =============================================================================================
 
-1. launch vhost as client mode with 2 queues::
+1. launch vhost as client mode with 8 queues::
 
     rm -rf vhost-net*
     ./testpmd -c 0xe -n 4 --no-pci --file-prefix=vhost \
-    --vdev 'eth_vhost0,iface=vhost-net,client=1,queues=2' -- -i --nb-cores=2 --rxq=2 --txq=2
+    --vdev 'eth_vhost0,iface=vhost-net,client=1,queues=8' -- -i --nb-cores=2 --rxq=8 --txq=8
     >set fwd mac
     >start
 
-2. Launch virtio-user as server mode with 2 queues::
+2. Launch virtio-user as server mode with 8 queues and check that the expected throughput is achieved::
 
     ./testpmd -n 4 -l 5-7 --no-pci --file-prefix=virtio \
-    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,server=1,queues=2,packed_vq=1,mrg_rxbuf=1,in_order=1 \
-    -- -i --tx-offloads=0x0 --enable-hw-vlan-strip --rss-ip --nb-cores=2 --rxq=2 --txq=2
+    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,server=1,queues=8,packed_vq=1,mrg_rxbuf=1,in_order=1 \
+    -- -i --tx-offloads=0x0 --enable-hw-vlan-strip --rss-ip --nb-cores=2 --rxq=8 --txq=8
     >set fwd mac
+    >set txpkts 2000,2000,2000,2000
     >start tx_first 32
+    >show port stats all
 
 3. Quit vhost side testpmd, check the virtio-user side link status::
 
@@ -612,11 +633,12 @@ Test Case 10: loopback reconnect test with packed ring inorder mergeable path an
 4. Relaunch vhost and send packets::
 
     ./testpmd -c 0xe -n 4 --no-pci --file-prefix=vhost \
-    --vdev 'eth_vhost0,iface=vhost-net,client=1,queues=2' -- -i --nb-cores=2 --rxq=2 --txq=2
+    --vdev 'eth_vhost0,iface=vhost-net,client=1,queues=8' -- -i --nb-cores=2 --rxq=8 --txq=8
     >set fwd mac
+    >set txpkts 2000,2000,2000,2000
     >start tx_first 32
 
-5. Check the virtio-user side link status and run below command to get throughput,verify the loopback throughput is not zero::
+5. Check the virtio-user side link status and run the below command to get the throughput, checking that the expected throughput is achieved::
 
     testpmd> show port info 0
     #it should show up"
@@ -634,22 +656,24 @@ Test Case 10: loopback reconnect test with packed ring inorder mergeable path an
 8. Relaunch virtio-user and send packets::
 
     ./testpmd -n 4 -l 5-7 --no-pci --file-prefix=virtio \
-    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,server=1,queues=2,packed_vq=1,mrg_rxbuf=1,in_order=1 \
-    -- -i --tx-offloads=0x0 --enable-hw-vlan-strip --rss-ip --nb-cores=2 --rxq=2 --txq=2
+    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,server=1,queues=8,packed_vq=1,mrg_rxbuf=1,in_order=1 \
+    -- -i --tx-offloads=0x0 --enable-hw-vlan-strip --rss-ip --nb-cores=2 --rxq=8 --txq=8
     >set fwd mac
+    >set txpkts 2000,2000,2000,2000
     >start tx_first 32
 
-9. Check the vhost side link status and run below command to get throughput, verify the loopback throughput is not zero::
+9. Check the vhost side link status and run the below command to get the throughput, checking that the expected throughput is achieved::
 
     testpmd> show port info 0
     #it should show up"
     testpmd>show port stats all
 
-10. Port restart at vhost side by below command and re-calculate the average throughput::
+10. Restart the port at the vhost side with the below commands and check that the expected throughput is achieved::
 
      testpmd>stop
      testpmd>port stop 0
      testpmd>port start 0
+     testpmd>set txpkts 2000,2000,2000,2000
      testpmd>start tx_first 32
      testpmd>show port stats all
 
@@ -668,13 +692,14 @@ Test Case 11: loopback reconnect test with packed ring inorder non-mergeable pat
     >set fwd mac
     >start
 
-2. Launch virtio-user as server mode with 2 queues::
+2. Launch virtio-user as server mode with 2 queues and check that the expected throughput is achieved::
 
     ./testpmd -n 4 -l 5-7 --no-pci --file-prefix=virtio \
     --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,server=1,queues=2,packed_vq=1,mrg_rxbuf=0,in_order=1,vectorized=1 \
     -- -i --rx-offloads=0x10 --enable-hw-vlan-strip --rss-ip --nb-cores=2 --rxq=2 --txq=2
     >set fwd mac
     >start tx_first 32
+    >show port stats all
 
 3. Quit vhost side testpmd, check the virtio-user side link status::
 
@@ -688,7 +713,7 @@ Test Case 11: loopback reconnect test with packed ring inorder non-mergeable pat
     >set fwd mac
     >start tx_first 32
 
-5. Check the virtio-user side link status and run below command to get throughput,verify the loopback throughput is not zero::
+5. Check the virtio-user side link status and run the below command to get the throughput, checking that the expected throughput is achieved::
 
     testpmd> show port info 0
     #it should show up"
@@ -711,13 +736,13 @@ Test Case 11: loopback reconnect test with packed ring inorder non-mergeable pat
     >set fwd mac
     >start tx_first 32
 
-9. Check the vhost side link status and run below command to get throughput, verify the loopback throughput is not zero::
+9. Check the vhost side link status and run the below command to get the throughput, checking that the expected throughput is achieved::
 
     testpmd> show port info 0
     #it should show up"
     testpmd>show port stats all
 
-10. Port restart at vhost side by below command and re-calculate the average throughput::
+10. Restart the port at the vhost side with the below commands and check that the expected throughput is achieved::
 
      testpmd>stop
      testpmd>port stop 0
@@ -740,13 +765,14 @@ Test Case 12: loopback reconnect test with packed ring vectorized path and serve
     >set fwd mac
     >start
 
-2. Launch virtio-user as server mode with 2 queues::
+2. Launch virtio-user as server mode with 2 queues and check that the expected throughput is achieved::
 
     ./testpmd -n 4 -l 5-7 --log-level=pmd.net.virtio.driver,8 --no-pci --file-prefix=virtio --force-max-simd-bitwidth=512 \
     --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,server=1,queues=2,packed_vq=1,mrg_rxbuf=0,in_order=1,vectorized=1 \
     -- -i --tx-offloads=0x0 --enable-hw-vlan-strip --rss-ip --nb-cores=2 --rxq=2 --txq=2
     >set fwd mac
     >start tx_first 32
+    >show port stats all
 
 3. Quit vhost side testpmd, check the virtio-user side link status::
 
@@ -760,7 +786,7 @@ Test Case 12: loopback reconnect test with packed ring vectorized path and serve
     >set fwd mac
     >start tx_first 32
 
-5. Check the virtio-user side link status and run below command to get throughput,verify the loopback throughput is not zero::
+5. Check the virtio-user side link status and run the below command to get the throughput, checking that the expected throughput is achieved::
 
     testpmd> show port info 0
     #it should show up"
@@ -783,13 +809,13 @@ Test Case 12: loopback reconnect test with packed ring vectorized path and serve
     >set fwd mac
     >start tx_first 32
 
-9. Check the vhost side link status and run below command to get throughput, verify the loopback throughput is not zero::
+9. Check the vhost side link status and run the below command to get the throughput, checking that the expected throughput is achieved::
 
     testpmd> show port info 0
     #it should show up"
     testpmd>show port stats all
 
-10. Port restart at vhost side by below command and re-calculate the average throughput::
+10. Restart the port at the vhost side with the below commands and check that the expected throughput is achieved::
 
      testpmd>stop
      testpmd>port stop 0
@@ -800,3 +826,109 @@ Test Case 12: loopback reconnect test with packed ring vectorized path and serve
 11. Check each RX/TX queue has packets::
 
      testpmd>stop
+
+Test Case 13: loopback packed ring and split ring mergeable path payload check test using server mode and multi-queues
+======================================================================================================================
+
+1. launch vhost::
+
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 32-33 --no-pci --file-prefix=vhost -n 4 --vdev 'eth_vhost0,iface=vhost-net0,queues=8,client=1' -- -i --nb-cores=1 --rxq=8 --txq=8 --txd=1024 --rxd=1024
+
+2. Launch virtio-user with packed ring mergeable inorder path::
+
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 30,31 -n 4 --file-prefix=virtio --no-pci --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=./vhost-net0,queues=8,mrg_rxbuf=1,in_order=1,packed_vq=1,server=1 -- -i --nb-cores=1 --rxq=8 --txq=8 --txd=1024 --rxd=1024
+    testpmd> set fwd csum
+    testpmd> start
+
+3. Attach pdump secondary process to primary process by same file-prefix::
+
+   ./x86_64-native-linuxapp-gcc/app/dpdk-pdump -v --file-prefix=virtio -- --pdump 'device_id=net_virtio_user0,queue=*,rx-dev=./pdump-virtio-rx.pcap,mbuf-size=8000'
+
+4. Send large packets from vhost::
+
+   testpmd> set fwd csum
+   testpmd> set txpkts 2000,2000,2000,2000
+   testpmd> set burst 1
+   testpmd> start tx_first 1
+   testpmd> stop
+
+5. Quit pdump and check in the pcap file that every packet is 8000 bytes long and that the payloads of all received packets are identical.
+
+6. Quit and relaunch vhost, then rerun steps 3-5.
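
The step-5 payload check can be scripted once the packets are read out of pdump-virtio-rx.pcap. Below is a minimal sketch run against synthetic data; ``check_payloads`` is a hypothetical helper, not part of the DTS suite:

```python
# Minimal sketch of the step-5 verification: after parsing the packets out
# of pdump-virtio-rx.pcap, every packet must have the expected length and
# all payloads must be byte-identical (the DPDK bug mentioned in this
# thread shows up as differing payloads).
# check_payloads() is a hypothetical helper, not part of the DTS suite.
def check_payloads(packets, expected_len=8000):
    if not packets:
        return False, "no packets captured"
    for i, pkt in enumerate(packets):
        if len(pkt) != expected_len:
            return False, "packet %d length %d != %d" % (i, len(pkt), expected_len)
    first = packets[0]
    for i, pkt in enumerate(packets[1:], start=1):
        if pkt != first:
            return False, "packet %d payload differs from packet 0" % i
    return True, "ok"

# Synthetic stand-ins for packets read from the pcap:
good = [b"\x5a" * 8000 for _ in range(32)]
bad = good[:16] + [b"\xa5" * 8000] + good[17:]
print(check_payloads(good))  # → (True, 'ok')
print(check_payloads(bad))   # → (False, 'packet 16 payload differs from packet 0')
```

In the real suite the packet bytes would come from a pcap reader; the pass/fail logic is the same.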
+
+7. Quit and relaunch virtio with packed ring mergeable path as below::
+
+   ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 30,31 -n 4 --file-prefix=virtio-user0 --no-pci --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=./vhost-net0,queues=8,mrg_rxbuf=1,in_order=0,packed_vq=1,server=1 -- -i --nb-cores=1 --rxq=8 --txq=8 --txd=1024 --rxd=1024
+   testpmd> set fwd csum
+   testpmd> start
+
+8. Rerun steps 3-6.
+
+9. Quit and relaunch virtio with split ring mergeable inorder path as below::
+
+   ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 30,31 -n 4 --file-prefix=virtio-user0 --no-pci --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=./vhost-net0,queues=8,mrg_rxbuf=1,in_order=1,server=1 -- -i --nb-cores=1 --rxq=8 --txq=8 --txd=1024 --rxd=1024
+   testpmd> set fwd csum
+   testpmd> start
+
+10. Rerun steps 3-6.
+
+11. Quit and relaunch virtio with split ring mergeable path as below::
+
+   ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 30,31 -n 4 --file-prefix=virtio-user0 --no-pci --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=./vhost-net0,queues=8,mrg_rxbuf=1,in_order=0,server=1 -- -i --nb-cores=1 --rxq=8 --txq=8 --txd=1024 --rxd=1024
+   testpmd> set fwd csum
+   testpmd> start
+
+12. Rerun steps 3-6.
+
+Test Case 14: loopback packed ring and split ring mergeable path payload check test with CBDMA using server mode and multi-queues
+============================================================================================================================================
+
+1. Bind 8 CBDMA ports to vfio-pci and launch vhost::
+
+   ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 32-33 -n 4 --vdev 'eth_vhost0,iface=vhost-net0,queues=8,client=1,dmas=[txq0@0000:80:04.0;txq1@0000:80:04.1;txq2@0000:80:04.2;txq3@0000:80:04.3;txq4@0000:80:04.4;txq5@0000:80:04.5;txq6@0000:80:04.6;txq7@0000:80:04.7]' -- -i --nb-cores=1 --rxq=8 --txq=8 --txd=1024 --rxd=1024
+
+2. Launch virtio-user with packed ring mergeable inorder path::
+
+   ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 30,31 -n 4 --file-prefix=virtio-user0 --no-pci --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=./vhost-net0,queues=8,mrg_rxbuf=1,in_order=1,packed_vq=1,server=1 -- -i --nb-cores=1 --rxq=8 --txq=8 --txd=1024 --rxd=1024
+   testpmd> set fwd csum
+   testpmd> start
+
+3. Attach pdump secondary process to primary process by same file-prefix::
+
+   ./x86_64-native-linuxapp-gcc/app/dpdk-pdump -v --file-prefix=virtio-user0 -- --pdump 'device_id=net_virtio_user0,queue=*,rx-dev=./pdump-virtio-rx.pcap,mbuf-size=8000'
+
+4. Send large pkts from vhost, check that the loopback performance is as expected and that each queue receives packets::
+
+   testpmd> vhost enable tx all
+   testpmd> set fwd csum
+   testpmd> set txpkts 64,64,64,2000,2000,2000
+   testpmd> start tx_first 32
+   testpmd> stop
+
+5. Quit pdump, check that all packets in the pcap file are 6192 bytes long and that the payload is identical in every received packet.
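
The expected lengths follow directly from the ``set txpkts`` segment lists: with mergeable Rx buffers, the segments of one transmitted packet are received as a single packet whose length is the segment sum. A minimal sketch (illustration only; the helper name is made up):

```python
def merged_pkt_len(txpkts):
    # With mergeable Rx buffers, all segments of one transmitted packet
    # land in a single received packet, so its length is the segment sum.
    return sum(txpkts)

print(merged_pkt_len([2000, 2000, 2000, 2000]))        # 8000 (case 13)
print(merged_pkt_len([64, 64, 64, 2000, 2000, 2000]))  # 6192 (case 14)
```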
+
+6. Quit and relaunch vhost and rerun step3-5.
+
+7. Quit and relaunch virtio with packed ring mergeable path as below::
+
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 30,31 -n 4 --file-prefix=testpmd0 --no-pci --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=./vhost-net0,queues=8,mrg_rxbuf=1,in_order=0,packed_vq=1,server=1 -- -i --nb-cores=1 --rxq=8 --txq=8 --txd=1024 --rxd=1024
+   testpmd> set fwd csum
+   testpmd> start
+
+8. Rerun step3-6.
+
+9. Quit and relaunch virtio with split ring mergeable inorder path as below::
+
+   ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 30,31 -n 4 --file-prefix=testpmd0 --no-pci --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=./vhost-net0,queues=8,mrg_rxbuf=1,in_order=1,server=1 -- -i --nb-cores=1 --rxq=8 --txq=8 --txd=1024 --rxd=1024
+   testpmd> set fwd csum
+   testpmd> start
+
+10. Rerun step3-6.
+
+11. Quit and relaunch virtio with split ring mergeable path as below::
+
+   ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 30,31 -n 4 --file-prefix=testpmd0 --no-pci --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=./vhost-net0,queues=8,mrg_rxbuf=1,in_order=0,server=1 -- -i --nb-cores=1 --rxq=8 --txq=8 --txd=1024 --rxd=1024
+   testpmd> set fwd csum
+   testpmd> start
+
+12. Rerun step3-6.
-- 
2.33.1


^ permalink raw reply	[flat|nested] 10+ messages in thread

* [dts] [PATCH V1 2/2] tests/loopback_virtio_user_server_mode: add two new cases
  2021-11-05 15:07 [dts] [PATCH V1 0/2] loopback_virtio_user_server_mode: add two new cases Lingli Chen
  2021-11-05 15:07 ` [dts] [PATCH V1 1/2] test_plans/loopback_virtio_user_server_mode: " Lingli Chen
@ 2021-11-05 15:07 ` Lingli Chen
  2021-11-05  7:17   ` Chen, LingliX
  2021-11-22  8:32   ` [dts][PATCH " Wang, Yinan
  1 sibling, 2 replies; 10+ messages in thread
From: Lingli Chen @ 2021-11-05 15:07 UTC (permalink / raw)
  To: dts; +Cc: yinan.wang, Lingli Chen

1. Add 2 new cases: case 13, 14.
2. Modify case 3, 4, 8, 10 sync with testplan.

Signed-off-by: Lingli Chen <linglix.chen@intel.com>
---
 ...tSuite_loopback_virtio_user_server_mode.py | 353 +++++++++++++++---
 1 file changed, 295 insertions(+), 58 deletions(-)

diff --git a/tests/TestSuite_loopback_virtio_user_server_mode.py b/tests/TestSuite_loopback_virtio_user_server_mode.py
index 7fbfe804..119b178f 100644
--- a/tests/TestSuite_loopback_virtio_user_server_mode.py
+++ b/tests/TestSuite_loopback_virtio_user_server_mode.py
@@ -37,11 +37,10 @@ Test loopback virtio-user server mode
 """
 import re
 import time
-
 import framework.utils as utils
 from framework.pmd_output import PmdOutput
 from framework.test_case import TestCase
-
+from framework.packet import Packet
 
 class TestLoopbackVirtioUserServerMode(TestCase):
 
@@ -61,6 +60,12 @@ class TestLoopbackVirtioUserServerMode(TestCase):
         self.core_list_host = self.core_list[3:6]
         self.path=self.dut.apps_name['test-pmd']
         self.testpmd_name = self.path.split("/")[-1]
+        self.app_pdump = self.dut.apps_name['pdump']
+        self.dump_pcap = "/root/pdump-rx.pcap"
+        self.device_str = ''
+        self.dut_ports = self.dut.get_ports()
+        self.ports_socket = self.dut.get_numa_id(self.dut_ports[0])
+        self.cbdma_dev_infos = []
 
     def set_up(self):
         """
@@ -108,13 +113,17 @@ class TestLoopbackVirtioUserServerMode(TestCase):
         if set_fwd_mac:
             self.virtio_user_pmd.execute_cmd("set fwd mac", "testpmd> ", 120)
 
-    def lanuch_vhost_testpmd_with_multi_queue(self, extern_params=""):
+    def lanuch_vhost_testpmd_with_multi_queue(self, extern_params="", set_fwd_mac=True):
         """
         start testpmd with multi qeueue
         """
-        self.lanuch_vhost_testpmd(self.queue_number, self.nb_cores, extern_params=extern_params)
+        eal_params = "--vdev 'eth_vhost0,iface=vhost-net,client=1,queues={}'".format(self.queue_number)
+        param = "--rxq={} --txq={} --nb-cores={} {}".format(self.queue_number, self.queue_number, self.nb_cores, extern_params)
+        self.vhost_pmd.start_testpmd(self.core_list_host, param=param, no_pci=True, ports=[], eal_param=eal_params, prefix='vhost', fixed_prefix=True)
+        if set_fwd_mac:
+            self.vhost_pmd.execute_cmd("set fwd mac", "testpmd> ", 120)
 
-    def lanuch_virtio_user_testpmd_with_multi_queue(self, mode, extern_params=""):
+    def lanuch_virtio_user_testpmd_with_multi_queue(self, mode, extern_params="", set_fwd_mac=True):
         """
         start testpmd of vhost user
         """
@@ -126,7 +135,8 @@ class TestLoopbackVirtioUserServerMode(TestCase):
         param = "{} --nb-cores={} --rxq={} --txq={}".format(extern_params, self.nb_cores, self.queue_number, self.queue_number)
         self.virtio_user_pmd.start_testpmd(cores=self.core_list_user, param=param, eal_param=eal_param, \
                 no_pci=True, ports=[], prefix="virtio", fixed_prefix=True)
-        self.virtio_user_pmd.execute_cmd("set fwd mac", "testpmd> ", 120)
+        if set_fwd_mac:
+            self.virtio_user_pmd.execute_cmd("set fwd mac", "testpmd> ", 120)
 
     def start_to_send_packets(self, session_rx, session_tx):
         """
@@ -136,6 +146,35 @@ class TestLoopbackVirtioUserServerMode(TestCase):
         session_rx.send_command("start", 3)
         session_tx.send_expect("start tx_first 32", "testpmd> ", 30)
 
+    def start_to_send_8k_packets(self, session_rx, session_tx):
+        """
+        start the testpmd of vhost-user and virtio-user
+        start to send 8k packets
+        """
+        session_rx.send_command("start", 3)
+        session_tx.send_expect("set txpkts 2000,2000,2000,2000", "testpmd> ", 30)
+        session_tx.send_expect("start tx_first 32", "testpmd> ", 30)
+
+    def start_to_send_8k_packets_csum(self, session_tx):
+        """
+        start the testpmd of vhost-user, start to send 8k packets
+        """
+        session_tx.send_expect("set fwd csum", "testpmd> ", 30)
+        session_tx.send_expect("set txpkts 2000,2000,2000,2000", "testpmd> ", 30)
+        session_tx.send_expect("set burst 1", "testpmd> ", 30)
+        session_tx.send_expect("start tx_first 1", "testpmd> ", 10)
+        session_tx.send_expect("stop", "testpmd> ", 30)
+
+    def start_to_send_8k_packets_csum_cbdma(self, session_tx):
+        """
+        start the testpmd of vhost-user, start to send 8k packets
+        """
+        session_tx.send_expect("vhost enable tx all", "testpmd> ", 30)
+        session_tx.send_expect("set fwd csum", "testpmd> ", 30)
+        session_tx.send_expect("set txpkts 64,64,64,2000,2000,2000", "testpmd> ", 30)
+        session_tx.send_expect("start tx_first 32", "testpmd> ", 5)
+        session_tx.send_expect("stop", "testpmd> ", 30)
+
     def check_port_throughput_after_port_stop(self):
         """
         check the throughput after port stop
@@ -182,6 +221,74 @@ class TestLoopbackVirtioUserServerMode(TestCase):
         self.check_port_link_status_after_port_restart()
         self.vhost_pmd.execute_cmd("start tx_first 32", "testpmd> ", 120)
 
+    def port_restart_send_8k_packets(self):
+        self.vhost_pmd.execute_cmd("stop", "testpmd> ", 120)
+        self.vhost_pmd.execute_cmd("port stop 0", "testpmd> ", 120)
+        self.check_port_throughput_after_port_stop()
+        self.vhost_pmd.execute_cmd("clear port stats all", "testpmd> ", 120)
+        self.vhost_pmd.execute_cmd("port start 0", "testpmd> ", 120)
+        self.check_port_link_status_after_port_restart()
+        self.vhost_pmd.execute_cmd("set txpkts 2000,2000,2000,2000", "testpmd> ", 120)
+        self.vhost_pmd.execute_cmd("start tx_first 32", "testpmd> ", 120)
+
+    def launch_pdump_to_capture_pkt(self, dump_port):
+        """
+        bootup pdump in dut
+        """
+        self.pdump_session = self.dut.new_session(suite="pdump")
+        cmd = self.app_pdump + " " + \
+                "-v --file-prefix=virtio -- " + \
+                "--pdump  'device_id=%s,queue=*,rx-dev=%s,mbuf-size=8000'"
+        self.pdump_session.send_expect(cmd % (dump_port, self.dump_pcap), 'Port')
+
+    def check_packet_payload_valid(self, pkt_len):
+        """
+        check the payload is valid
+        """
+        self.pdump_session.send_expect('^c', '# ', 60)
+        time.sleep(3)
+        self.dut.session.copy_file_from(src="%s" % self.dump_pcap, dst="%s" % self.dump_pcap)
+        pkt = Packet()
+        pkts = pkt.read_pcapfile(self.dump_pcap)
+        data = str(pkts[0]['Raw'])
+
+        for i in range(len(pkts)):
+            self.verify(len(pkts[i]) == pkt_len, "virtio-user0 received packet length is not %s bytes" % pkt_len)
+            value = str(pkts[i]['Raw'])
+            self.verify(data == value, "the payload of received packet %s has changed" % i)
+        self.dut.send_expect("rm -rf %s" % self.dump_pcap, "#")
+
+    def relanuch_vhost_testpmd_send_8k_packets(self, extern_params, cbdma=False):
+
+        self.vhost_pmd.execute_cmd("quit", "#", 60)
+        self.logger.info('now reconnect from vhost')
+        if cbdma:
+            self.lanuch_vhost_testpmd_with_cbdma(extern_params=extern_params)
+        else:
+            self.lanuch_vhost_testpmd_with_multi_queue(extern_params=extern_params, set_fwd_mac=False)
+        self.launch_pdump_to_capture_pkt(self.vuser0_port)
+        if cbdma:
+            self.start_to_send_8k_packets_csum_cbdma(self.vhost)
+        else:
+            self.start_to_send_8k_packets_csum(self.vhost)
+        self.check_packet_payload_valid(self.pkt_len)
+
+    def relanuch_virtio_testpmd_with_multi_path(self, mode, case_info, extern_params, cbdma=False):
+
+        self.virtio_user_pmd.execute_cmd("quit", "#", 60)
+        self.logger.info(case_info)
+        self.lanuch_virtio_user_testpmd_with_multi_queue(mode=mode, extern_params=extern_params, set_fwd_mac=False)
+        self.virtio_user_pmd.execute_cmd("set fwd csum")
+        self.virtio_user_pmd.execute_cmd("start")
+        self.launch_pdump_to_capture_pkt(self.vuser0_port)
+        if cbdma:
+            self.start_to_send_8k_packets_csum_cbdma(self.vhost)
+        else:
+            self.start_to_send_8k_packets_csum(self.vhost)
+        self.check_packet_payload_valid(self.pkt_len)
+
+        self.relanuch_vhost_testpmd_send_8k_packets(extern_params, cbdma)
+
     def relanuch_vhost_testpmd_with_multi_queue(self):
         self.vhost_pmd.execute_cmd("quit", "#", 60)
         self.check_link_status(self.virtio_user, "down")
@@ -192,7 +299,7 @@ class TestLoopbackVirtioUserServerMode(TestCase):
         self.check_link_status(self.vhost, "down")
         self.lanuch_virtio_user_testpmd_with_multi_queue(mode, extern_params)
 
-    def calculate_avg_throughput(self, case_info, cycle):
+    def calculate_avg_throughput(self, case_info, cycle, Pkt_size=True):
         """
         calculate the average throughput
         """
@@ -206,14 +313,19 @@ class TestLoopbackVirtioUserServerMode(TestCase):
             result = lines.group(1)
             results += float(result)
         Mpps = results / (1000000 * 10)
-        self.verify(Mpps > 5, "port can not receive packets")
-
         results_row.append(case_info)
-        results_row.append('64')
+        if Pkt_size:
+            self.verify(Mpps > 5, "port can not receive packets")
+            results_row.append('64')
+        else:
+            self.verify(Mpps > 1, "port can not receive packets")
+            results_row.append('8k')
+
         results_row.append(Mpps)
         results_row.append(self.queue_number)
         results_row.append(cycle)
         self.result_table_add(results_row)
+        self.logger.info(results_row)
 
     def check_packets_of_each_queue(self):
         """
@@ -247,7 +359,7 @@ class TestLoopbackVirtioUserServerMode(TestCase):
 
     def test_server_mode_launch_virtio_first(self):
         """
-        basic test for virtio-user server mode, launch virtio-user first
+        Test Case 2: basic test for split ring server mode, launch virtio-user first
         """
         self.queue_number = 1
         self.nb_cores = 1
@@ -263,7 +375,7 @@ class TestLoopbackVirtioUserServerMode(TestCase):
 
     def test_server_mode_launch_virtio11_first(self):
         """
-        basic test for virtio-user server mode, launch virtio-user first
+        Test Case 1: basic test for packed ring server mode, launch virtio-user first
         """
         self.queue_number = 1
         self.nb_cores = 1
@@ -279,7 +391,7 @@ class TestLoopbackVirtioUserServerMode(TestCase):
 
     def test_server_mode_reconnect_with_virtio11_mergeable_path(self):
         """
-        reconnect test with virtio 1.1 mergeable path and server mode
+        Test Case 8: reconnect test with virtio 1.1 mergeable path and server mode
         """
         self.queue_number = 2
         self.nb_cores = 2
@@ -288,25 +400,25 @@ class TestLoopbackVirtioUserServerMode(TestCase):
         extern_params = '--tx-offloads=0x0 --enable-hw-vlan-strip --rss-ip'
         self.lanuch_vhost_testpmd_with_multi_queue()
         self.lanuch_virtio_user_testpmd_with_multi_queue(mode=mode, extern_params=extern_params)
-        self.start_to_send_packets(self.vhost, self.virtio_user)
-        self.calculate_avg_throughput(case_info, "before reconnet")
+        self.start_to_send_8k_packets(self.vhost, self.virtio_user)
+        self.calculate_avg_throughput(case_info, "before reconnet", Pkt_size=False)
 
         # reconnect from vhost
         self.logger.info('now reconnet from vhost')
         self.relanuch_vhost_testpmd_with_multi_queue()
-        self.start_to_send_packets(self.virtio_user, self.vhost)
-        self.calculate_avg_throughput(case_info, "reconnet from vhost")
+        self.start_to_send_8k_packets(self.virtio_user, self.vhost)
+        self.calculate_avg_throughput(case_info, "reconnet from vhost", Pkt_size=False)
 
         # reconnet from virtio
         self.logger.info('now reconnet from virtio_user')
         self.relanuch_virtio_testpmd_with_multi_queue(mode=mode, extern_params=extern_params)
-        self.start_to_send_packets(self.vhost, self.virtio_user)
-        self.calculate_avg_throughput(case_info, "reconnet from virtio user")
+        self.start_to_send_8k_packets(self.vhost, self.virtio_user)
+        self.calculate_avg_throughput(case_info, "reconnet from virtio user", Pkt_size=False)
 
         # port restart
         self.logger.info('now vhost port restart')
-        self.port_restart()
-        self.calculate_avg_throughput(case_info, "after port restart")
+        self.port_restart_send_8k_packets()
+        self.calculate_avg_throughput(case_info, "after port restart", Pkt_size=False)
 
         self.result_table_print()
         self.check_packets_of_each_queue()
@@ -314,7 +426,7 @@ class TestLoopbackVirtioUserServerMode(TestCase):
 
     def test_server_mode_reconnect_with_virtio11_non_mergeable_path(self):
         """
-        reconnect test with virtio 1.1 non_mergeable path and server mode
+        Test Case 9: reconnect test with virtio 1.1 non_mergeable path and server mode
         """
         self.queue_number = 2
         self.nb_cores = 2
@@ -349,34 +461,34 @@ class TestLoopbackVirtioUserServerMode(TestCase):
 
     def test_server_mode_reconnect_with_virtio11_inorder_mergeable_path(self):
         """
-        reconnect test with virtio 1.1 inorder mergeable path and server mode
+        Test Case 10: reconnect test with virtio 1.1 inorder mergeable path and server mode
         """
-        self.queue_number = 2
+        self.queue_number = 8
         self.nb_cores = 2
         case_info = 'virtio1.1 inorder mergeable path'
         mode = "packed_vq=1,in_order=1,mrg_rxbuf=1"
         extern_params = '--tx-offloads=0x0 --enable-hw-vlan-strip --rss-ip'
         self.lanuch_vhost_testpmd_with_multi_queue()
         self.lanuch_virtio_user_testpmd_with_multi_queue(mode=mode, extern_params=extern_params)
-        self.start_to_send_packets(self.vhost, self.virtio_user)
-        self.calculate_avg_throughput(case_info, "before reconnet")
+        self.start_to_send_8k_packets(self.vhost, self.virtio_user)
+        self.calculate_avg_throughput(case_info, "before reconnet", Pkt_size=False)
 
         # reconnect from vhost
         self.logger.info('now reconnet from vhost')
         self.relanuch_vhost_testpmd_with_multi_queue()
-        self.start_to_send_packets(self.virtio_user, self.vhost)
-        self.calculate_avg_throughput(case_info, "reconnet from vhost")
+        self.start_to_send_8k_packets(self.virtio_user, self.vhost)
+        self.calculate_avg_throughput(case_info, "reconnet from vhost", Pkt_size=False)
 
         # reconnet from virtio
         self.logger.info('now reconnet from virtio_user')
         self.relanuch_virtio_testpmd_with_multi_queue(mode=mode, extern_params=extern_params)
-        self.start_to_send_packets(self.vhost, self.virtio_user)
-        self.calculate_avg_throughput(case_info, "reconnet from virtio user")
+        self.start_to_send_8k_packets(self.vhost, self.virtio_user)
+        self.calculate_avg_throughput(case_info, "reconnet from virtio user", Pkt_size=False)
 
         # port restart
         self.logger.info('now vhost port restart')
-        self.port_restart()
-        self.calculate_avg_throughput(case_info, "after port restart")
+        self.port_restart_send_8k_packets()
+        self.calculate_avg_throughput(case_info, "after port restart", Pkt_size=False)
 
         self.result_table_print()
         self.check_packets_of_each_queue()
@@ -384,11 +496,11 @@ class TestLoopbackVirtioUserServerMode(TestCase):
 
     def test_server_mode_reconnect_with_virtio11_inorder_non_mergeable_path(self):
         """
-        reconnect test with virtio 1.1 inorder non_mergeable path and server mode
+        Test Case 11: reconnect test with virtio 1.1 inorder non_mergeable path and server mode
         """
         self.queue_number = 2
         self.nb_cores = 2
-        case_info = 'virtio1.1 non_mergeable path'
+        case_info = 'virtio1.1 inorder non_mergeable path'
         mode = "packed_vq=1,in_order=1,mrg_rxbuf=0,vectorized=1"
         extern_params = '--rx-offloads=0x10 --enable-hw-vlan-strip --rss-ip'
         self.lanuch_vhost_testpmd_with_multi_queue()
@@ -419,11 +531,11 @@ class TestLoopbackVirtioUserServerMode(TestCase):
 
     def test_server_mode_reconnect_with_virtio11_inorder_vectorized_path(self):
         """
-        reconnect test with virtio 1.1 inorder non_mergeable path and server mode
+        Test Case 12: reconnect test with virtio 1.1 inorder vectorized path and server mode
         """
         self.queue_number = 2
         self.nb_cores = 2
-        case_info = 'virtio1.1 non_mergeable path'
+        case_info = 'virtio1.1 inorder vectorized path'
         mode = "packed_vq=1,in_order=1,mrg_rxbuf=0,vectorized=1"
         extern_params = '--tx-offloads=0x0 --enable-hw-vlan-strip --rss-ip'
         self.lanuch_vhost_testpmd_with_multi_queue()
@@ -454,7 +566,7 @@ class TestLoopbackVirtioUserServerMode(TestCase):
 
     def test_server_mode_reconnect_with_virtio10_inorder_mergeable_path(self):
         """
-        reconnect test with virtio 1.0 inorder mergeable path and server mode
+        Test Case 4: reconnect test with virtio 1.0 inorder mergeable path and server mode
         """
         self.queue_number = 2
         self.nb_cores = 2
@@ -463,25 +575,25 @@ class TestLoopbackVirtioUserServerMode(TestCase):
         extern_params = '--tx-offloads=0x0 --enable-hw-vlan-strip --rss-ip'
         self.lanuch_vhost_testpmd_with_multi_queue()
         self.lanuch_virtio_user_testpmd_with_multi_queue(mode=mode, extern_params=extern_params)
-        self.start_to_send_packets(self.vhost, self.virtio_user)
-        self.calculate_avg_throughput(case_info, "before reconnet")
+        self.start_to_send_8k_packets(self.vhost, self.virtio_user)
+        self.calculate_avg_throughput(case_info, "before reconnet", Pkt_size=False)
 
         # reconnet from vhost
         self.logger.info('now reconnet from vhost')
         self.relanuch_vhost_testpmd_with_multi_queue()
-        self.start_to_send_packets(self.virtio_user, self.vhost)
-        self.calculate_avg_throughput(case_info, "reconnet from vhost")
+        self.start_to_send_8k_packets(self.virtio_user, self.vhost)
+        self.calculate_avg_throughput(case_info, "reconnet from vhost", Pkt_size=False)
 
         # reconnet from virtio
         self.logger.info('now reconnet from virtio_user')
         self.relanuch_virtio_testpmd_with_multi_queue(mode=mode, extern_params=extern_params)
-        self.start_to_send_packets(self.vhost, self.virtio_user)
-        self.calculate_avg_throughput(case_info, "reconnet from virtio_user")
+        self.start_to_send_8k_packets(self.vhost, self.virtio_user)
+        self.calculate_avg_throughput(case_info, "reconnet from virtio_user", Pkt_size=False)
 
         # port restart
         self.logger.info('now vhost port restart')
-        self.port_restart()
-        self.calculate_avg_throughput(case_info, "after port restart")
+        self.port_restart_send_8k_packets()
+        self.calculate_avg_throughput(case_info, "after port restart", Pkt_size=False)
 
         self.result_table_print()
         self.check_packets_of_each_queue()
@@ -489,7 +601,7 @@ class TestLoopbackVirtioUserServerMode(TestCase):
 
     def test_server_mode_reconnect_with_virtio10_inorder_non_mergeable_path(self):
         """
-        reconnect test with virtio 1.0 inorder non_mergeable path and server mode
+        Test Case 5: reconnect test with virtio 1.0 inorder non_mergeable path and server mode
         """
         self.queue_number = 2
         self.nb_cores = 2
@@ -524,34 +636,34 @@ class TestLoopbackVirtioUserServerMode(TestCase):
 
     def test_server_mode_reconnect_with_virtio10_mergeable_path(self):
         """
-        reconnect test with virtio 1.0 mergeable path and server mode
+        Test Case 3: reconnect test with virtio 1.0 mergeable path and server mode
         """
-        self.queue_number = 2
+        self.queue_number = 8
         self.nb_cores = 2
         case_info = 'virtio1.0 mergeable path'
         mode = "in_order=0,mrg_rxbuf=1"
         extern_params = '--tx-offloads=0x0 --enable-hw-vlan-strip --rss-ip'
         self.lanuch_vhost_testpmd_with_multi_queue()
         self.lanuch_virtio_user_testpmd_with_multi_queue(mode=mode, extern_params=extern_params)
-        self.start_to_send_packets(self.vhost, self.virtio_user)
-        self.calculate_avg_throughput(case_info, "before reconnet")
+        self.start_to_send_8k_packets(self.vhost, self.virtio_user)
+        self.calculate_avg_throughput(case_info, "before reconnet", Pkt_size=False)
 
         # reconnet from vhost
         self.logger.info('now reconnet from vhost')
         self.relanuch_vhost_testpmd_with_multi_queue()
-        self.start_to_send_packets(self.virtio_user, self.vhost)
-        self.calculate_avg_throughput(case_info, "reconnet from vhost")
+        self.start_to_send_8k_packets(self.virtio_user, self.vhost)
+        self.calculate_avg_throughput(case_info, "reconnet from vhost", Pkt_size=False)
 
         # reconnet from virtio
         self.logger.info('now reconnet from virtio_user')
         self.relanuch_virtio_testpmd_with_multi_queue(mode=mode, extern_params=extern_params)
-        self.start_to_send_packets(self.vhost, self.virtio_user)
-        self.calculate_avg_throughput(case_info, "reconnet from virtio_user")
+        self.start_to_send_8k_packets(self.vhost, self.virtio_user)
+        self.calculate_avg_throughput(case_info, "reconnet from virtio_user", Pkt_size=False)
 
         # port restart
         self.logger.info('now vhost port restart')
-        self.port_restart()
-        self.calculate_avg_throughput(case_info, "after port restart")
+        self.port_restart_send_8k_packets()
+        self.calculate_avg_throughput(case_info, "after port restart", Pkt_size=False)
 
         self.result_table_print()
         self.check_packets_of_each_queue()
@@ -559,7 +671,7 @@ class TestLoopbackVirtioUserServerMode(TestCase):
 
     def test_server_mode_reconnect_with_virtio10_non_mergeable_path(self):
         """
-        reconnect test with virtio 1.0 non_mergeable path and server mode
+        Test Case 6: reconnect test with virtio 1.0 non_mergeable path and server mode
         """
         self.queue_number = 2
         self.nb_cores = 2
@@ -594,7 +706,7 @@ class TestLoopbackVirtioUserServerMode(TestCase):
 
     def test_server_mode_reconnect_with_virtio10_vector_rx_path(self):
         """
-        reconnect test with virtio 1.0 vector_rx path and server mode
+        Test Case 7: reconnect test with virtio 1.0 vector_rx path and server mode
         """
         self.queue_number = 2
         self.nb_cores = 2
@@ -626,12 +738,137 @@ class TestLoopbackVirtioUserServerMode(TestCase):
         self.check_packets_of_each_queue()
         self.close_all_testpmd()
 
+    def test_server_mode_reconnect_with_packed_and_split_mergeable_path_payload_check(self):
+        """
+        Test Case 13: loopback packed ring and split ring mergeable path payload check test using server mode and multi-queues
+        """
+        self.queue_number = 8
+        self.nb_cores = 1
+        extern_params = '--txd=1024 --rxd=1024'
+        case_info = 'packed ring mergeable inorder path'
+        mode = "mrg_rxbuf=1,in_order=1,packed_vq=1"
+
+        self.lanuch_vhost_testpmd_with_multi_queue(extern_params=extern_params, set_fwd_mac=False)
+        self.logger.info(case_info)
+        self.lanuch_virtio_user_testpmd_with_multi_queue(mode=mode, extern_params=extern_params, set_fwd_mac=False)
+        self.virtio_user_pmd.execute_cmd("set fwd csum")
+        self.virtio_user_pmd.execute_cmd("start")
+        #3. Attach pdump secondary process to primary process by same file-prefix
+        self.vuser0_port = 'net_virtio_user0'
+        self.launch_pdump_to_capture_pkt(self.vuser0_port)
+        self.start_to_send_8k_packets_csum(self.vhost)
+
+        # 5. Check that all packets in the pcap file are 8000 bytes long
+        self.pkt_len = 8000
+        self.check_packet_payload_valid(self.pkt_len)
+
+        # reconnect from vhost
+        self.relanuch_vhost_testpmd_send_8k_packets(extern_params)
+
+        # reconnect from virtio
+        self.logger.info('now reconnect from virtio_user with another path')
+        case_info = 'packed ring mergeable path'
+        mode = "mrg_rxbuf=1,in_order=0,packed_vq=1"
+        self.relanuch_virtio_testpmd_with_multi_path(mode, case_info, extern_params)
+
+        case_info = 'split ring mergeable inorder path'
+        mode = "mrg_rxbuf=1,in_order=1"
+        self.relanuch_virtio_testpmd_with_multi_path(mode, case_info, extern_params)
+
+        case_info = 'split ring mergeable path'
+        mode = "mrg_rxbuf=1,in_order=0"
+        self.relanuch_virtio_testpmd_with_multi_path(mode, case_info, extern_params)
+
+        self.close_all_testpmd()
+
+    def test_server_mode_reconnect_with_packed_and_split_mergeable_path_cbdma_payload_check(self):
+        """
+        Test Case 14: loopback packed ring and split ring mergeable path cbdma test payload check with server mode and multi-queues
+        """
+        self.cbdma_nic_dev_num = 8
+        self.get_cbdma_ports_info_and_bind_to_dpdk()
+        self.queue_number = 8
+        self.vdev = f"--vdev 'eth_vhost0,iface=vhost-net,queues={self.queue_number},client=1,dmas=[txq0@{self.cbdma_dev_infos[0]};txq1@{self.cbdma_dev_infos[1]};txq2@{self.cbdma_dev_infos[2]};txq3@{self.cbdma_dev_infos[3]};txq4@{self.cbdma_dev_infos[4]};txq5@{self.cbdma_dev_infos[5]};txq6@{self.cbdma_dev_infos[6]};txq7@{self.cbdma_dev_infos[7]}]' "
+
+        self.nb_cores = 1
+        extern_params = '--txd=1024 --rxd=1024'
+        case_info = 'packed ring mergeable inorder path'
+        mode = "mrg_rxbuf=1,in_order=1,packed_vq=1"
+
+        self.lanuch_vhost_testpmd_with_cbdma(extern_params=extern_params)
+        self.logger.info(case_info)
+        self.lanuch_virtio_user_testpmd_with_multi_queue(mode=mode, extern_params=extern_params, set_fwd_mac=False)
+        self.virtio_user_pmd.execute_cmd("set fwd csum")
+        self.virtio_user_pmd.execute_cmd("start")
+        # 3. Attach pdump secondary process to primary process by same file-prefix
+        self.vuser0_port = 'net_virtio_user0'
+        self.launch_pdump_to_capture_pkt(self.vuser0_port)
+        self.start_to_send_8k_packets_csum_cbdma(self.vhost)
+
+        # 5. Check that all packets in the pcap file are 6192 bytes long
+        self.pkt_len = 6192
+        self.check_packet_payload_valid(self.pkt_len)
+        # reconnect from vhost
+        self.relanuch_vhost_testpmd_send_8k_packets(extern_params, cbdma=True)
+
+        # reconnect from virtio
+        self.logger.info('now reconnect from virtio_user with another path')
+        case_info = 'packed ring mergeable path'
+        mode = "mrg_rxbuf=1,in_order=0,packed_vq=1"
+        self.relanuch_virtio_testpmd_with_multi_path(mode, case_info, extern_params, cbdma=True)
+
+        case_info = 'split ring mergeable inorder path'
+        mode = "mrg_rxbuf=1,in_order=1"
+        self.relanuch_virtio_testpmd_with_multi_path(mode, case_info, extern_params, cbdma=True)
+
+        case_info = 'split ring mergeable path'
+        mode = "mrg_rxbuf=1,in_order=0"
+        self.relanuch_virtio_testpmd_with_multi_path(mode, case_info, extern_params, cbdma=True)
+
+        self.close_all_testpmd()
+
+    def lanuch_vhost_testpmd_with_cbdma(self, extern_params=""):
+        """
+        start testpmd with cbdma
+        """
+        param = "--rxq={} --txq={} --nb-cores={} {}".format(self.queue_number, self.queue_number, self.nb_cores, extern_params)
+        self.vhost_pmd.start_testpmd(self.core_list_host, param=param, no_pci=False, ports=[], eal_param=self.vdev, prefix='vhost', fixed_prefix=True)
+
+    def get_cbdma_ports_info_and_bind_to_dpdk(self):
+        """
+        get all cbdma ports
+        """
+        out = self.dut.send_expect('./usertools/dpdk-devbind.py --status-dev dma', '# ', 30)
+        device_info = out.split('\n')
+        for device in device_info:
+            pci_info = re.search('\s*(0000:\S*:\d*.\d*)', device)
+            if pci_info is not None:
+                dev_info = pci_info.group(1)
+                # the numa id of ioat dev, only add the device which on same socket with nic dev
+                bus = int(dev_info[5:7], base=16)
+                if bus >= 128:
+                    cur_socket = 1
+                else:
+                    cur_socket = 0
+                if self.ports_socket == cur_socket:
+                    self.cbdma_dev_infos.append(pci_info.group(1))
+        self.verify(len(self.cbdma_dev_infos) >= 8, 'There are not enough CBDMA devices to run this suite')
+        self.device_str = ' '.join(self.cbdma_dev_infos[0:self.cbdma_nic_dev_num])
+        self.dut.send_expect('./usertools/dpdk-devbind.py --force --bind=%s %s' % (self.drivername, self.device_str), '# ', 60)
+
+    def bind_cbdma_device_to_kernel(self):
+        if self.device_str is not None:
+            self.dut.send_expect('modprobe ioatdma', '# ')
+            self.dut.send_expect('./usertools/dpdk-devbind.py -u %s' % self.device_str, '# ', 30)
+            self.dut.send_expect('./usertools/dpdk-devbind.py --force --bind=ioatdma  %s' % self.device_str, '# ', 60)
+
     def tear_down(self):
         """
         Run after each test case.
         """
         self.dut.send_expect("killall -s INT %s" % self.testpmd_name, "#")
         self.close_all_session()
+        self.bind_cbdma_device_to_kernel()
         time.sleep(2)
 
     def tear_down_all(self):
-- 
2.33.1


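The `get_cbdma_ports_info_and_bind_to_dpdk` helper in the patch above derives the NUMA socket of each CBDMA (IOAT) device from its PCI bus number: on the two-socket systems this suite targets, bus numbers of 0x80 and above sit on socket 1. A minimal standalone sketch of that parsing logic, under the assumption that `dpdk-devbind.py --status-dev dma` prints one device per line (the sample output below is invented for illustration):

```python
import re

def cbdma_devices_on_socket(devbind_output: str, nic_socket: int) -> list:
    """Parse `dpdk-devbind.py --status-dev dma` output and keep only the
    CBDMA devices on the same NUMA socket as the NIC under test."""
    selected = []
    for line in devbind_output.splitlines():
        match = re.search(r'\s*(0000:\S*:\d*\.\d*)', line)
        if match is None:
            continue
        bdf = match.group(1)
        # Heuristic used by the patch: PCI bus >= 0x80 lives on socket 1.
        bus = int(bdf[5:7], base=16)
        socket = 1 if bus >= 128 else 0
        if socket == nic_socket:
            selected.append(bdf)
    return selected

# Hypothetical devbind output (two devices per socket).
sample = """0000:00:04.0 'Sky Lake-E CBDMA Registers' drv=ioatdma
0000:80:04.0 'Sky Lake-E CBDMA Registers' drv=ioatdma
0000:00:04.1 'Sky Lake-E CBDMA Registers' drv=ioatdma
0000:80:04.1 'Sky Lake-E CBDMA Registers' drv=ioatdma"""
print(cbdma_devices_on_socket(sample, 0))  # -> ['0000:00:04.0', '0000:00:04.1']
```

The test case then binds the selected devices to the DPDK driver and, in `tear_down`, rebinds them to the kernel `ioatdma` driver.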
^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: [dts] [PATCH V1 1/2] test_plans/loopback_virtio_user_server_mode: add two new cases
  2021-11-05 14:38   ` Tu, Lijuan
@ 2021-11-08  6:31     ` Chen, LingliX
  2021-11-09  9:14       ` Tu, Lijuan
  0 siblings, 1 reply; 10+ messages in thread
From: Chen, LingliX @ 2021-11-08  6:31 UTC (permalink / raw)
  To: Tu, Lijuan, dts; +Cc: Wang, Yinan


> -----Original Message-----
> From: Tu, Lijuan <lijuan.tu@intel.com>
> Sent: Friday, November 5, 2021 10:39 PM
> To: Chen, LingliX <linglix.chen@intel.com>; dts@dpdk.org
> Cc: Wang, Yinan <yinan.wang@intel.com>; Chen, LingliX
> <linglix.chen@intel.com>
> Subject: RE: [dts] [PATCH V1 1/2]
> test_plans/loopback_virtio_user_server_mode: add two new cases
> 
> > -----Original Message-----
> > From: dts <dts-bounces@dpdk.org> On Behalf Of Lingli Chen
> > Sent: November 5, 2021 23:07
> > To: dts@dpdk.org
> > Cc: Wang, Yinan <yinan.wang@intel.com>; Chen, LingliX
> > <linglix.chen@intel.com>
> > Subject: [dts] [PATCH V1 1/2] test_plans/loopback_virtio_user_server_mode:
> > add two new cases
> >
> > 1. Case 3, Case 10 change to 8 queues.
> > 2. Case 4, Case 8 add 8k packets.
> > 3. Add 2 new cases: case 13, 14.
> 
> Could you please explain why you made these changes?

These changes are for improving case coverage.

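For context on the 8k-packet additions mentioned in the changelog above: testpmd's `set txpkts` command chains one segment of each listed size into every transmitted packet, so the packet lengths that the test plan later verifies in the pcap follow directly from the segment lists used in the patch. A quick sketch of that arithmetic (the segment lists are copied from the patch; the helper name is invented):

```python
def expected_pkt_len(txpkts: str) -> int:
    """Total on-wire packet length produced by `set txpkts <a,b,...>`:
    testpmd chains one segment of each listed size into a single packet."""
    return sum(int(seg) for seg in txpkts.split(','))

# Non-CBDMA cases use four 2000-byte segments -> 8000-byte packets.
print(expected_pkt_len("2000,2000,2000,2000"))
# The CBDMA case mixes small and large segments -> 6192-byte packets.
print(expected_pkt_len("64,64,64,2000,2000,2000"))
```

These totals match the `pkt_len` values (8000 and 6192) asserted by `check_packet_payload_valid` in cases 13 and 14.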
^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: [dts] [PATCH V1 1/2] test_plans/loopback_virtio_user_server_mode: add two new cases
  2021-11-08  6:31     ` Chen, LingliX
@ 2021-11-09  9:14       ` Tu, Lijuan
  0 siblings, 0 replies; 10+ messages in thread
From: Tu, Lijuan @ 2021-11-09  9:14 UTC (permalink / raw)
  To: Chen, LingliX, dts; +Cc: Wang, Yinan



> -----Original Message-----
> From: Chen, LingliX <linglix.chen@intel.com>
> Sent: November 8, 2021 14:31
> To: Tu, Lijuan <lijuan.tu@intel.com>; dts@dpdk.org
> Cc: Wang, Yinan <yinan.wang@intel.com>
> Subject: RE: [dts] [PATCH V1 1/2]
> test_plans/loopback_virtio_user_server_mode: add two new cases
> 
> 
> > -----Original Message-----
> > From: Tu, Lijuan <lijuan.tu@intel.com>
> > Sent: Friday, November 5, 2021 10:39 PM
> > To: Chen, LingliX <linglix.chen@intel.com>; dts@dpdk.org
> > Cc: Wang, Yinan <yinan.wang@intel.com>; Chen, LingliX
> > <linglix.chen@intel.com>
> > Subject: RE: [dts] [PATCH V1 1/2]
> > test_plans/loopback_virtio_user_server_mode: add two new cases
> >
> > > -----Original Message-----
> > > From: dts <dts-bounces@dpdk.org> On Behalf Of Lingli Chen
> > > Sent: November 5, 2021 23:07
> > > To: dts@dpdk.org
> > > Cc: Wang, Yinan <yinan.wang@intel.com>; Chen, LingliX
> > > <linglix.chen@intel.com>
> > > Subject: [dts] [PATCH V1 1/2] test_plans/loopback_virtio_user_server_mode:
> > > add two new cases
> > >
> > > 1. Case 3, Case 10 change to 8 queues.
> > > 2. Case 4, Case 8 add 8k packets.
> > > 3. Add 2 new cases: case 13, 14.
> >
> > Could you please explain why you made these changes?
> 
> These changes are for improving case coverage.

Please split the new cases into a separate patch; they serve a different purpose.


^ permalink raw reply	[flat|nested] 10+ messages in thread

* RE: [dts][PATCH V1 2/2] tests/loopback_virtio_user_server_mode: add two new cases
  2021-11-05 15:07 ` [dts] [PATCH V1 2/2] tests/loopback_virtio_user_server_mode: " Lingli Chen
  2021-11-05  7:17   ` Chen, LingliX
@ 2021-11-22  8:32   ` Wang, Yinan
  1 sibling, 0 replies; 10+ messages in thread
From: Wang, Yinan @ 2021-11-22  8:32 UTC (permalink / raw)
  To: Chen, LingliX, dts

Acked-by:  Yinan Wang <yinan.wang@intel.com>

> -----Original Message-----
> From: Chen, LingliX <linglix.chen@intel.com>
> Sent: November 5, 2021 23:07
> To: dts@dpdk.org
> Cc: Wang, Yinan <yinan.wang@intel.com>; Chen, LingliX
> <linglix.chen@intel.com>
> Subject: [dts][PATCH V1 2/2] tests/loopback_virtio_user_server_mode: add
> two new cases
> 
> 1. Add 2 new cases: case 13, 14.
> 2. Modify case 3, 4, 8, 10 sync with testplan.
> 
> Signed-off-by: Lingli Chen <linglix.chen@intel.com>
> ---
>  ...tSuite_loopback_virtio_user_server_mode.py | 353 +++++++++++++++---
>  1 file changed, 295 insertions(+), 58 deletions(-)
> 
> diff --git a/tests/TestSuite_loopback_virtio_user_server_mode.py
> b/tests/TestSuite_loopback_virtio_user_server_mode.py
> index 7fbfe804..119b178f 100644
> --- a/tests/TestSuite_loopback_virtio_user_server_mode.py
> +++ b/tests/TestSuite_loopback_virtio_user_server_mode.py
> @@ -37,11 +37,10 @@ Test loopback virtio-user server mode
>  """
>  import re
>  import time
> -
>  import framework.utils as utils
>  from framework.pmd_output import PmdOutput
>  from framework.test_case import TestCase
> -
> +from framework.packet import Packet
> 
>  class TestLoopbackVirtioUserServerMode(TestCase):
> 
> @@ -61,6 +60,12 @@ class TestLoopbackVirtioUserServerMode(TestCase):
>          self.core_list_host = self.core_list[3:6]
>          self.path=self.dut.apps_name['test-pmd']
>          self.testpmd_name = self.path.split("/")[-1]
> +        self.app_pdump = self.dut.apps_name['pdump']
> +        self.dump_pcap = "/root/pdump-rx.pcap"
> +        self.device_str = ''
> +        self.dut_ports = self.dut.get_ports()
> +        self.ports_socket = self.dut.get_numa_id(self.dut_ports[0])
> +        self.cbdma_dev_infos = []
> 
>      def set_up(self):
>          """
> @@ -108,13 +113,17 @@ class
> TestLoopbackVirtioUserServerMode(TestCase):
>          if set_fwd_mac:
>              self.virtio_user_pmd.execute_cmd("set fwd mac", "testpmd> ", 120)
> 
> -    def lanuch_vhost_testpmd_with_multi_queue(self, extern_params=""):
> +    def lanuch_vhost_testpmd_with_multi_queue(self, extern_params="",
> set_fwd_mac=True):
>          """
>          start testpmd with multi qeueue
>          """
> -        self.lanuch_vhost_testpmd(self.queue_number, self.nb_cores,
> extern_params=extern_params)
> +        eal_params = "--vdev 'eth_vhost0,iface=vhost-
> net,client=1,queues={}'".format(self.queue_number)
> +        param = "--rxq={} --txq={} --nb-cores={} {}".format(self.queue_number,
> self.queue_number, self.nb_cores, extern_params)
> +        self.vhost_pmd.start_testpmd(self.core_list_host, param=param,
> no_pci=True, ports=[], eal_param=eal_params, prefix='vhost',
> fixed_prefix=True)
> +        if set_fwd_mac:
> +            self.vhost_pmd.execute_cmd("set fwd mac", "testpmd> ", 120)
> 
> -    def lanuch_virtio_user_testpmd_with_multi_queue(self, mode,
> extern_params=""):
> +    def lanuch_virtio_user_testpmd_with_multi_queue(self, mode,
> extern_params="", set_fwd_mac=True):
>          """
>          start testpmd of vhost user
>          """
> @@ -126,7 +135,8 @@ class TestLoopbackVirtioUserServerMode(TestCase):
>          param = "{} --nb-cores={} --rxq={} --txq={}".format(extern_params,
> self.nb_cores, self.queue_number, self.queue_number)
>          self.virtio_user_pmd.start_testpmd(cores=self.core_list_user,
> param=param, eal_param=eal_param, \
>                  no_pci=True, ports=[], prefix="virtio", fixed_prefix=True)
> -        self.virtio_user_pmd.execute_cmd("set fwd mac", "testpmd> ", 120)
> +        if set_fwd_mac:
> +            self.virtio_user_pmd.execute_cmd("set fwd mac", "testpmd> ", 120)
> 
>      def start_to_send_packets(self, session_rx, session_tx):
>          """
> @@ -136,6 +146,35 @@ class TestLoopbackVirtioUserServerMode(TestCase):
>          session_rx.send_command("start", 3)
>          session_tx.send_expect("start tx_first 32", "testpmd> ", 30)
> 
> +    def start_to_send_8k_packets(self, session_rx, session_tx):
> +        """
> +        start the testpmd of vhost-user and virtio-user
> +        start to send 8k packets
> +        """
> +        session_rx.send_command("start", 3)
> +        session_tx.send_expect("set txpkts 2000,2000,2000,2000", "testpmd>
> ", 30)
> +        session_tx.send_expect("start tx_first 32", "testpmd> ", 30)
> +
> +    def start_to_send_8k_packets_csum(self, session_tx):
> +        """
> +        start the testpmd of vhost-user, start to send 8k packets
> +        """
> +        session_tx.send_expect("set fwd csum", "testpmd> ", 30)
> +        session_tx.send_expect("set txpkts 2000,2000,2000,2000", "testpmd>
> ", 30)
> +        session_tx.send_expect("set burst 1", "testpmd> ", 30)
> +        session_tx.send_expect("start tx_first 1", "testpmd> ", 10)
> +        session_tx.send_expect("stop", "testpmd> ", 30)
> +
> +    def start_to_send_8k_packets_csum_cbdma(self, session_tx):
> +        """
> +        start the testpmd of vhost-user, start to send 8k packets
> +        """
> +        session_tx.send_expect("vhost enable tx all", "testpmd> ", 30)
> +        session_tx.send_expect("set fwd csum", "testpmd> ", 30)
> +        session_tx.send_expect("set txpkts 64,64,64,2000,2000,2000",
> "testpmd> ", 30)
> +        session_tx.send_expect("start tx_first 32", "testpmd> ", 5)
> +        session_tx.send_expect("stop", "testpmd> ", 30)
> +
>      def check_port_throughput_after_port_stop(self):
>          """
>          check the throughput after port stop
> @@ -182,6 +221,74 @@ class TestLoopbackVirtioUserServerMode(TestCase):
>          self.check_port_link_status_after_port_restart()
>          self.vhost_pmd.execute_cmd("start tx_first 32", "testpmd> ", 120)
> 
> +    def port_restart_send_8k_packets(self):
> +        self.vhost_pmd.execute_cmd("stop", "testpmd> ", 120)
> +        self.vhost_pmd.execute_cmd("port stop 0", "testpmd> ", 120)
> +        self.check_port_throughput_after_port_stop()
> +        self.vhost_pmd.execute_cmd("clear port stats all", "testpmd> ", 120)
> +        self.vhost_pmd.execute_cmd("port start 0", "testpmd> ", 120)
> +        self.check_port_link_status_after_port_restart()
> +        self.vhost_pmd.execute_cmd("set txpkts 2000,2000,2000,2000",
> "testpmd> ", 120)
> +        self.vhost_pmd.execute_cmd("start tx_first 32", "testpmd> ", 120)
> +
> +    def launch_pdump_to_capture_pkt(self, dump_port):
> +        """
> +        bootup pdump in dut
> +        """
> +        self.pdump_session = self.dut.new_session(suite="pdump")
> +        cmd = self.app_pdump + " " + \
> +                "-v --file-prefix=virtio -- " + \
> +                "--pdump  'device_id=%s,queue=*,rx-dev=%s,mbuf-size=8000'"
> +        self.pdump_session.send_expect(cmd % (dump_port,
> self.dump_pcap), 'Port')
> +
> +    def check_packet_payload_valid(self, pkt_len):
> +        """
> +        check the payload is valid
> +        """
> +        self.pdump_session.send_expect('^c', '# ', 60)
> +        time.sleep(3)
> +        self.dut.session.copy_file_from(src="%s" % self.dump_pcap, dst="%s" %
> self.dump_pcap)
> +        pkt = Packet()
> +        pkts = pkt.read_pcapfile(self.dump_pcap)
> +        data = str(pkts[0]['Raw'])
> +
> +        for i in range(len(pkts)):
> +            self.verify(len(pkts[i]) == pkt_len, "virtio-user0 receive packet's
> length not equal %s Byte" %pkt_len)
> +            value = str(pkts[i]['Raw'])
> +            self.verify(data == value, "the payload in receive packets has been
> changed from %s" %i)
> +        self.dut.send_expect("rm -rf %s" % self.dump_pcap, "#")
> +
> +    def relanuch_vhost_testpmd_send_8k_packets(self, extern_params,
> cbdma=False):
> +
> +        self.vhost_pmd.execute_cmd("quit", "#", 60)
> +        self.logger.info('now reconnet from vhost')
> +        if cbdma:
> +
> self.lanuch_vhost_testpmd_with_cbdma(extern_params=extern_params)
> +        else:
> +
> self.lanuch_vhost_testpmd_with_multi_queue(extern_params=extern_para
> ms, set_fwd_mac=False)
> +        self.launch_pdump_to_capture_pkt(self.vuser0_port)
> +        if cbdma:
> +            self.start_to_send_8k_packets_csum_cbdma(self.vhost)
> +        else:
> +            self.start_to_send_8k_packets_csum(self.vhost)
> +        self.check_packet_payload_valid(self.pkt_len)
> +
> +    def relanuch_virtio_testpmd_with_multi_path(self, mode, case_info,
> extern_params, cbdma=False):
> +
> +        self.virtio_user_pmd.execute_cmd("quit", "#", 60)
> +        self.logger.info(case_info)
> +        self.lanuch_virtio_user_testpmd_with_multi_queue(mode=mode,
> extern_params=extern_params, set_fwd_mac=False)
> +        self.virtio_user_pmd.execute_cmd("set fwd csum")
> +        self.virtio_user_pmd.execute_cmd("start")
> +        self.launch_pdump_to_capture_pkt(self.vuser0_port)
> +        if cbdma:
> +            self.start_to_send_8k_packets_csum_cbdma(self.vhost)
> +        else:
> +            self.start_to_send_8k_packets_csum(self.vhost)
> +        self.check_packet_payload_valid(self.pkt_len)
> +
> +        self.relanuch_vhost_testpmd_send_8k_packets(extern_params,
> cbdma)
> +
>      def relanuch_vhost_testpmd_with_multi_queue(self):
>          self.vhost_pmd.execute_cmd("quit", "#", 60)
>          self.check_link_status(self.virtio_user, "down")
> @@ -192,7 +299,7 @@ class TestLoopbackVirtioUserServerMode(TestCase):
>          self.check_link_status(self.vhost, "down")
>          self.lanuch_virtio_user_testpmd_with_multi_queue(mode,
> extern_params)
> 
> -    def calculate_avg_throughput(self, case_info, cycle):
> +    def calculate_avg_throughput(self, case_info, cycle, Pkt_size=True):
>          """
>          calculate the average throughput
>          """
> @@ -206,14 +313,19 @@ class
> TestLoopbackVirtioUserServerMode(TestCase):
>              result = lines.group(1)
>              results += float(result)
>          Mpps = results / (1000000 * 10)
> -        self.verify(Mpps > 5, "port can not receive packets")
> -
>          results_row.append(case_info)
> -        results_row.append('64')
> +        if Pkt_size:
> +            self.verify(Mpps > 5, "port can not receive packets")
> +            results_row.append('64')
> +        else:
> +            self.verify(Mpps > 1, "port can not receive packets")
> +            results_row.append('8k')
> +
>          results_row.append(Mpps)
>          results_row.append(self.queue_number)
>          results_row.append(cycle)
>          self.result_table_add(results_row)
> +        self.logger.info(results_row)
> 
>      def check_packets_of_each_queue(self):
>          """
> @@ -247,7 +359,7 @@ class TestLoopbackVirtioUserServerMode(TestCase):
> 
>      def test_server_mode_launch_virtio_first(self):
>          """
> -        basic test for virtio-user server mode, launch virtio-user first
> +        Test Case 2: basic test for split ring server mode, launch virtio-user
> first
>          """
>          self.queue_number = 1
>          self.nb_cores = 1
> @@ -263,7 +375,7 @@ class TestLoopbackVirtioUserServerMode(TestCase):
> 
>      def test_server_mode_launch_virtio11_first(self):
>          """
> -        basic test for virtio-user server mode, launch virtio-user first
> +        Test Case 1: basic test for packed ring server mode, launch virtio-user
> first
>          """
>          self.queue_number = 1
>          self.nb_cores = 1
> @@ -279,7 +391,7 @@ class TestLoopbackVirtioUserServerMode(TestCase):
> 
>      def test_server_mode_reconnect_with_virtio11_mergeable_path(self):
>          """
> -        reconnect test with virtio 1.1 mergeable path and server mode
> +        Test Case 8: reconnect test with virtio 1.1 mergeable path and server
> mode
>          """
>          self.queue_number = 2
>          self.nb_cores = 2
> @@ -288,25 +400,25 @@ class
> TestLoopbackVirtioUserServerMode(TestCase):
>          extern_params = '--tx-offloads=0x0 --enable-hw-vlan-strip --rss-ip'
>          self.lanuch_vhost_testpmd_with_multi_queue()
>          self.lanuch_virtio_user_testpmd_with_multi_queue(mode=mode,
> extern_params=extern_params)
> -        self.start_to_send_packets(self.vhost, self.virtio_user)
> -        self.calculate_avg_throughput(case_info, "before reconnet")
> +        self.start_to_send_8k_packets(self.vhost, self.virtio_user)
> +        self.calculate_avg_throughput(case_info, "before reconnet",
> Pkt_size=False)
> 
>          # reconnect from vhost
>          self.logger.info('now reconnet from vhost')
>          self.relanuch_vhost_testpmd_with_multi_queue()
> -        self.start_to_send_packets(self.virtio_user, self.vhost)
> -        self.calculate_avg_throughput(case_info, "reconnet from vhost")
> +        self.start_to_send_8k_packets(self.virtio_user, self.vhost)
> +        self.calculate_avg_throughput(case_info, "reconnet from vhost",
> Pkt_size=False)
> 
>          # reconnet from virtio
>          self.logger.info('now reconnet from virtio_user')
>          self.relanuch_virtio_testpmd_with_multi_queue(mode=mode,
> extern_params=extern_params)
> -        self.start_to_send_packets(self.vhost, self.virtio_user)
> -        self.calculate_avg_throughput(case_info, "reconnet from virtio user")
> +        self.start_to_send_8k_packets(self.vhost, self.virtio_user)
> +        self.calculate_avg_throughput(case_info, "reconnet from virtio user",
> Pkt_size=False)
> 
>          # port restart
>          self.logger.info('now vhost port restart')
> -        self.port_restart()
> -        self.calculate_avg_throughput(case_info, "after port restart")
> +        self.port_restart_send_8k_packets()
> +        self.calculate_avg_throughput(case_info, "after port restart",
> Pkt_size=False)
> 
>          self.result_table_print()
>          self.check_packets_of_each_queue()
> @@ -314,7 +426,7 @@ class TestLoopbackVirtioUserServerMode(TestCase):
> 
>      def
> test_server_mode_reconnect_with_virtio11_non_mergeable_path(self):
>          """
> -        reconnect test with virtio 1.1 non_mergeable path and server mode
> +        Test Case 9: reconnect test with virtio 1.1 non_mergeable path and
> server mode
>          """
>          self.queue_number = 2
>          self.nb_cores = 2
> @@ -349,34 +461,34 @@ class
> TestLoopbackVirtioUserServerMode(TestCase):
> 
>      def
> test_server_mode_reconnect_with_virtio11_inorder_mergeable_path(self):
>          """
> -        reconnect test with virtio 1.1 inorder mergeable path and server mode
> +        Test Case 10: reconnect test with virtio 1.1 inorder mergeable path
> and server mode
>          """
> -        self.queue_number = 2
> +        self.queue_number = 8
>          self.nb_cores = 2
>          case_info = 'virtio1.1 inorder mergeable path'
>          mode = "packed_vq=1,in_order=1,mrg_rxbuf=1"
>          extern_params = '--tx-offloads=0x0 --enable-hw-vlan-strip --rss-ip'
>          self.lanuch_vhost_testpmd_with_multi_queue()
>          self.lanuch_virtio_user_testpmd_with_multi_queue(mode=mode,
> extern_params=extern_params)
> -        self.start_to_send_packets(self.vhost, self.virtio_user)
> -        self.calculate_avg_throughput(case_info, "before reconnet")
> +        self.start_to_send_8k_packets(self.vhost, self.virtio_user)
> +        self.calculate_avg_throughput(case_info, "before reconnet",
> Pkt_size=False)
> 
>          # reconnect from vhost
>          self.logger.info('now reconnet from vhost')
>          self.relanuch_vhost_testpmd_with_multi_queue()
> -        self.start_to_send_packets(self.virtio_user, self.vhost)
> -        self.calculate_avg_throughput(case_info, "reconnet from vhost")
> +        self.start_to_send_8k_packets(self.virtio_user, self.vhost)
> +        self.calculate_avg_throughput(case_info, "reconnet from vhost",
> Pkt_size=False)
> 
>          # reconnet from virtio
>          self.logger.info('now reconnet from virtio_user')
>          self.relanuch_virtio_testpmd_with_multi_queue(mode=mode,
> extern_params=extern_params)
> -        self.start_to_send_packets(self.vhost, self.virtio_user)
> -        self.calculate_avg_throughput(case_info, "reconnet from virtio user")
> +        self.start_to_send_8k_packets(self.vhost, self.virtio_user)
> +        self.calculate_avg_throughput(case_info, "reconnet from virtio user",
> Pkt_size=False)
> 
>          # port restart
>          self.logger.info('now vhost port restart')
> -        self.port_restart()
> -        self.calculate_avg_throughput(case_info, "after port restart")
> +        self.port_restart_send_8k_packets()
> +        self.calculate_avg_throughput(case_info, "after port restart",
> Pkt_size=False)
> 
>          self.result_table_print()
>          self.check_packets_of_each_queue()
> @@ -384,11 +496,11 @@ class
> TestLoopbackVirtioUserServerMode(TestCase):
> 
>      def
> test_server_mode_reconnect_with_virtio11_inorder_non_mergeable_path(
> self):
>          """
> -        reconnect test with virtio 1.1 inorder non_mergeable path and server
> mode
> +        Test Case 11: reconnect test with virtio 1.1 inorder non_mergeable
> path and server mode
>          """
>          self.queue_number = 2
>          self.nb_cores = 2
> -        case_info = 'virtio1.1 non_mergeable path'
> +        case_info = 'virtio1.1 inorder non_mergeable path'
>          mode = "packed_vq=1,in_order=1,mrg_rxbuf=0,vectorized=1"
>          extern_params = '--rx-offloads=0x10 --enable-hw-vlan-strip --rss-ip'
>          self.lanuch_vhost_testpmd_with_multi_queue()
> @@ -419,11 +531,11 @@ class
> TestLoopbackVirtioUserServerMode(TestCase):
> 
>      def
> test_server_mode_reconnect_with_virtio11_inorder_vectorized_path(self):
>          """
> -        reconnect test with virtio 1.1 inorder non_mergeable path and server
> mode
> +        Test Case 12: reconnect test with virtio 1.1 inorder vectorized path
> and server mode
>          """
>          self.queue_number = 2
>          self.nb_cores = 2
> -        case_info = 'virtio1.1 non_mergeable path'
> +        case_info = 'virtio1.1 inorder vectorized path'
>          mode = "packed_vq=1,in_order=1,mrg_rxbuf=0,vectorized=1"
>          extern_params = '--tx-offloads=0x0 --enable-hw-vlan-strip --rss-ip'
>          self.lanuch_vhost_testpmd_with_multi_queue()
> @@ -454,7 +566,7 @@ class TestLoopbackVirtioUserServerMode(TestCase):
> 
>      def
> test_server_mode_reconnect_with_virtio10_inorder_mergeable_path(self):
>          """
> -        reconnect test with virtio 1.0 inorder mergeable path and server mode
> +        Test Case 4: reconnect test with virtio 1.0 inorder mergeable path and
> server mode
>          """
>          self.queue_number = 2
>          self.nb_cores = 2
> @@ -463,25 +575,25 @@ class
> TestLoopbackVirtioUserServerMode(TestCase):
>          extern_params = '--tx-offloads=0x0 --enable-hw-vlan-strip --rss-ip'
>          self.lanuch_vhost_testpmd_with_multi_queue()
>          self.lanuch_virtio_user_testpmd_with_multi_queue(mode=mode,
> extern_params=extern_params)
> -        self.start_to_send_packets(self.vhost, self.virtio_user)
> -        self.calculate_avg_throughput(case_info, "before reconnet")
> +        self.start_to_send_8k_packets(self.vhost, self.virtio_user)
> +        self.calculate_avg_throughput(case_info, "before reconnet",
> Pkt_size=False)
> 
>          # reconnet from vhost
>          self.logger.info('now reconnet from vhost')
>          self.relanuch_vhost_testpmd_with_multi_queue()
> -        self.start_to_send_packets(self.virtio_user, self.vhost)
> -        self.calculate_avg_throughput(case_info, "reconnet from vhost")
> +        self.start_to_send_8k_packets(self.virtio_user, self.vhost)
> +        self.calculate_avg_throughput(case_info, "reconnet from vhost",
> Pkt_size=False)
> 
>          # reconnet from virtio
>          self.logger.info('now reconnet from virtio_user')
>          self.relanuch_virtio_testpmd_with_multi_queue(mode=mode,
> extern_params=extern_params)
> -        self.start_to_send_packets(self.vhost, self.virtio_user)
> -        self.calculate_avg_throughput(case_info, "reconnet from virtio_user")
> +        self.start_to_send_8k_packets(self.vhost, self.virtio_user)
> +        self.calculate_avg_throughput(case_info, "reconnet from virtio_user",
> Pkt_size=False)
> 
>          # port restart
>          self.logger.info('now vhost port restart')
> -        self.port_restart()
> -        self.calculate_avg_throughput(case_info, "after port restart")
> +        self.port_restart_send_8k_packets()
> +        self.calculate_avg_throughput(case_info, "after port restart",
> Pkt_size=False)
> 
>          self.result_table_print()
>          self.check_packets_of_each_queue()
> @@ -489,7 +601,7 @@ class TestLoopbackVirtioUserServerMode(TestCase):
> 
>      def
> test_server_mode_reconnect_with_virtio10_inorder_non_mergeable_path(
> self):
>          """
> -        reconnect test with virtio 1.0 inorder non_mergeable path and server
> mode
> +        Test Case 5: reconnect test with virtio 1.0 inorder non_mergeable
> path and server mode
>          """
>          self.queue_number = 2
>          self.nb_cores = 2
> @@ -524,34 +636,34 @@ class
> TestLoopbackVirtioUserServerMode(TestCase):
> 
>      def test_server_mode_reconnect_with_virtio10_mergeable_path(self):
>          """
> -        reconnect test with virtio 1.0 mergeable path and server mode
> +        Test Case 3: reconnect test with virtio 1.0 mergeable path and server
> mode
>          """
> -        self.queue_number = 2
> +        self.queue_number = 8
>          self.nb_cores = 2
>          case_info = 'virtio1.0 mergeable path'
>          mode = "in_order=0,mrg_rxbuf=1"
>          extern_params = '--tx-offloads=0x0 --enable-hw-vlan-strip --rss-ip'
>          self.lanuch_vhost_testpmd_with_multi_queue()
>          self.lanuch_virtio_user_testpmd_with_multi_queue(mode=mode,
> extern_params=extern_params)
> -        self.start_to_send_packets(self.vhost, self.virtio_user)
> -        self.calculate_avg_throughput(case_info, "before reconnet")
> +        self.start_to_send_8k_packets(self.vhost, self.virtio_user)
> +        self.calculate_avg_throughput(case_info, "before reconnet",
> Pkt_size=False)
> 
>          # reconnet from vhost
>          self.logger.info('now reconnet from vhost')
>          self.relanuch_vhost_testpmd_with_multi_queue()
> -        self.start_to_send_packets(self.virtio_user, self.vhost)
> -        self.calculate_avg_throughput(case_info, "reconnet from vhost")
> +        self.start_to_send_8k_packets(self.virtio_user, self.vhost)
> +        self.calculate_avg_throughput(case_info, "reconnet from vhost",
> Pkt_size=False)
> 
>          # reconnet from virtio
>          self.logger.info('now reconnet from virtio_user')
>          self.relanuch_virtio_testpmd_with_multi_queue(mode=mode,
> extern_params=extern_params)
> -        self.start_to_send_packets(self.vhost, self.virtio_user)
> -        self.calculate_avg_throughput(case_info, "reconnet from virtio_user")
> +        self.start_to_send_8k_packets(self.vhost, self.virtio_user)
> +        self.calculate_avg_throughput(case_info, "reconnet from virtio_user",
> Pkt_size=False)
> 
>          # port restart
>          self.logger.info('now vhost port restart')
> -        self.port_restart()
> -        self.calculate_avg_throughput(case_info, "after port restart")
> +        self.port_restart_send_8k_packets()
> +        self.calculate_avg_throughput(case_info, "after port restart",
> Pkt_size=False)
> 
>          self.result_table_print()
>          self.check_packets_of_each_queue()
> @@ -559,7 +671,7 @@ class TestLoopbackVirtioUserServerMode(TestCase):
> 
>      def
> test_server_mode_reconnect_with_virtio10_non_mergeable_path(self):
>          """
> -        reconnect test with virtio 1.0 non_mergeable path and server mode
> +        Test Case 6: reconnect test with virtio 1.0 non_mergeable path and
> server mode
>          """
>          self.queue_number = 2
>          self.nb_cores = 2
> @@ -594,7 +706,7 @@ class TestLoopbackVirtioUserServerMode(TestCase):
> 
>      def test_server_mode_reconnect_with_virtio10_vector_rx_path(self):
>          """
> -        reconnect test with virtio 1.0 vector_rx path and server mode
> +        Test Case 7: reconnect test with virtio 1.0 vector_rx path and server
> mode
>          """
>          self.queue_number = 2
>          self.nb_cores = 2
> @@ -626,12 +738,137 @@ class
> TestLoopbackVirtioUserServerMode(TestCase):
>          self.check_packets_of_each_queue()
>          self.close_all_testpmd()
> 
> +    def
> test_server_mode_reconnect_with_packed_and_split_mergeable_path_pay
> load_check(self):
> +        """
> +        Test Case 13: loopback packed ring and split ring mergeable path
> payload check test using server mode and multi-queues
> +        """
> +        self.queue_number = 8
> +        self.nb_cores = 1
> +        extern_params = '--txd=1024 --rxd=1024'
> +        case_info = 'packed ring mergeable inorder path'
> +        mode = "mrg_rxbuf=1,in_order=1,packed_vq=1"
> +
> +
> self.lanuch_vhost_testpmd_with_multi_queue(extern_params=extern_para
> ms, set_fwd_mac=False)
> +        self.logger.info(case_info)
> +        self.lanuch_virtio_user_testpmd_with_multi_queue(mode=mode,
> extern_params=extern_params, set_fwd_mac=False)
> +        self.virtio_user_pmd.execute_cmd("set fwd csum")
> +        self.virtio_user_pmd.execute_cmd("start")
> +        #3. Attach pdump secondary process to primary process by same file-
> prefix
> +        self.vuser0_port = 'net_virtio_user0'
> +        self.launch_pdump_to_capture_pkt(self.vuser0_port)
> +        self.start_to_send_8k_packets_csum(self.vhost)
> +
> +        #5. Check all the packets length is 8000 Byte in the pcap file
> +        self.pkt_len = 8000
> +        self.check_packet_payload_valid(self.pkt_len)
> +
> +        # reconnet from vhost
> +        self.relanuch_vhost_testpmd_send_8k_packets(extern_params)
> +
> +        # reconnet from virtio
> +        self.logger.info('now reconnet from virtio_user with other path')
> +        case_info = 'packed ring mergeable path'
> +        mode = "mrg_rxbuf=1,in_order=0,packed_vq=1"
> +        self.relanuch_virtio_testpmd_with_multi_path(mode, case_info,
> extern_params)
> +
> +        case_info = 'split ring mergeable inorder path'
> +        mode = "mrg_rxbuf=1,in_order=1"
> +        self.relanuch_virtio_testpmd_with_multi_path(mode, case_info,
> extern_params)
> +
> +        case_info = 'split ring mergeable path'
> +        mode = "mrg_rxbuf=1,in_order=0"
> +        self.relanuch_virtio_testpmd_with_multi_path(mode, case_info,
> extern_params)
> +
> +        self.close_all_testpmd()
> +
> +    def
> test_server_mode_reconnect_with_packed_and_split_mergeable_path_cbd
> ma_payload_check(self):
> +        """
> +        Test Case 14: loopback packed ring and split ring mergeable path
> cbdma test payload check with server mode and multi-queues
> +        """
> +        self.cbdma_nic_dev_num = 8
> +        self.get_cbdma_ports_info_and_bind_to_dpdk()
> +        self.queue_number = 8
> +        self.vdev = f"--vdev 'eth_vhost0,iface=vhost-net,queues={self.queue_number},client=1,dmas=[txq0@{self.cbdma_dev_infos[0]};txq1@{self.cbdma_dev_infos[1]};txq2@{self.cbdma_dev_infos[2]};txq3@{self.cbdma_dev_infos[3]};txq4@{self.cbdma_dev_infos[4]};txq5@{self.cbdma_dev_infos[5]};txq6@{self.cbdma_dev_infos[6]};txq7@{self.cbdma_dev_infos[7]}]' "
> +
> +        self.nb_cores = 1
> +        extern_params = '--txd=1024 --rxd=1024'
> +        case_info = 'packed ring mergeable inorder path'
> +        mode = "mrg_rxbuf=1,in_order=1,packed_vq=1"
> +
> +        self.lanuch_vhost_testpmd_with_cbdma(extern_params=extern_params)
> +        self.logger.info(case_info)
> +        self.lanuch_virtio_user_testpmd_with_multi_queue(mode=mode, extern_params=extern_params, set_fwd_mac=False)
> +        self.virtio_user_pmd.execute_cmd("set fwd csum")
> +        self.virtio_user_pmd.execute_cmd("start")
> +        # 3. Attach the pdump secondary process to the primary process using the same file-prefix
> +        self.vuser0_port = 'net_virtio_user0'
> +        self.launch_pdump_to_capture_pkt(self.vuser0_port)
> +        self.start_to_send_8k_packets_csum_cbdma(self.vhost)
> +
> +        # 5. Check that every packet in the pcap file is 6192 bytes long
> +        self.pkt_len = 6192
> +        self.check_packet_payload_valid(self.pkt_len)
> +        # reconnect from vhost
> +        self.relanuch_vhost_testpmd_send_8k_packets(extern_params, cbdma=True)
> +
> +        # reconnect from virtio
> +        self.logger.info('now reconnect from virtio_user with another path')
> +        case_info = 'packed ring mergeable path'
> +        mode = "mrg_rxbuf=1,in_order=0,packed_vq=1"
> +        self.relanuch_virtio_testpmd_with_multi_path(mode, case_info, extern_params, cbdma=True)
> +
> +        case_info = 'split ring mergeable inorder path'
> +        mode = "mrg_rxbuf=1,in_order=1"
> +        self.relanuch_virtio_testpmd_with_multi_path(mode, case_info, extern_params, cbdma=True)
> +
> +        case_info = 'split ring mergeable path'
> +        mode = "mrg_rxbuf=1,in_order=0"
> +        self.relanuch_virtio_testpmd_with_multi_path(mode, case_info, extern_params, cbdma=True)
> +
> +        self.close_all_testpmd()
> +
> +    def lanuch_vhost_testpmd_with_cbdma(self, extern_params=""):
> +        """
> +        start testpmd with cbdma
> +        """
> +        param = "--rxq={} --txq={} --nb-cores={} {}".format(self.queue_number, self.queue_number, self.nb_cores, extern_params)
> +        self.vhost_pmd.start_testpmd(self.core_list_host, param=param, no_pci=False, ports=[], eal_param=self.vdev, prefix='vhost', fixed_prefix=True)
> +
> +    def get_cbdma_ports_info_and_bind_to_dpdk(self):
> +        """
> +        get all cbdma ports
> +        """
> +        out = self.dut.send_expect('./usertools/dpdk-devbind.py --status-dev dma', '# ', 30)
> +        device_info = out.split('\n')
> +        for device in device_info:
> +            pci_info = re.search(r'\s*(0000:\S*:\d*\.\d*)', device)
> +            if pci_info is not None:
> +                dev_info = pci_info.group(1)
> +                # the NUMA id of the ioat dev; only add devices on the same socket as the NIC
> +                bus = int(dev_info[5:7], base=16)
> +                if bus >= 128:
> +                    cur_socket = 1
> +                else:
> +                    cur_socket = 0
> +                if self.ports_socket == cur_socket:
> +                    self.cbdma_dev_infos.append(pci_info.group(1))
> +        self.verify(len(self.cbdma_dev_infos) >= 8, 'There are not enough cbdma devices to run this suite')
> +        self.device_str = ' '.join(self.cbdma_dev_infos[0:self.cbdma_nic_dev_num])
> +        self.dut.send_expect('./usertools/dpdk-devbind.py --force --bind=%s %s' % (self.drivername, self.device_str), '# ', 60)
> +
> +    def bind_cbdma_device_to_kernel(self):
> +        if self.device_str is not None:
> +            self.dut.send_expect('modprobe ioatdma', '# ')
> +            self.dut.send_expect('./usertools/dpdk-devbind.py -u %s' % self.device_str, '# ', 30)
> +            self.dut.send_expect('./usertools/dpdk-devbind.py --force --bind=ioatdma %s' % self.device_str, '# ', 60)
> +
>      def tear_down(self):
>          """
>          Run after each test case.
>          """
>          self.dut.send_expect("killall -s INT %s" % self.testpmd_name, "#")
>          self.close_all_session()
> +        self.bind_cbdma_device_to_kernel()
>          time.sleep(2)
> 
>      def tear_down_all(self):
> --
> 2.33.1
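For reference, the bus-number-to-NUMA-socket heuristic that get_cbdma_ports_info_and_bind_to_dpdk() applies to the devbind output can be sketched standalone as below. The helper name and the sample devbind output are made up for illustration; they are not part of the patch, and real `dpdk-devbind.py --status-dev dma` output will differ per host:

```python
# Sketch of the PCI parsing and socket-matching logic in the patch.
import re

def select_cbdma_devices(devbind_output, nic_socket, max_devices=8):
    """Return up to max_devices CBDMA PCI addresses on nic_socket.

    On the dual-socket servers this suite targets, PCI bus numbers
    0x00-0x7f typically belong to socket 0 and 0x80-0xff to socket 1,
    which is the heuristic the suite relies on.
    """
    selected = []
    for line in devbind_output.splitlines():
        match = re.search(r'\s*(0000:\S*:\d*\.\d*)', line)
        if match is None:
            continue
        dev = match.group(1)
        # dev looks like 0000:80:04.0 -> chars 5:7 are the hex bus number
        bus = int(dev[5:7], base=16)
        cur_socket = 1 if bus >= 128 else 0
        if cur_socket == nic_socket:
            selected.append(dev)
    return selected[:max_devices]

# Illustrative output; not captured from a real host.
sample = """\
0000:00:04.0 'Sky Lake-E CBDMA Registers' drv=vfio-pci unused=ioatdma
0000:80:04.0 'Sky Lake-E CBDMA Registers' drv=vfio-pci unused=ioatdma
0000:80:04.1 'Sky Lake-E CBDMA Registers' drv=vfio-pci unused=ioatdma
"""
print(select_cbdma_devices(sample, nic_socket=1))
# -> ['0000:80:04.0', '0000:80:04.1']
```

The same check explains why the suite requires at least 8 CBDMA devices on the NIC's socket before the cbdma cases can run.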


^ permalink raw reply	[flat|nested] 10+ messages in thread

* [dts] [PATCH V1 0/2] loopback_virtio_user_server_mode: add two new cases
@ 2021-11-05 14:58 Lingli Chen
  0 siblings, 0 replies; 10+ messages in thread
From: Lingli Chen @ 2021-11-05 14:58 UTC (permalink / raw)
  To: dts; +Cc: yinan.wang, Lingli Chen

loopback_virtio_user_server_mode test_plans and test suite add two new cases

Lingli Chen (2):
  test_plans/loopback_virtio_user_server_mode: add two new cases
  tests/loopback_virtio_user_server_mode: add two new cases

 ...back_virtio_user_server_mode_test_plan.rst | 242 +++++++++---
 ...tSuite_loopback_virtio_user_server_mode.py | 353 +++++++++++++++---
 2 files changed, 482 insertions(+), 113 deletions(-)

-- 
2.33.1


^ permalink raw reply	[flat|nested] 10+ messages in thread

end of thread, other threads:[~2021-11-22  8:32 UTC | newest]

Thread overview: 10+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2021-11-05 15:07 [dts] [PATCH V1 0/2] loopback_virtio_user_server_mode: add two new cases Lingli Chen
2021-11-05 15:07 ` [dts] [PATCH V1 1/2] test_plans/loopback_virtio_user_server_mode: " Lingli Chen
2021-11-05 14:38   ` Tu, Lijuan
2021-11-08  6:31     ` Chen, LingliX
2021-11-09  9:14       ` Tu, Lijuan
2021-11-05 15:07 ` [dts] [PATCH V1 2/2] tests/loopback_virtio_user_server_mode: " Lingli Chen
2021-11-05  7:17   ` Chen, LingliX
2021-11-05  9:10     ` Wang, Yinan
2021-11-22  8:32   ` [dts][PATCH " Wang, Yinan
  -- strict thread matches above, loose matches on Subject: below --
2021-11-05 14:58 [dts] [PATCH V1 0/2] loopback_virtio_user_server_mode: " Lingli Chen
