* RE: [dts][PATCH V1] test_plans and TestSuite remove copy threshold for async path
From: Chen, LingliX @ 2021-11-23  9:32 UTC
  To: dts; +Cc: Wang, Yinan


> -----Original Message-----
> From: Chen, LingliX <linglix.chen@intel.com>
> Sent: Wednesday, November 24, 2021 1:30 AM
> To: dts@dpdk.org
> Cc: Chen, LingliX <linglix.chen@intel.com>
> Subject: [dts][PATCH V1] test_plans and TestSuite remove copy threshold for
> async path
> 
> According to DPDK commit abeb86525577 ("vhost: remove copy threshold for
> async path"), remove the dmathr parameter.
> 
> Signed-off-by: Lingli Chen <linglix.chen@intel.com>
> ---
>  test_plans/dpdk_gro_lib_test_plan.rst         |  2 +-
>  test_plans/vhost_cbdma_test_plan.rst          | 20 +++++-----
>  .../vhost_event_idx_interrupt_test_plan.rst   | 10 ++---
>  .../vhost_virtio_pmd_interrupt_test_plan.rst  |  6 +--
>  .../virtio_event_idx_interrupt_test_plan.rst  |  4 +-
>  .../vm2vm_virtio_net_perf_test_plan.rst       | 26 ++++++-------
>  test_plans/vm2vm_virtio_user_test_plan.rst    | 10 ++---
>  tests/TestSuite_dpdk_gro_lib.py               |  2 +-
>  tests/TestSuite_vhost_cbdma.py                | 38 ++++++++-----------
>  tests/TestSuite_virtio_event_idx_interrupt.py |  2 +-
>  tests/TestSuite_vm2vm_virtio_net_perf.py      |  4 +-
>  tests/TestSuite_vm2vm_virtio_pmd.py           |  4 +-
>  tests/TestSuite_vm2vm_virtio_user.py          | 16 ++++----
>  13 files changed, 68 insertions(+), 76 deletions(-)
> 
Tested-by: Lingli Chen <linglix.chen@intel.com>


* RE: [dts][PATCH V1] test_plans and TestSuite remove copy threshold for async path
From: Tu, Lijuan @ 2021-11-23 13:55 UTC
  To: Chen, LingliX, dts; +Cc: Wang, Yinan


> > -----Original Message-----
> > From: Chen, LingliX <linglix.chen@intel.com>
> > Sent: Wednesday, November 24, 2021 1:30 AM
> > To: dts@dpdk.org
> > Cc: Chen, LingliX <linglix.chen@intel.com>
> > Subject: [dts][PATCH V1] test_plans and TestSuite remove copy
> > threshold for async path
> >
> > According to DPDK commit abeb86525577 ("vhost: remove copy threshold
> > for async path"), remove the dmathr parameter.
> >
> > Signed-off-by: Lingli Chen <linglix.chen@intel.com>
> 
> Tested-by: Lingli Chen <linglix.chen@intel.com>

Applied, thanks


* [dts][PATCH V1] test_plans and TestSuite remove copy threshold for async path
From: Lingli Chen @ 2021-11-23 17:30 UTC
  To: dts; +Cc: Lingli Chen

According to DPDK commit abeb86525577 ("vhost: remove copy threshold for async path"), remove the dmathr parameter.
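
A minimal before/after sketch of the affected vdev argument, reusing the example
from vhost_cbdma_test_plan.rst (the DMA device address 80:04.0 is a placeholder):

    # Before: dmathr set the packet-length threshold for DMA copy offload
    ./dpdk-testpmd -c f -n 4 \
      --vdev 'net_vhost0,iface=/tmp/s0,queues=1,dmas=[txq0@80:04.0],dmathr=1024'

    # After: only the per-queue DMA device assignment remains
    ./dpdk-testpmd -c f -n 4 \
      --vdev 'net_vhost0,iface=/tmp/s0,queues=1,dmas=[txq0@80:04.0]'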

Signed-off-by: Lingli Chen <linglix.chen@intel.com>
---
 test_plans/dpdk_gro_lib_test_plan.rst         |  2 +-
 test_plans/vhost_cbdma_test_plan.rst          | 20 +++++-----
 .../vhost_event_idx_interrupt_test_plan.rst   | 10 ++---
 .../vhost_virtio_pmd_interrupt_test_plan.rst  |  6 +--
 .../virtio_event_idx_interrupt_test_plan.rst  |  4 +-
 .../vm2vm_virtio_net_perf_test_plan.rst       | 26 ++++++-------
 test_plans/vm2vm_virtio_user_test_plan.rst    | 10 ++---
 tests/TestSuite_dpdk_gro_lib.py               |  2 +-
 tests/TestSuite_vhost_cbdma.py                | 38 ++++++++-----------
 tests/TestSuite_virtio_event_idx_interrupt.py |  2 +-
 tests/TestSuite_vm2vm_virtio_net_perf.py      |  4 +-
 tests/TestSuite_vm2vm_virtio_pmd.py           |  4 +-
 tests/TestSuite_vm2vm_virtio_user.py          | 16 ++++----
 13 files changed, 68 insertions(+), 76 deletions(-)

diff --git a/test_plans/dpdk_gro_lib_test_plan.rst b/test_plans/dpdk_gro_lib_test_plan.rst
index bdbcdf62..ef16d997 100644
--- a/test_plans/dpdk_gro_lib_test_plan.rst
+++ b/test_plans/dpdk_gro_lib_test_plan.rst
@@ -425,7 +425,7 @@ NIC2(In kernel) -> NIC1(DPDK) -> testpmd(csum fwd) -> Vhost -> Virtio-net
 
     ./dpdk-devbind.py -b igb_uio xx:xx.x
     ./x86_64-native-linuxapp-gcc/app/testpmd -l 29-31 -n 4 \
-    --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,queues=2,dmas=[txq0@80:04.0;txq1@80:04.1],dmathr=1024' -- -i --txd=1024 --rxd=1024 --txq=2 --rxq=2 --nb-cores=2
+    --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,queues=2,dmas=[txq0@80:04.0;txq1@80:04.1]' -- -i --txd=1024 --rxd=1024 --txq=2 --rxq=2 --nb-cores=2
     set fwd csum
     stop
     port stop 0
diff --git a/test_plans/vhost_cbdma_test_plan.rst b/test_plans/vhost_cbdma_test_plan.rst
index 325b5d87..7fe74f12 100644
--- a/test_plans/vhost_cbdma_test_plan.rst
+++ b/test_plans/vhost_cbdma_test_plan.rst
@@ -57,12 +57,10 @@ operations of queues:
 
  - dmas: This parameter is used to specify the assigned DMA device of
    a queue.
- - dmathr: If packets length >= dmathr, leverage I/OAT device to perform memory copy;
-   otherwise, leverage librte_vhost to perform memory copy.
 
 Here is an example:
  $ ./dpdk-testpmd -c f -n 4 \
-   --vdev 'net_vhost0,iface=/tmp/s0,queues=1,dmas=[txq0@80:04.0],dmathr=1024'
+   --vdev 'net_vhost0,iface=/tmp/s0,queues=1,dmas=[txq0@80:04.0]'
 
 Test Case 1: PVP Split all path with DMA-accelerated vhost enqueue
 ==================================================================
@@ -73,7 +71,7 @@ TG --> NIC --> Vhost --> Virtio--> Vhost --> NIC --> TG
 
 1. Bind one cbdma port and one nic port to igb_uio, then launch vhost by below command::
 
-    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-3 --file-prefix=vhost --vdev 'net_vhost0,iface=/tmp/s0,queues=1,dmas=[txq0@80:04.0],dmathr=1024' \
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-3 --file-prefix=vhost --vdev 'net_vhost0,iface=/tmp/s0,queues=1,dmas=[txq0@80:04.0]' \
     -- -i --nb-cores=1 --txd=1024 --rxd=1024
     >set fwd mac
     >start
@@ -152,7 +150,7 @@ Test Case 2: Split ring dynamic queue number test for DMA-accelerated vhost Tx o
 5. Quit vhost port and relaunch vhost with 4 queues w/ cbdma::
 
     ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 28-29  \
-    --file-prefix=vhost --vdev 'net_vhost0,iface=/tmp/s0,queues=4,client=1,dmas=[txq0@00:04.0;txq1@00:04.1;txq2@00:04.2;txq3@00:04.3],dmathr=1024' \
+    --file-prefix=vhost --vdev 'net_vhost0,iface=/tmp/s0,queues=4,client=1,dmas=[txq0@00:04.0;txq1@00:04.1;txq2@00:04.2;txq3@00:04.3]' \
     -- -i --nb-cores=1 --txd=1024 --rxd=1024 --txq=4 --rxq=4
     >set fwd mac
     >start
@@ -164,7 +162,7 @@ Test Case 2: Split ring dynamic queue number test for DMA-accelerated vhost Tx o
 8. Quit vhost port and relaunch vhost with 8 queues w/ cbdma::
 
      ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 28-29  \
-    --file-prefix=vhost --vdev 'net_vhost0,iface=/tmp/s0,queues=8,client=1,dmas=[txq0@80:04.0;txq1@80:04.1;txq2@80:04.2;txq3@80:04.3;txq4@80:04.4;txq5@80:04.5;txq6@80:04.6;txq7@80:04.7],dmathr=1024' \
+    --file-prefix=vhost --vdev 'net_vhost0,iface=/tmp/s0,queues=8,client=1,dmas=[txq0@80:04.0;txq1@80:04.1;txq2@80:04.2;txq3@80:04.3;txq4@80:04.4;txq5@80:04.5;txq6@80:04.6;txq7@80:04.7]' \
     -- -i --nb-cores=1 --txd=1024 --rxd=1024 --txq=8 --rxq=8
     >set fwd mac
     >start
@@ -182,7 +180,7 @@ TG --> NIC --> Vhost --> Virtio--> Vhost --> NIC --> TG
 
 1. Bind one cbdma port and one nic port to igb_uio, then launch vhost by below command::
 
-    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-3 --file-prefix=vhost --vdev 'net_vhost0,iface=/tmp/s0,queues=1,dmas=[txq0@80:04.0],dmathr=1024' \
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-3 --file-prefix=vhost --vdev 'net_vhost0,iface=/tmp/s0,queues=1,dmas=[txq0@80:04.0]' \
     -- -i --nb-cores=1 --txd=1024 --rxd=1024
     >set fwd mac
     >start
@@ -270,7 +268,7 @@ Test Case 4: Packed ring dynamic queue number test for DMA-accelerated vhost Tx
 5. Quit vhost port and relaunch vhost with 4 queues w/ cbdma::
 
     ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 28-29  \
-    --file-prefix=vhost --vdev 'net_vhost0,iface=/tmp/s0,queues=4,client=1,dmas=[txq0@80:04.0;txq1@80:04.1;txq2@80:04.2;txq3@80:04.3],dmathr=1024' \
+    --file-prefix=vhost --vdev 'net_vhost0,iface=/tmp/s0,queues=4,client=1,dmas=[txq0@80:04.0;txq1@80:04.1;txq2@80:04.2;txq3@80:04.3]' \
     -- -i --nb-cores=1 --txd=1024 --rxd=1024 --txq=4 --rxq=4
     >set fwd mac
     >start
@@ -282,7 +280,7 @@ Test Case 4: Packed ring dynamic queue number test for DMA-accelerated vhost Tx
 8. Quit vhost port and relaunch vhost with 8 queues w/ cbdma::
 
      ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 28-29  \
-    --file-prefix=vhost --vdev 'net_vhost0,iface=/tmp/s0,queues=8,client=1,dmas=[txq0@80:04.0;txq1@80:04.1;txq2@80:04.2;txq3@80:04.3;txq4@80:04.4;txq5@80:04.5;txq6@80:04.6;txq7@80:04.7],dmathr=1024' \
+    --file-prefix=vhost --vdev 'net_vhost0,iface=/tmp/s0,queues=8,client=1,dmas=[txq0@80:04.0;txq1@80:04.1;txq2@80:04.2;txq3@80:04.3;txq4@80:04.4;txq5@80:04.5;txq6@80:04.6;txq7@80:04.7]' \
     -- -i --nb-cores=1 --txd=1024 --rxd=1024 --txq=8 --rxq=8
     >set fwd mac
     >start
@@ -296,7 +294,7 @@ Test Case 5: Compare PVP split ring performance between CPU copy, CBDMA copy and
 
 1. Bind one cbdma port and one nic port which on same numa to igb_uio, then launch vhost by below command::
 
-    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-3 --file-prefix=vhost --vdev 'net_vhost0,iface=/tmp/s0,queues=1,client=1,dmas=[txq0@00:01.0],dmathr=1024' \
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-3 --file-prefix=vhost --vdev 'net_vhost0,iface=/tmp/s0,queues=1,client=1,dmas=[txq0@00:01.0]' \
     -- -i --nb-cores=1 --txd=1024 --rxd=1024
     >set fwd mac
     >start
@@ -315,7 +313,7 @@ Test Case 5: Compare PVP split ring performance between CPU copy, CBDMA copy and
 
 4.Quit vhost side, relaunch with below cmd::
 
- ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-3 --file-prefix=vhost --vdev 'net_vhost0,iface=/tmp/s0,queues=1,client=1,dmas=[txq0@00:01.0],dmathr=2000' \
+ ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-3 --file-prefix=vhost --vdev 'net_vhost0,iface=/tmp/s0,queues=1,client=1,dmas=[txq0@00:01.0]' \
     -- -i --nb-cores=1 --txd=1024 --rxd=1024
     >set fwd mac
     >start
diff --git a/test_plans/vhost_event_idx_interrupt_test_plan.rst b/test_plans/vhost_event_idx_interrupt_test_plan.rst
index fe38292b..0cf4834f 100644
--- a/test_plans/vhost_event_idx_interrupt_test_plan.rst
+++ b/test_plans/vhost_event_idx_interrupt_test_plan.rst
@@ -402,7 +402,7 @@ Test Case 7: wake up split ring vhost-user cores with event idx interrupt mode a
 1. Bind 16 cbdma ports to igb_uio driver, then launch l3fwd-power example app with client mode::
 
     ./l3fwd-power -l 1-16 -n 4 --log-level=9 \
-    --vdev 'eth_vhost0,iface=/vhost-net0,queues=16,client=1,dmas=[txq0@80:04.0;txq1@80:04.1;txq2@80:04.2;txq3@80:04.3;txq4@80:04.4;txq5@80:04.5;txq6@80:04.6;txq7@80:04.7;txq8@00:04.0;txq9@00:04.1;txq10@00:04.2;txq11@00:04.3;txq12@00:04.4;txq13@00:04.5;txq14@00:04.6;txq15@00:04.7],dmathr=1024' \
+    --vdev 'eth_vhost0,iface=/vhost-net0,queues=16,client=1,dmas=[txq0@80:04.0;txq1@80:04.1;txq2@80:04.2;txq3@80:04.3;txq4@80:04.4;txq5@80:04.5;txq6@80:04.6;txq7@80:04.7;txq8@00:04.0;txq9@00:04.1;txq10@00:04.2;txq11@00:04.3;txq12@00:04.4;txq13@00:04.5;txq14@00:04.6;txq15@00:04.7]' \
     -- -p 0x1 --parse-ptype 1 \
     --config "(0,0,1),(0,1,2),(0,2,3),(0,3,4),(0,4,5),(0,5,6),(0,6,7),(0,7,8),(0,8,9),(0,9,10),(0,10,11),(0,11,12),(0,12,13),(0,13,14),(0,14,15),(0,15,16)"
 
@@ -422,7 +422,7 @@ Test Case 7: wake up split ring vhost-user cores with event idx interrupt mode a
 3. Relauch l3fwd-power sample for port up::
 
     ./l3fwd-power -l 1-16 -n 4 --log-level=9 \
-    --vdev 'eth_vhost0,iface=/vhost-net0,queues=16,client=1,dmas=[txq0@80:04.0;txq1@80:04.1;txq2@80:04.2;txq3@80:04.3;txq4@80:04.4;txq5@80:04.5;txq6@80:04.6;txq7@80:04.7;txq8@00:04.0;txq9@00:04.1;txq10@00:04.2;txq11@00:04.3;txq12@00:04.4;txq13@00:04.5;txq14@00:04.6;txq15@00:04.7],dmathr=1024' \
+    --vdev 'eth_vhost0,iface=/vhost-net0,queues=16,client=1,dmas=[txq0@80:04.0;txq1@80:04.1;txq2@80:04.2;txq3@80:04.3;txq4@80:04.4;txq5@80:04.5;txq6@80:04.6;txq7@80:04.7;txq8@00:04.0;txq9@00:04.1;txq10@00:04.2;txq11@00:04.3;txq12@00:04.4;txq13@00:04.5;txq14@00:04.6;txq15@00:04.7]' \
     -- -p 0x1 --parse-ptype 1 \
     --config "(0,0,1),(0,1,2),(0,2,3),(0,3,4),(0,4,5),(0,5,6),(0,6,7),(0,7,8),(0,8,9),(0,9,10),(0,10,11),(0,11,12),(0,12,13),(0,13,14),(0,14,15),(0,15,16)"
 
@@ -518,7 +518,7 @@ Test Case 9: wake up packed ring vhost-user cores with event idx interrupt mode
 1. Bind 16 cbdma ports to igb_uio driver, then launch l3fwd-power example app with client mode::
 
     ./l3fwd-power -l 1-16 -n 4 --log-level=9 \
-    --vdev 'eth_vhost0,iface=/vhost-net0,queues=16,client=1,dmas=[txq0@80:04.0;txq1@80:04.1;txq2@80:04.2;txq3@80:04.3;txq4@80:04.4;txq5@80:04.5;txq6@80:04.6;txq7@80:04.7;txq8@00:04.0;txq9@00:04.1;txq10@00:04.2;txq11@00:04.3;txq12@00:04.4;txq13@00:04.5;txq14@00:04.6;txq15@00:04.7],dmathr=1024' \
+    --vdev 'eth_vhost0,iface=/vhost-net0,queues=16,client=1,dmas=[txq0@80:04.0;txq1@80:04.1;txq2@80:04.2;txq3@80:04.3;txq4@80:04.4;txq5@80:04.5;txq6@80:04.6;txq7@80:04.7;txq8@00:04.0;txq9@00:04.1;txq10@00:04.2;txq11@00:04.3;txq12@00:04.4;txq13@00:04.5;txq14@00:04.6;txq15@00:04.7]' \
     -- -p 0x1 --parse-ptype 1 \
     --config "(0,0,1),(0,1,2),(0,2,3),(0,3,4),(0,4,5),(0,5,6),(0,6,7),(0,7,8),(0,8,9),(0,9,10),(0,10,11),(0,11,12),(0,12,13),(0,13,14),(0,14,15),(0,15,16)"
 
@@ -538,7 +538,7 @@ Test Case 9: wake up packed ring vhost-user cores with event idx interrupt mode
 3. Relauch l3fwd-power sample for port up::
 
     ./l3fwd-power -l 1-16 -n 4 --log-level=9 \
-    --vdev 'eth_vhost0,iface=/vhost-net0,queues=16,client=1,dmas=[txq0@80:04.0;txq1@80:04.1;txq2@80:04.2;txq3@80:04.3;txq4@80:04.4;txq5@80:04.5;txq6@80:04.6;txq7@80:04.7;txq8@00:04.0;txq9@00:04.1;txq10@00:04.2;txq11@00:04.3;txq12@00:04.4;txq13@00:04.5;txq14@00:04.6;txq15@00:04.7],dmathr=1024' \
+    --vdev 'eth_vhost0,iface=/vhost-net0,queues=16,client=1,dmas=[txq0@80:04.0;txq1@80:04.1;txq2@80:04.2;txq3@80:04.3;txq4@80:04.4;txq5@80:04.5;txq6@80:04.6;txq7@80:04.7;txq8@00:04.0;txq9@00:04.1;txq10@00:04.2;txq11@00:04.3;txq12@00:04.4;txq13@00:04.5;txq14@00:04.6;txq15@00:04.7]' \
     -- -p 0x1 --parse-ptype 1 \
     --config "(0,0,1),(0,1,2),(0,2,3),(0,3,4),(0,4,5),(0,5,6),(0,6,7),(0,7,8),(0,8,9),(0,9,10),(0,10,11),(0,11,12),(0,12,13),(0,13,14),(0,14,15),(0,15,16)"
 
@@ -626,4 +626,4 @@ Test Case 10: wake up packed ring vhost-user cores by multi virtio-net in VMs wi
     ping 1.1.1.5
     #send packets to vhost
 
-6. Check vhost related cores are waked up with l3fwd-power log.
\ No newline at end of file
+6. Check vhost related cores are waked up with l3fwd-power log.
diff --git a/test_plans/vhost_virtio_pmd_interrupt_test_plan.rst b/test_plans/vhost_virtio_pmd_interrupt_test_plan.rst
index 5c4fbf94..77c1946c 100644
--- a/test_plans/vhost_virtio_pmd_interrupt_test_plan.rst
+++ b/test_plans/vhost_virtio_pmd_interrupt_test_plan.rst
@@ -198,7 +198,7 @@ Test Case 5: Basic virtio interrupt test with 16 queues and cbdma enabled
 
 1. Bind 16 cbdma channels and one NIC port to igb_uio, then launch testpmd by below command::
 
-    ./testpmd -c 0x1ffff -n 4 --vdev 'eth_vhost0,iface=vhost-net,queues=16,dmas=[txq0@80:04.0;txq1@80:04.1;txq2@80:04.2;txq3@80:04.3;txq4@80:04.4;txq5@80:04.5;txq6@80:04.6;txq7@80:04.7;txq8@00:04.0;txq9@00:04.1;txq10@00:04.2;txq11@00:04.3;txq12@00:04.4;txq13@00:04.5;txq14@00:04.6;txq15@00:04.7],dmathr=1024' -- -i --nb-cores=16 --rxq=16 --txq=16 --rss-ip
+    ./testpmd -c 0x1ffff -n 4 --vdev 'eth_vhost0,iface=vhost-net,queues=16,dmas=[txq0@80:04.0;txq1@80:04.1;txq2@80:04.2;txq3@80:04.3;txq4@80:04.4;txq5@80:04.5;txq6@80:04.6;txq7@80:04.7;txq8@00:04.0;txq9@00:04.1;txq10@00:04.2;txq11@00:04.3;txq12@00:04.4;txq13@00:04.5;txq14@00:04.6;txq15@00:04.7]' -- -i --nb-cores=16 --rxq=16 --txq=16 --rss-ip
 
 2. Launch VM1, set queues=16, vectors>=2xqueues+2, mq=on::
 
@@ -268,7 +268,7 @@ Test Case 7: Packed ring virtio interrupt test with 16 queues and cbdma enabled
 
 1. Bind 16 cbdma channels ports and one NIC port to igb_uio, then launch testpmd by below command::
 
-    ./testpmd -c 0x1ffff -n 4 --vdev 'eth_vhost0,iface=vhost-net,queues=16,dmas=[txq0@80:04.0;txq1@80:04.1;txq2@80:04.2;txq3@80:04.3;txq4@80:04.4;txq5@80:04.5;txq6@80:04.6;txq7@80:04.7;txq8@00:04.0;txq9@00:04.1;txq10@00:04.2;txq11@00:04.3;txq12@00:04.4;txq13@00:04.5;txq14@00:04.6;txq15@00:04.7],dmathr=1024' -- -i --nb-cores=16 --rxq=16 --txq=16 --rss-ip
+    ./testpmd -c 0x1ffff -n 4 --vdev 'eth_vhost0,iface=vhost-net,queues=16,dmas=[txq0@80:04.0;txq1@80:04.1;txq2@80:04.2;txq3@80:04.3;txq4@80:04.4;txq5@80:04.5;txq6@80:04.6;txq7@80:04.7;txq8@00:04.0;txq9@00:04.1;txq10@00:04.2;txq11@00:04.3;txq12@00:04.4;txq13@00:04.5;txq14@00:04.6;txq15@00:04.7]' -- -i --nb-cores=16 --rxq=16 --txq=16 --rss-ip
 
 2. Launch VM1, set queues=16, vectors>=2xqueues+2, mq=on::
 
@@ -296,4 +296,4 @@ Test Case 7: Packed ring virtio interrupt test with 16 queues and cbdma enabled
 
 6. Change dest IP address to fixed ip, packets will distribute to 1 queue, check l3fwd-power log that only one related core is waked up.
 
-7. Stop the date transmitter, check all related core will be back to sleep status.
\ No newline at end of file
+7. Stop the date transmitter, check all related core will be back to sleep status.
diff --git a/test_plans/virtio_event_idx_interrupt_test_plan.rst b/test_plans/virtio_event_idx_interrupt_test_plan.rst
index b93c4759..064aa10e 100644
--- a/test_plans/virtio_event_idx_interrupt_test_plan.rst
+++ b/test_plans/virtio_event_idx_interrupt_test_plan.rst
@@ -302,7 +302,7 @@ Test Case 8: Wake up split ring virtio-net cores with event idx interrupt mode a
 1. Bind one nic port and 16 cbdma channels to igb_uio, then launch the vhost sample by below commands::
 
     rm -rf vhost-net*
-    ./testpmd -l 1-17 -n 4 --file-prefix=vhost --vdev 'net_vhost,iface=vhost-net,queues=16,client=1,dmas=[txq0@80:04.0;txq1@80:04.1;txq2@80:04.2;txq3@80:04.3;txq4@80:04.4;txq5@80:04.5;txq6@80:04.6;txq7@80:04.7;txq8@00:04.0;txq9@00:04.1;txq10@00:04.2;txq11@00:04.3;txq12@00:04.4;txq13@00:04.5;txq14@00:04.6;txq15@00:04.7],dmathr=64' -- -i --nb-cores=16 --txd=1024 --rxd=1024 --rxq=16 --txq=16
+    ./testpmd -l 1-17 -n 4 --file-prefix=vhost --vdev 'net_vhost,iface=vhost-net,queues=16,client=1,dmas=[txq0@80:04.0;txq1@80:04.1;txq2@80:04.2;txq3@80:04.3;txq4@80:04.4;txq5@80:04.5;txq6@80:04.6;txq7@80:04.7;txq8@00:04.0;txq9@00:04.1;txq10@00:04.2;txq11@00:04.3;txq12@00:04.4;txq13@00:04.5;txq14@00:04.6;txq15@00:04.7]' -- -i --nb-cores=16 --txd=1024 --rxd=1024 --rxq=16 --txq=16
     testpmd>start
 
 2. Launch VM::
@@ -375,7 +375,7 @@ Test Case 10: Wake up packed ring virtio-net cores with event idx interrupt mode
 1. Bind one nic port and 16 cbdma channels to igb_uio, then launch the vhost sample by below commands::
 
     rm -rf vhost-net*
-    ./testpmd -l 1-17 -n 4 --file-prefix=vhost --vdev 'net_vhost,iface=vhost-net,queues=16,client=1,dmas=[txq0@80:04.0;txq1@80:04.1;txq2@80:04.2;txq3@80:04.3;txq4@80:04.4;txq5@80:04.5;txq6@80:04.6;txq7@80:04.7;txq8@00:04.0;txq9@00:04.1;txq10@00:04.2;txq11@00:04.3;txq12@00:04.4;txq13@00:04.5;txq14@00:04.6;txq15@00:04.7],dmathr=64' -- -i --nb-cores=16 --txd=1024 --rxd=1024 --rxq=16 --txq=16
+    ./testpmd -l 1-17 -n 4 --file-prefix=vhost --vdev 'net_vhost,iface=vhost-net,queues=16,client=1,dmas=[txq0@80:04.0;txq1@80:04.1;txq2@80:04.2;txq3@80:04.3;txq4@80:04.4;txq5@80:04.5;txq6@80:04.6;txq7@80:04.7;txq8@00:04.0;txq9@00:04.1;txq10@00:04.2;txq11@00:04.3;txq12@00:04.4;txq13@00:04.5;txq14@00:04.6;txq15@00:04.7]' -- -i --nb-cores=16 --txd=1024 --rxd=1024 --rxq=16 --txq=16
     testpmd>start
 
 2. Launch VM::
diff --git a/test_plans/vm2vm_virtio_net_perf_test_plan.rst b/test_plans/vm2vm_virtio_net_perf_test_plan.rst
index 3fb12f41..d4c49851 100644
--- a/test_plans/vm2vm_virtio_net_perf_test_plan.rst
+++ b/test_plans/vm2vm_virtio_net_perf_test_plan.rst
@@ -113,8 +113,8 @@ Test Case 2: VM2VM split ring vhost-user/virtio-net CBDMA enable test with tcp t
 1. Launch the Vhost sample by below commands::
 
     rm -rf vhost-net*
-    ./dpdk-testpmd -l 2-4 -n 4 --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,queues=1,dmas=[txq0@00:04.0],dmathr=512' \
-    --vdev 'net_vhost1,iface=vhost-net1,queues=1,dmas=[txq0@00:04.1],dmathr=512'  -- -i --nb-cores=2 --txd=1024 --rxd=1024
+    ./dpdk-testpmd -l 2-4 -n 4 --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,queues=1,dmas=[txq0@00:04.0]' \
+    --vdev 'net_vhost1,iface=vhost-net1,queues=1,dmas=[txq0@00:04.1]'  -- -i --nb-cores=2 --txd=1024 --rxd=1024
     testpmd>start
 
 2. Launch VM1 and VM2 on socket 1::
@@ -273,8 +273,8 @@ Test Case 5: VM2VM virtio-net split ring mergeable 8 queues CBDMA enable test wi
 1. Launch the Vhost sample by below commands::
 
     rm -rf vhost-net*
-    ./dpdk-testpmd -l 1-5 -n 4 --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,client=1,queues=8,dmas=[txq0@00:04.0;txq1@00:04.1;txq2@00:04.2;txq3@00:04.3;txq4@00:04.4;txq5@00:04.5;txq6@00:04.6;txq7@00:04.7],dmathr=512' \
-    --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=8,dmas=[txq0@80:04.0;txq1@80:04.1;txq2@80:04.2;txq3@80:04.3;txq4@80:04.4;txq5@80:04.5;txq6@80:04.6;txq7@80:04.7],dmathr=512'  -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=8 --txq=8
+    ./dpdk-testpmd -l 1-5 -n 4 --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,client=1,queues=8,dmas=[txq0@00:04.0;txq1@00:04.1;txq2@00:04.2;txq3@00:04.3;txq4@00:04.4;txq5@00:04.5;txq6@00:04.6;txq7@00:04.7]' \
+    --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=8,dmas=[txq0@80:04.0;txq1@80:04.1;txq2@80:04.2;txq3@80:04.3;txq4@80:04.4;txq5@80:04.5;txq6@80:04.6;txq7@80:04.7]'  -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=8 --txq=8
     testpmd>start
 
 2. Launch VM1 and VM2 using qemu 5.2.0::
@@ -366,8 +366,8 @@ Test Case 6: VM2VM virtio-net split ring non-mergeable 8 queues CBDMA enable tes
 1. Launch the Vhost sample by below commands::
 
     rm -rf vhost-net*
-    ./dpdk-testpmd -l 1-5 -n 4 --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,client=1,queues=8,dmas=[txq0@00:04.0;txq1@00:04.1;txq2@00:04.2;txq3@00:04.3;txq4@00:04.4;txq5@00:04.5;txq6@00:04.6;txq7@00:04.7],dmathr=512' \
-    --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=8,dmas=[txq0@80:04.0;txq1@80:04.1;txq2@80:04.2;txq3@80:04.3;txq4@80:04.4;txq5@80:04.5;txq6@80:04.6;txq7@80:04.7],dmathr=512'  -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=8 --txq=8
+    ./dpdk-testpmd -l 1-5 -n 4 --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,client=1,queues=8,dmas=[txq0@00:04.0;txq1@00:04.1;txq2@00:04.2;txq3@00:04.3;txq4@00:04.4;txq5@00:04.5;txq6@00:04.6;txq7@00:04.7]' \
+    --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=8,dmas=[txq0@80:04.0;txq1@80:04.1;txq2@80:04.2;txq3@80:04.3;txq4@80:04.4;txq5@80:04.5;txq6@80:04.6;txq7@80:04.7]'  -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=8 --txq=8
     testpmd>start
 
 2. Launch VM1 and VM2 using qemu 5.2.0::
@@ -514,8 +514,8 @@ Test Case 8: VM2VM packed ring vhost-user/virtio-net CBDMA enable test with tcp
 1. Launch the Vhost sample by below commands::
 
     rm -rf vhost-net*
-    ./dpdk-testpmd -l 2-4 -n 4 --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,queues=1,dmas=[txq0@00:04.0],dmathr=512' \
-    --vdev 'net_vhost1,iface=vhost-net1,queues=1,dmas=[txq0@00:04.1],dmathr=512'  -- -i --nb-cores=2 --txd=1024 --rxd=1024
+    ./dpdk-testpmd -l 2-4 -n 4 --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,queues=1,dmas=[txq0@00:04.0]' \
+    --vdev 'net_vhost1,iface=vhost-net1,queues=1,dmas=[txq0@00:04.1]'  -- -i --nb-cores=2 --txd=1024 --rxd=1024
     testpmd>start
 
 2. Launch VM1 and VM2 on socket 1 with qemu 5.2.0::
@@ -674,8 +674,8 @@ Test Case 11: VM2VM virtio-net packed ring mergeable 8 queues CBDMA enable test
 1. Launch the Vhost sample by below commands::
 
     rm -rf vhost-net*
-    ./dpdk-testpmd -l 1-5 -n 4 --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,queues=8,dmas=[txq0@00:04.0;txq1@00:04.1;txq2@00:04.2;txq3@00:04.3;txq4@00:04.4;txq5@00:04.5;txq6@00:04.6;txq7@00:04.7],dmathr=512' \
-    --vdev 'net_vhost1,iface=vhost-net1,queues=8,dmas=[txq0@80:04.0;txq1@80:04.1;txq2@80:04.2;txq3@80:04.3;txq4@80:04.4;txq5@80:04.5;txq6@80:04.6;txq7@80:04.7],dmathr=512'  -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=8 --txq=8
+    ./dpdk-testpmd -l 1-5 -n 4 --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,queues=8,dmas=[txq0@00:04.0;txq1@00:04.1;txq2@00:04.2;txq3@00:04.3;txq4@00:04.4;txq5@00:04.5;txq6@00:04.6;txq7@00:04.7]' \
+    --vdev 'net_vhost1,iface=vhost-net1,queues=8,dmas=[txq0@80:04.0;txq1@80:04.1;txq2@80:04.2;txq3@80:04.3;txq4@80:04.4;txq5@80:04.5;txq6@80:04.6;txq7@80:04.7]'  -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=8 --txq=8
     testpmd>start
 
 2. Launch VM1 and VM2 with qemu 5.2.0::
@@ -731,8 +731,8 @@ Test Case 12: VM2VM virtio-net packed ring non-mergeable 8 queues CBDMA enable t
 1. Launch the Vhost sample by below commands::
 
     rm -rf vhost-net*
-    ./dpdk-testpmd -l 1-5 -n 4 --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,queues=8,dmas=[txq0@00:04.0;txq1@00:04.1;txq2@00:04.2;txq3@00:04.3;txq4@00:04.4;txq5@00:04.5;txq6@00:04.6;txq7@00:04.7],dmathr=512' \
-    --vdev 'net_vhost1,iface=vhost-net1,queues=8,dmas=[txq0@80:04.0;txq1@80:04.1;txq2@80:04.2;txq3@80:04.3;txq4@80:04.4;txq5@80:04.5;txq6@80:04.6;txq7@80:04.7],dmathr=512'  -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=8 --txq=8
+    ./dpdk-testpmd -l 1-5 -n 4 --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,queues=8,dmas=[txq0@00:04.0;txq1@00:04.1;txq2@00:04.2;txq3@00:04.3;txq4@00:04.4;txq5@00:04.5;txq6@00:04.6;txq7@00:04.7]' \
+    --vdev 'net_vhost1,iface=vhost-net1,queues=8,dmas=[txq0@80:04.0;txq1@80:04.1;txq2@80:04.2;txq3@80:04.3;txq4@80:04.4;txq5@80:04.5;txq6@80:04.6;txq7@80:04.7]'  -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=8 --txq=8
     testpmd>start
 
 2. Launch VM1 and VM2::
@@ -780,4 +780,4 @@ Test Case 12: VM2VM virtio-net packed ring non-mergeable 8 queues CBDMA enable t
     Under VM1, run: `iperf -s -i 1`
     Under VM2, run: `iperf -c 1.1.1.2 -i 1 -t 60`
 
-7. Rerun step 5-6 five times.
\ No newline at end of file
+7. Rerun step 5-6 five times.
diff --git a/test_plans/vm2vm_virtio_user_test_plan.rst b/test_plans/vm2vm_virtio_user_test_plan.rst
index 4855b51f..9554fa40 100644
--- a/test_plans/vm2vm_virtio_user_test_plan.rst
+++ b/test_plans/vm2vm_virtio_user_test_plan.rst
@@ -797,7 +797,7 @@ Test Case 12: split virtqueue vm2vm non-mergeable path multi-queues payload chec
 1. Launch vhost by below command::
 
     ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-2 -n 4 \
-    --vdev 'eth_vhost0,iface=vhost-net,queues=2,client=1,dmas=[txq0@80:04.0;txq1@80:04.1],dmathr=64' --vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[txq0@80:04.2;txq1@80:04.3],dmathr=64' -- \
+    --vdev 'eth_vhost0,iface=vhost-net,queues=2,client=1,dmas=[txq0@80:04.0;txq1@80:04.1]' --vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[txq0@80:04.2;txq1@80:04.3]' -- \
     -i --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096 --no-flush-rx
     testpmd>vhost enable tx all
 
@@ -834,7 +834,7 @@ Test Case 13: split virtqueue vm2vm mergeable path multi-queues payload check wi
 1. Launch vhost by below command::
 
     ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-2 -n 4 \
-    --vdev 'eth_vhost0,iface=vhost-net,queues=2,client=1,dmas=[txq0@00:04.0;txq1@00:04.1],dmathr=512' --vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[txq0@00:04.2;txq1@00:04.3],dmathr=512' -- \
+    --vdev 'eth_vhost0,iface=vhost-net,queues=2,client=1,dmas=[txq0@00:04.0;txq1@00:04.1]' --vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[txq0@00:04.2;txq1@00:04.3]' -- \
     -i --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096 --no-flush-rx
     testpmd>vhost enable tx all
 
@@ -873,7 +873,7 @@ Test Case 14: packed virtqueue vm2vm non-mergeable path multi-queues payload che
 1. Launch vhost by below command::
 
     ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-2 -n 4 \
-    --vdev 'eth_vhost0,iface=vhost-net,queues=2,client=1,dmas=[txq0@80:04.0;txq1@80:04.1],dmathr=64' --vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[txq0@80:04.2;txq1@80:04.3],dmathr=64' -- \
+    --vdev 'eth_vhost0,iface=vhost-net,queues=2,client=1,dmas=[txq0@80:04.0;txq1@80:04.1]' --vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[txq0@80:04.2;txq1@80:04.3]' -- \
     -i --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096 --no-flush-rx
     testpmd>vhost enable tx all
 
@@ -910,7 +910,7 @@ Test Case 15: packed virtqueue vm2vm mergeable path multi-queues payload check w
 1. Launch vhost by below command::
 
     ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-2 -n 4 \
-    --vdev 'eth_vhost0,iface=vhost-net,queues=2,client=1,dmas=[txq0@00:04.0;txq1@00:04.1],dmathr=512' --vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[txq0@00:04.2;txq1@00:04.3],dmathr=512' -- \
+    --vdev 'eth_vhost0,iface=vhost-net,queues=2,client=1,dmas=[txq0@00:04.0;txq1@00:04.1]' --vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[txq0@00:04.2;txq1@00:04.3]' -- \
     -i --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096 --no-flush-rx
     testpmd>vhost enable tx all
 
@@ -941,4 +941,4 @@ Test Case 15: packed virtqueue vm2vm mergeable path multi-queues payload check w
     testpmd>set txpkts 64
     testpmd>start tx_first 7
 
-5. Start vhost testpmd, then quit pdump, check 502 packets received by virtio-user1 and 54 packets with 4640 length and 448 packets with 64 length in pdump-virtio-rx.pcap.
\ No newline at end of file
+5. Start vhost testpmd, then quit pdump, check 502 packets received by virtio-user1 and 54 packets with 4640 length and 448 packets with 64 length in pdump-virtio-rx.pcap.
diff --git a/tests/TestSuite_dpdk_gro_lib.py b/tests/TestSuite_dpdk_gro_lib.py
index 08ce9f8a..e11bfd32 100644
--- a/tests/TestSuite_dpdk_gro_lib.py
+++ b/tests/TestSuite_dpdk_gro_lib.py
@@ -160,7 +160,7 @@ class TestDPDKGROLib(TestCase):
         # mode 5 : tcp traffice light mode with cdbma enable
         if mode == 5:
             self.get_cbdma_ports_info_and_bind_to_dpdk(cbdma_num=2)
-            eal_param = self.dut.create_eal_parameters(cores=self.vhost_list, vdevs=["'net_vhost0,iface=%s/vhost-net,queues=%s,dmas=[%s],dmathr=1024'" % (self.base_dir, queue, self.dmas_info)])
+            eal_param = self.dut.create_eal_parameters(cores=self.vhost_list, vdevs=["'net_vhost0,iface=%s/vhost-net,queues=%s,dmas=[%s]'" % (self.base_dir, queue, self.dmas_info)])
             self.testcmd_start = self.path + eal_param + " -- -i --txd=1024 --rxd=1024 --txq=2 --rxq=2"
             self.vhost_user = self.dut.new_session(suite="user")
             self.vhost_user.send_expect(self.testcmd_start, "testpmd> ", 120)
diff --git a/tests/TestSuite_vhost_cbdma.py b/tests/TestSuite_vhost_cbdma.py
index 9330a558..5ebd734f 100644
--- a/tests/TestSuite_vhost_cbdma.py
+++ b/tests/TestSuite_vhost_cbdma.py
@@ -35,12 +35,10 @@ We introduce a new vdev parameter to enable DMA acceleration for Tx
 operations of queues:
  - dmas: This parameter is used to specify the assigned DMA device of
    a queue.
- - dmathr: If packets length >= dmathr, leverage I/OAT device to perform memory copy;
-   otherwise, leverage librte_vhost to perform memory copy.
 
 Here is an example:
  $ ./testpmd -c f -n 4 \
-   --vdev 'net_vhost0,iface=/tmp/s0,queues=1,dmas=[txq0@80:04.0],dmathr=1024'
+   --vdev 'net_vhost0,iface=/tmp/s0,queues=1,dmas=[txq0@80:04.0]'
 """
 import json
 import os
@@ -214,12 +212,11 @@ class TestVirTioVhostCbdma(TestCase):
         self.test_target = self.running_case
         self.expected_throughput = self.get_suite_cfg()['expected_throughput'][self.test_target]
         txd_rxd = 1024
-        dmathr = 1024
         eal_tx_rxd = ' --nb-cores=%d --txd=%d --rxd=%d'
         queue = 1
         used_cbdma_num = 1
         self.get_cbdma_ports_info_and_bind_to_dpdk(used_cbdma_num)
-        vhost_vdevs = f"'net_vhost0,iface=/tmp/s0,queues=%d,dmas=[txq0@{self.device_str}],dmathr=%d'"
+        vhost_vdevs = f"'net_vhost0,iface=/tmp/s0,queues=%d,dmas=[txq0@{self.device_str}]'"
         dev_path_mode_mapper = {
             "inorder_mergeable_path": 'mrg_rxbuf=1,in_order=1',
             "mergeable_path": 'mrg_rxbuf=1,in_order=0',
@@ -231,7 +228,7 @@ class TestVirTioVhostCbdma(TestCase):
         allow_pci = [self.dut.ports_info[0]['pci']]
         for index in range(used_cbdma_num):
             allow_pci.append(self.cbdma_dev_infos[index])
-        self.launch_testpmd_as_vhost_user(eal_tx_rxd % (queue, txd_rxd, txd_rxd), self.cores[0:2], dev=vhost_vdevs % (queue, dmathr), ports=allow_pci)
+        self.launch_testpmd_as_vhost_user(eal_tx_rxd % (queue, txd_rxd, txd_rxd), self.cores[0:2], dev=vhost_vdevs % (queue), ports=allow_pci)
         for key, path_mode in dev_path_mode_mapper.items():
             if key == "vector_rx_path":
                 pvp_split_all_path_virtio_params = eal_tx_rxd % (queue, txd_rxd, txd_rxd)
@@ -262,7 +259,6 @@ class TestVirTioVhostCbdma(TestCase):
         used_cbdma_num = 8
         queue = 8
         txd_rxd = 1024
-        dmathr = 1024
         nb_cores = 1
         virtio_path = "/tmp/s0"
         path_mode = 'mrg_rxbuf=1,in_order=1'
@@ -286,14 +282,14 @@ class TestVirTioVhostCbdma(TestCase):
 
         # used 4 cbdma_num and 4 queue to launch vhost
 
-        vhost_dmas = f"dmas=[txq0@{self.used_cbdma[0]};txq1@{self.used_cbdma[1]};txq2@{self.used_cbdma[2]};txq3@{self.used_cbdma[3]}],dmathr={dmathr}"
+        vhost_dmas = f"dmas=[txq0@{self.used_cbdma[0]};txq1@{self.used_cbdma[1]};txq2@{self.used_cbdma[2]};txq3@{self.used_cbdma[3]}]"
         self.launch_testpmd_as_vhost_user(eal_params % (queue/2,queue/2), self.cores[0:2], dev=vhost_dev % (int(queue/2),vhost_dmas), ports=allow_pci[:5])
         self.send_and_verify("used_4_cbdma_num", queue_list=range(int(queue/2)))
         self.mode_list.append("used_4_cbdma_num")
         self.vhost_user.send_expect("quit", "#")
 
         #used 8 cbdma_num to launch vhost
-        vhost_dmas = f"dmas=[txq0@{self.used_cbdma[0]};txq1@{self.used_cbdma[1]};txq2@{self.used_cbdma[2]};txq3@{self.used_cbdma[3]};txq4@{self.used_cbdma[4]};txq5@{self.used_cbdma[5]};txq6@{self.used_cbdma[6]};txq7@{self.used_cbdma[7]}],dmathr={dmathr}"
+        vhost_dmas = f"dmas=[txq0@{self.used_cbdma[0]};txq1@{self.used_cbdma[1]};txq2@{self.used_cbdma[2]};txq3@{self.used_cbdma[3]};txq4@{self.used_cbdma[4]};txq5@{self.used_cbdma[5]};txq6@{self.used_cbdma[6]};txq7@{self.used_cbdma[7]}]"
         self.launch_testpmd_as_vhost_user(eal_params % (queue, queue), self.cores[0:2],
                                           dev=vhost_dev % (queue,vhost_dmas), ports=allow_pci)
         self.send_and_verify("used_8_cbdma_num", queue_list=range(queue))
@@ -317,12 +313,11 @@ class TestVirTioVhostCbdma(TestCase):
         self.test_target = self.running_case
         self.expected_throughput = self.get_suite_cfg()['expected_throughput'][self.test_target]
         txd_rxd = 1024
-        dmathr = 1024
         eal_tx_rxd = ' --nb-cores=%d --txd=%d --rxd=%d'
         queue = 1
         used_cbdma_num = 1
         self.get_cbdma_ports_info_and_bind_to_dpdk(used_cbdma_num)
-        vhost_vdevs = f"'net_vhost0,iface=/tmp/s0,queues=%d,dmas=[txq0@{self.device_str}],dmathr=%d'"
+        vhost_vdevs = f"'net_vhost0,iface=/tmp/s0,queues=%d,dmas=[txq0@{self.device_str}]'"
         dev_path_mode_mapper = {
             "inorder_mergeable_path": 'mrg_rxbuf=1,in_order=1,packed_vq=1',
             "mergeable_path": 'mrg_rxbuf=1,in_order=0,packed_vq=1',
@@ -334,7 +329,7 @@ class TestVirTioVhostCbdma(TestCase):
         allow_pci = [self.dut.ports_info[0]['pci']]
         for index in range(used_cbdma_num):
             allow_pci.append(self.cbdma_dev_infos[index])
-        self.launch_testpmd_as_vhost_user(eal_tx_rxd % (queue, txd_rxd, txd_rxd), self.cores[0:2], dev=vhost_vdevs % (queue, dmathr), ports=allow_pci)
+        self.launch_testpmd_as_vhost_user(eal_tx_rxd % (queue, txd_rxd, txd_rxd), self.cores[0:2], dev=vhost_vdevs % (queue), ports=allow_pci)
         for key, path_mode in dev_path_mode_mapper.items():
             if key == "vector_rx_path":
                 pvp_split_all_path_virtio_params = eal_tx_rxd % (queue, txd_rxd, txd_rxd)
@@ -365,12 +360,11 @@ class TestVirTioVhostCbdma(TestCase):
         used_cbdma_num = 8
         queue = 8
         txd_rxd = 1024
-        dmathr = 1024
         nb_cores = 1
         virtio_path = "/tmp/s0"
         path_mode = 'mrg_rxbuf=1,in_order=1,packed_vq=1'
         self.get_cbdma_ports_info_and_bind_to_dpdk(used_cbdma_num)
-        vhost_dmas = f"dmas=[txq0@{self.used_cbdma[0]};txq1@{self.used_cbdma[1]}],dmathr={dmathr}"
+        vhost_dmas = f"dmas=[txq0@{self.used_cbdma[0]};txq1@{self.used_cbdma[1]}]"
         eal_params = " --nb-cores=1 --txd=1024 --rxd=1024 --txq=%d --rxq=%d "
         dynamic_queue_number_cbdma_virtio_params = f"  --tx-offloads=0x0 --enable-hw-vlan-strip {eal_params % (queue, queue)}"
         virtio_dev = f"net_virtio_user0,mac={self.virtio_mac},path={virtio_path},{path_mode},queues={queue},server=1"
@@ -389,7 +383,7 @@ class TestVirTioVhostCbdma(TestCase):
         self.vhost_user.send_expect("quit", "#")
 
         # used 4 cbdma_num and 4 queue to launch vhost
-        vhost_dmas = f"dmas=[txq0@{self.used_cbdma[0]};txq1@{self.used_cbdma[1]};txq2@{self.used_cbdma[2]};txq3@{self.used_cbdma[3]}],dmathr={dmathr}"
+        vhost_dmas = f"dmas=[txq0@{self.used_cbdma[0]};txq1@{self.used_cbdma[1]};txq2@{self.used_cbdma[2]};txq3@{self.used_cbdma[3]}]"
         self.launch_testpmd_as_vhost_user(eal_params % (queue/2,queue/2), self.cores[0:2],
                 dev=vhost_dev % (int(queue/2),vhost_dmas), ports=allow_pci[:5])
         self.send_and_verify("used_4_cbdma_num", queue_list=range(int(queue/2)))
@@ -397,7 +391,7 @@ class TestVirTioVhostCbdma(TestCase):
         self.vhost_user.send_expect("quit", "#")
 
         #used 8 cbdma_num to launch vhost
-        vhost_dmas = f"dmas=[txq0@{self.used_cbdma[0]};txq1@{self.used_cbdma[1]};txq2@{self.used_cbdma[2]};txq3@{self.used_cbdma[3]};txq4@{self.used_cbdma[4]};txq5@{self.used_cbdma[5]};txq6@{self.used_cbdma[6]};txq7@{self.used_cbdma[7]}],dmathr={dmathr}"
+        vhost_dmas = f"dmas=[txq0@{self.used_cbdma[0]};txq1@{self.used_cbdma[1]};txq2@{self.used_cbdma[2]};txq3@{self.used_cbdma[3]};txq4@{self.used_cbdma[4]};txq5@{self.used_cbdma[5]};txq6@{self.used_cbdma[6]};txq7@{self.used_cbdma[7]}]"
         self.launch_testpmd_as_vhost_user(eal_params % (queue, queue), self.cores[0:2],
                                           dev=vhost_dev % (queue,vhost_dmas), ports=allow_pci)
         self.send_and_verify("used_8_cbdma_num", queue_list=range(queue))
@@ -430,13 +424,13 @@ class TestVirTioVhostCbdma(TestCase):
         for index in range(used_cbdma_num):
             allow_pci.append(self.cbdma_dev_infos[index])
         path_mode = 'mrg_rxbuf=1,in_order=1'
-        vhost_vdevs = f"'net_vhost0,iface=/tmp/s0,queues=%d,client=1,dmas=[txq0@{self.device_str}],%s'"
+        vhost_vdevs = f"'net_vhost0,iface=/tmp/s0,queues=%d,client=1,dmas=[txq0@{self.device_str}]'"
         compare_pvp_split_ring_performance = "--tx-offloads=0x0 --enable-hw-vlan-strip --nb-cores=%d --txd=%d --rxd=%d" % (queue, txd_rxd, txd_rxd)
         dev_path_mode_mapper = {
-            "sync_cbdma": ['dmathr=1024', 'dmathr=2000'],
-            "cpu": 'dmathr=0',
+            "sync_cbdma": '',
+            "cpu": '',
         }
-        for key,dma_mode in dev_path_mode_mapper.items():
+        for key in dev_path_mode_mapper.keys():
             if key == "cpu":
                 vhost_vdevs = f"'net_vhost0,iface=/tmp/s0,queues=1'"
                 self.launch_testpmd_as_vhost_user(eal_tx_rxd % (queue, txd_rxd, txd_rxd), self.cores[0:2], dev=vhost_vdevs, ports=[allow_pci[0]])
@@ -450,7 +444,7 @@ class TestVirTioVhostCbdma(TestCase):
                 self.virtio_user.send_expect("quit", "# ")
                 self.vhost_user.send_expect("quit", "# ")
             else:
-                self.launch_testpmd_as_vhost_user(eal_tx_rxd % (queue, txd_rxd, txd_rxd), self.cores[0:2],dev=vhost_vdevs % (queue, dma_mode[0]), ports=allow_pci)
+                self.launch_testpmd_as_vhost_user(eal_tx_rxd % (queue, txd_rxd, txd_rxd), self.cores[0:2],dev=vhost_vdevs % (queue), ports=allow_pci)
                 vdevs = f"'net_virtio_user0,mac={self.virtio_mac},path=/tmp/s0,{path_mode},queues=%d,server=1'" % queue
                 self.launch_testpmd_as_virtio_user(compare_pvp_split_ring_performance, self.cores[2:4],dev=vdevs)
                 mode = "sync_copy_64"
@@ -464,7 +458,7 @@ class TestVirTioVhostCbdma(TestCase):
                 self.virtio_user.send_expect('show port stats all', 'testpmd> ', 10)
                 self.vhost_user.send_expect("quit", "# ")
                 time.sleep(3)
-                self.launch_testpmd_as_vhost_user(eal_tx_rxd % (queue, txd_rxd, txd_rxd), self.cores[0:2],dev=vhost_vdevs % (queue, dma_mode[1]), ports=allow_pci)
+                self.launch_testpmd_as_vhost_user(eal_tx_rxd % (queue, txd_rxd, txd_rxd), self.cores[0:2],dev=vhost_vdevs % (queue), ports=allow_pci)
                 mode = "sync_copy_1518"
                 self.mode_list.append(mode)
                 self.send_and_verify(mode,frame_sizes=[1518],pkt_length_mode='fixed')
diff --git a/tests/TestSuite_virtio_event_idx_interrupt.py b/tests/TestSuite_virtio_event_idx_interrupt.py
index 8c6f7aa0..2a1f7b26 100644
--- a/tests/TestSuite_virtio_event_idx_interrupt.py
+++ b/tests/TestSuite_virtio_event_idx_interrupt.py
@@ -122,7 +122,7 @@ class TestVirtioIdxInterrupt(TestCase):
             device_str = self.device_str.split(" ")
             device_str.append(self.pf_pci)
             if mode:
-                vdev = ["'net_vhost,iface=%s/vhost-net,queues=%d,%s=1,dmas=[%s],dmathr=64'" % (self.base_dir, self.queues, mode, dmas)]
+                vdev = ["'net_vhost,iface=%s/vhost-net,queues=%d,%s=1,dmas=[%s]'" % (self.base_dir, self.queues, mode, dmas)]
             else:
                 vdev = ['net_vhost,iface=%s/vhost-net,queues=%d,dmas=[%s]' % (self.base_dir, self.queues, dmas)]
             eal_params = self.dut.create_eal_parameters(cores=self.core_list, prefix='vhost', ports=device_str, vdevs=vdev)
diff --git a/tests/TestSuite_vm2vm_virtio_net_perf.py b/tests/TestSuite_vm2vm_virtio_net_perf.py
index c627bd96..8c4b4ff2 100644
--- a/tests/TestSuite_vm2vm_virtio_net_perf.py
+++ b/tests/TestSuite_vm2vm_virtio_net_perf.py
@@ -142,8 +142,8 @@ class TestVM2VMVirtioNetPerf(TestCase):
                     cbdma_arg_0_list.append(item)
                 else:
                     cbdma_arg_1_list.append(item)
-            cbdma_arg_0 = ",dmas=[{}],dmathr=512".format(";".join(cbdma_arg_0_list))
-            cbdma_arg_1 = ",dmas=[{}],dmathr=512".format(";".join(cbdma_arg_1_list))
+            cbdma_arg_0 = ",dmas=[{}]".format(";".join(cbdma_arg_0_list))
+            cbdma_arg_1 = ",dmas=[{}]".format(";".join(cbdma_arg_1_list))
         else:
             cbdma_arg_0 = ""
             cbdma_arg_1 = ""
diff --git a/tests/TestSuite_vm2vm_virtio_pmd.py b/tests/TestSuite_vm2vm_virtio_pmd.py
index 5e0148b2..cbb0321c 100644
--- a/tests/TestSuite_vm2vm_virtio_pmd.py
+++ b/tests/TestSuite_vm2vm_virtio_pmd.py
@@ -646,8 +646,8 @@ class TestVM2VMVirtioPMD(TestCase):
                     cbdma_arg_0_list.append(item)
                 else:
                     cbdma_arg_1_list.append(item)
-            cbdma_arg_0 = ",dmas=[{}],dmathr=512".format(";".join(cbdma_arg_0_list))
-            cbdma_arg_1 = ",dmas=[{}],dmathr=512".format(";".join(cbdma_arg_1_list))
+            cbdma_arg_0 = ",dmas=[{}]".format(";".join(cbdma_arg_0_list))
+            cbdma_arg_1 = ",dmas=[{}]".format(";".join(cbdma_arg_1_list))
         else:
             cbdma_arg_0 = ""
             cbdma_arg_1 = ""
diff --git a/tests/TestSuite_vm2vm_virtio_user.py b/tests/TestSuite_vm2vm_virtio_user.py
index 9c6bad1b..4b868b32 100644
--- a/tests/TestSuite_vm2vm_virtio_user.py
+++ b/tests/TestSuite_vm2vm_virtio_user.py
@@ -755,8 +755,8 @@ class TestVM2VMVirtioUser(TestCase):
         # get dump pcap file of virtio
         # the virtio0 will send 283 pkts, but the virtio only will received 252 pkts
         self.logger.info('check pcap file info about virtio')
-        vdevs = f"--vdev 'eth_vhost0,iface=vhost-net0,queues=2,client=1,dmas=[txq0@{self.cbdma_dev_infos[0]};txq1@{self.cbdma_dev_infos[1]}],dmathr=64' " \
-                f"--vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[txq0@{self.cbdma_dev_infos[2]};txq1@{self.cbdma_dev_infos[3]}],dmathr=64'"
+        vdevs = f"--vdev 'eth_vhost0,iface=vhost-net0,queues=2,client=1,dmas=[txq0@{self.cbdma_dev_infos[0]};txq1@{self.cbdma_dev_infos[1]}]' " \
+                f"--vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[txq0@{self.cbdma_dev_infos[2]};txq1@{self.cbdma_dev_infos[3]}]'"
 
         self.get_dump_file_of_virtio_user_cbdma(path_mode, extern_params, ringsize, vdevs, no_pci=False, cbdma=True, pdump=False)
         self.send_251_960byte_and_32_64byte_pkts()
@@ -784,8 +784,8 @@ class TestVM2VMVirtioUser(TestCase):
         # get dump pcap file of virtio
         # the virtio0 will send 283 pkts, but the virtio only will received 252 pkts
         self.logger.info('check pcap file info about virtio')
-        vdevs = f"--vdev 'eth_vhost0,iface=vhost-net0,queues=2,client=1,dmas=[txq0@{self.cbdma_dev_infos[0]};txq1@{self.cbdma_dev_infos[1]}],dmathr=512' " \
-                f"--vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[txq0@{self.cbdma_dev_infos[2]};txq1@{self.cbdma_dev_infos[3]}],dmathr=512'"
+        vdevs = f"--vdev 'eth_vhost0,iface=vhost-net0,queues=2,client=1,dmas=[txq0@{self.cbdma_dev_infos[0]};txq1@{self.cbdma_dev_infos[1]}]' " \
+                f"--vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[txq0@{self.cbdma_dev_infos[2]};txq1@{self.cbdma_dev_infos[3]}]'"
 
         self.get_dump_file_of_virtio_user_cbdma(path_mode, extern_params, ringsize, vdevs, no_pci=False, cbdma=True, pdump=True)
         # self.get_dump_file_of_virtio_user_cbdma(path_mode, extern_params, ringsize, vdevs, no_pci=False)
@@ -814,8 +814,8 @@ class TestVM2VMVirtioUser(TestCase):
         # get dump pcap file of virtio
         # the virtio0 will send 283 pkts, but the virtio only will received 252 pkts
         self.logger.info('check pcap file info about virtio')
-        vdevs = f"--vdev 'eth_vhost0,iface=vhost-net0,queues=2,client=1,dmas=[txq0@{self.cbdma_dev_infos[0]};txq1@{self.cbdma_dev_infos[1]}],dmathr=64' " \
-                f"--vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[txq0@{self.cbdma_dev_infos[2]};txq1@{self.cbdma_dev_infos[3]}],dmathr=64'"
+        vdevs = f"--vdev 'eth_vhost0,iface=vhost-net0,queues=2,client=1,dmas=[txq0@{self.cbdma_dev_infos[0]};txq1@{self.cbdma_dev_infos[1]}]' " \
+                f"--vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[txq0@{self.cbdma_dev_infos[2]};txq1@{self.cbdma_dev_infos[3]}]'"
         self.get_dump_file_of_virtio_user_cbdma(path_mode, extern_params, ringsize, vdevs, no_pci=False, cbdma=True, pdump=False)
         self.send_251_960byte_and_32_64byte_pkts()
         # execute stop and port stop all to avoid testpmd tail_pkts issue.
@@ -842,8 +842,8 @@ class TestVM2VMVirtioUser(TestCase):
         # get dump pcap file of virtio
         # the virtio0 will send 283 pkts, but the virtio only will received 252 pkts
         self.logger.info('check pcap file info about virtio')
-        vdevs = f"--vdev 'eth_vhost0,iface=vhost-net0,queues=2,client=1,dmas=[txq0@{self.cbdma_dev_infos[0]};txq1@{self.cbdma_dev_infos[1]}],dmathr=512' " \
-                f"--vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[txq0@{self.cbdma_dev_infos[2]};txq1@{self.cbdma_dev_infos[3]}],dmathr=512'"
+        vdevs = f"--vdev 'eth_vhost0,iface=vhost-net0,queues=2,client=1,dmas=[txq0@{self.cbdma_dev_infos[0]};txq1@{self.cbdma_dev_infos[1]}]' " \
+                f"--vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[txq0@{self.cbdma_dev_infos[2]};txq1@{self.cbdma_dev_infos[3]}]'"
 
         self.get_dump_file_of_virtio_user_cbdma(path_mode, extern_params, ringsize, vdevs, no_pci=False, cbdma=True, pdump=True)
         self.send_27_4640byte_and_224_64byte_pkts()
-- 
2.33.1

