* [dts][PATCH V1 0/1] modify test plan to test vhost async
@ 2022-09-06 11:20 Xingguang He
2022-09-06 11:20 ` [dts][PATCH V1 1/1] test_plans/vm2vm_virtio_user_dsa_test_plan: modify test plan to test vhost async dequeue Xingguang He
From: Xingguang He @ 2022-09-06 11:20 UTC
To: dts; +Cc: Xingguang He
Since DPDK 22.07, vhost async dequeue is supported in both split and
packed ring, so modify vm2vm_virtio_user_dsa_test_plan to test the vhost
async dequeue feature. This suite has not been automated yet.
Xingguang He (1):
test_plans/vm2vm_virtio_user_dsa_test_plan: modify test plan to test
vhost async dequeue
.../vm2vm_virtio_user_dsa_test_plan.rst | 714 +++++++++++-------
1 file changed, 424 insertions(+), 290 deletions(-)
--
2.25.1
* [dts][PATCH V1 1/1] test_plans/vm2vm_virtio_user_dsa_test_plan: modify test plan to test vhost async dequeue
2022-09-06 11:20 [dts][PATCH V1 0/1] modify test plan to test vhost async Xingguang He
@ 2022-09-06 11:20 ` Xingguang He
2022-10-09 10:10 ` lijuan.tu
From: Xingguang He @ 2022-09-06 11:20 UTC
To: dts; +Cc: Xingguang He
Since DPDK 22.07, vhost async dequeue is supported in both split and
packed ring, so modify vm2vm_virtio_user_dsa_test_plan to test the vhost
async dequeue feature.
Signed-off-by: Xingguang He <xingguang.he@intel.com>
---
.../vm2vm_virtio_user_dsa_test_plan.rst | 714 +++++++++++-------
1 file changed, 424 insertions(+), 290 deletions(-)
diff --git a/test_plans/vm2vm_virtio_user_dsa_test_plan.rst b/test_plans/vm2vm_virtio_user_dsa_test_plan.rst
index 240a2a27..8f2f7133 100644
--- a/test_plans/vm2vm_virtio_user_dsa_test_plan.rst
+++ b/test_plans/vm2vm_virtio_user_dsa_test_plan.rst
@@ -9,8 +9,8 @@ Description
===========
Vhost asynchronous data path leverages DMA devices to offload memory copies from the CPU and it is implemented in an asynchronous way.
In addition, vhost supports M:N mapping between vrings and DMA virtual channels. Specifically, one vring can use multiple different DMA
-channels and one DMA channel can be shared by multiple vrings at the same time.Vhost enqueue operation with CBDMA channels is supported
-in both split and packed ring.
+channels and one DMA channel can be shared by multiple vrings at the same time. Since DPDK 22.07, vhost enqueue and dequeue operations with
+the DSA driver are supported in both split and packed ring.
This document provides the test plan for testing the following features when Vhost-user using asynchronous data path with
DSA driver (kernel IDXD driver and DPDK vfio-pci driver) in VM2VM virtio-user topology.
@@ -26,7 +26,7 @@ If iommu on, kernel dsa driver only can work with iova=va by program IOMMU, can'
Note:
1.When DMA devices are bound to vfio driver, VA mode is the default and recommended. For PA mode, page by page mapping may
exceed IOMMU's max capability, better to use 1G guest hugepage.
-2.DPDK local patch that about vhost pmd is needed when testing Vhost asynchronous data path with testpmd, and the suite has not yet been automated.
+2.A DPDK local patch for the vhost PMD is needed when testing the vhost asynchronous data path with testpmd.
Prerequisites
=============
@@ -39,10 +39,10 @@ General set up
--------------
1. Compile DPDK::
- # CC=gcc meson --werror -Denable_kmods=True -Dlibdir=lib -Dexamples=all --default-library=<dpdk build dir>
+ # CC=gcc meson --werror -Denable_kmods=True -Dlibdir=lib -Dexamples=all --default-library=static <dpdk build dir>
# ninja -C <dpdk build dir> -j 110
For example,
- CC=gcc meson --werror -Denable_kmods=True -Dlibdir=lib -Dexamples=all --default-library=x86_64-native-linuxapp-gcc
+ CC=gcc meson --werror -Denable_kmods=True -Dlibdir=lib -Dexamples=all --default-library=static x86_64-native-linuxapp-gcc
ninja -C x86_64-native-linuxapp-gcc -j 110
2. Get the PCI device ID and DSA device ID of DUT, for example, 0000:4f:00.1 is PCI device ID, 0000:6a:01.0 - 0000:f6:01.0 are DSA device IDs::
@@ -86,12 +86,12 @@ Common steps
2. Bind DSA devices to kernel idxd driver, and configure Work Queue (WQ)::
<dpdk dir># ./usertools/dpdk-devbind.py -b idxd <numDevices * 2>
- <dpdk dir># ./<dpdk build dir>drivers/dma/idxd/dpdk_idxd_cfg.py -q <numWq>
+ <dpdk dir># ./drivers/dma/idxd/dpdk_idxd_cfg.py -q <numWq>
.. note::
    Better to reset the WQ when you need to operate DSA devices that are bound to the idxd driver:
- <dpdk dir># ./drivers/dma/idxd/dpdk_idxd_cfg.py --reset <numDevices * 2>
+ <dpdk dir># ./drivers/dma/idxd/dpdk_idxd_cfg.py --reset <numDevices>
You can check it by 'ls /dev/dsa'
numDevices: number of devices, where 0<=numDevices<=7, corresponding to 0000:6a:01.0 - 0000:f6:01.0
numWq: Number of workqueues per DSA endpoint, where 1<=numWq<=8
@@ -99,14 +99,14 @@ Common steps
For example, bind 2 DMA devices to idxd driver and configure WQ:
<dpdk dir># ./usertools/dpdk-devbind.py -b idxd 6a:01.0 6f:01.0
- <dpdk dir># ./<dpdk build dir>drivers/dma/idxd/dpdk_idxd_cfg.py -q 1 0
- <dpdk dir># ./<dpdk build dir>drivers/dma/idxd/dpdk_idxd_cfg.py -q 4 2
- Check WQ by 'ls /dev/dsa' and can find "wq0.0 wq2.0 wq2.1 wq2.2 wq2.3"
+ <dpdk dir># ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 1 0
+ <dpdk dir># ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 4 1
+ Check WQ by 'ls /dev/dsa' and can find "wq0.0 wq1.0 wq1.1 wq1.2 wq1.3"
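The /dev/dsa node names in the check above follow a wq<device>.<queue> pattern. As a minimal sketch (the naming pattern is inferred from the example output above, not from idxd documentation), the expected node list for this 2-device example can be generated and compared against `ls /dev/dsa`:

```shell
# Build the expected WQ node list: wq<dsa_device_index>.<wq_index>.
# Mirrors the example above: 1 WQ on device 0, 4 WQs on device 1.
expected=""
for spec in 0:1 1:4; do
    dev=${spec%:*}
    nwq=${spec#*:}
    for q in $(seq 0 $((nwq - 1))); do
        expected="$expected wq$dev.$q"
    done
done
expected=${expected# }
echo "$expected"   # wq0.0 wq1.0 wq1.1 wq1.2 wq1.3
```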
-Test Case 1: VM2VM vhost-user/virtio-user split ring non-mergeable path and multi-queues payload check with dsa dpdk driver
-----------------------------------------------------------------------------------------------------------------------------
+Test Case 1: VM2VM split ring non-mergeable path and multi-queues payload check with dsa dpdk driver
+------------------------------------------------------------------------------------------------------
This case uses testpmd to test the payload is valid after packets forwarding in vhost-user/virtio-user split ring non-mergeable path
-and multi-queues when vhost uses the asynchronous enqueue operations with dsa dpdk driver. Both iova as VA and PA mode test.
+and multi-queues when vhost uses the asynchronous operations with the dsa dpdk driver. Both iova=va and iova=pa modes are tested.
1. bind 2 dsa device to vfio-pci like common step 1::
@@ -115,8 +115,8 @@ and multi-queues when vhost uses the asynchronous enqueue operations with dsa dp
2. Launch vhost by below command::
<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-2 -n 4 -a 0000:e7:01.0,max_queues=2 -a 0000:ec:01.0,max_queues=2 \
- --vdev 'eth_vhost0,iface=vhost-net0,queues=2,client=1,dmas=[txq0;txq1]' \
- --vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[txq0;txq1]' \
+ --vdev 'eth_vhost0,iface=vhost-net0,queues=2,client=1,dmas=[txq0;txq1;rxq0;rxq1]' \
+ --vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[txq0;txq1;rxq0;rxq1]' \
--iova=va -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096 --no-flush-rx --lcore-dma=[lcore2@0000:e7:01.0-q0,lcore2@0000:e7:01.0-q1]
3. Launch virtio-user1 by below command::
@@ -129,7 +129,7 @@ and multi-queues when vhost uses the asynchronous enqueue operations with dsa dp
4. Attach pdump secondary process to primary process of virtio-user1 by same file-prefix::
- <dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-pdump -v --file-prefix=virtio1 -- --pdump 'device_id=net_virtio_user1,queue=0,rx-dev=/tmp/pdump-virtio-rx.pcap,mbuf-size=8000'
+ <dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-pdump -v --file-prefix=virtio1 -- --pdump 'device_id=net_virtio_user1,queue=*,rx-dev=/tmp/pdump-virtio-rx.pcap,mbuf-size=8000'
5. Launch virtio-user0 and send packets::
@@ -158,8 +158,8 @@ and multi-queues when vhost uses the asynchronous enqueue operations with dsa dp
8. Relaunch vhost with pa mode by below command::
<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-2 -n 4 -a 0000:e7:01.0,max_queues=4 -a 0000:ec:01.0,max_queues=4 \
- --vdev 'eth_vhost0,iface=vhost-net0,queues=2,client=1,dmas=[txq0;txq1]' \
- --vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[txq0;txq1]' \
+ --vdev 'eth_vhost0,iface=vhost-net0,queues=2,client=1,dmas=[txq0;txq1;rxq0;rxq1]' \
+ --vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[txq0;txq1;rxq0;rxq1]' \
--iova=pa -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096 --no-flush-rx --lcore-dma=[lcore2@0000:e7:01.0-q3,lcore2@0000:ec:01.0-q3]
9. Rerun step 4.
@@ -169,6 +169,7 @@ and multi-queues when vhost uses the asynchronous enqueue operations with dsa dp
testpmd>set burst 1
testpmd>set txpkts 64,128,256,512
testpmd>start tx_first 27
+ testpmd>stop
testpmd>set burst 32
testpmd>start tx_first 7
testpmd>stop
@@ -178,10 +179,10 @@ and multi-queues when vhost uses the asynchronous enqueue operations with dsa dp
11. Rerun step 6.
-Test Case 2: VM2VM vhost-user/virtio-user split ring inorder non-mergeable path and multi-queues payload check with dsa dpdk driver
--------------------------------------------------------------------------------------------------------------------------------------
+Test Case 2: VM2VM split ring inorder non-mergeable path and multi-queues payload check with dsa dpdk driver
+---------------------------------------------------------------------------------------------------------------
This case uses testpmd to test the payload is valid after packets forwarding in vhost-user/virtio-user split ring inorder
-non-mergeable path and multi-queues when vhost uses the asynchronous enqueue operations with dsa dpdk driver. Both iova as VA and PA mode test.
+non-mergeable path and multi-queues when vhost uses the asynchronous operations with the dsa dpdk driver. Both iova=va and iova=pa modes are tested.
1. bind 3 dsa device to vfio-pci like common step 1::
@@ -190,8 +191,8 @@ non-mergeable path and multi-queues when vhost uses the asynchronous enqueue ope
2. Launch vhost by below command::
<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-2 -n 4 -a 0000:e7:01.0,max_queues=2 -a 0000:ec:01.0,max_queues=2 -a 0000:f1:01.0,max_queues=2 \
- --vdev 'eth_vhost0,iface=vhost-net0,queues=2,client=1,dmas=[txq0;txq1]' \
- --vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[txq0;txq1]' \
+ --vdev 'eth_vhost0,iface=vhost-net0,queues=2,client=1,dmas=[txq0;txq1;rxq0;rxq1]' \
+ --vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[txq0;txq1;rxq0;rxq1]' \
--iova=va -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096 --no-flush-rx --lcore-dma=[lcore2@0000:e7:01.0-q0,lcore2@0000:ec:01.0-q0,lcore2@0000:f1:01.0-q0]
3. Launch virtio-user1 by below command::
@@ -204,7 +205,7 @@ non-mergeable path and multi-queues when vhost uses the asynchronous enqueue ope
4. Attach pdump secondary process to primary process of virtio-user1 by same file-prefix::
- <dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-pdump -v --file-prefix=virtio1 -- --pdump 'device_id=net_virtio_user1,queue=0,rx-dev=/tmp/pdump-virtio-rx.pcap,mbuf-size=8000'
+ <dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-pdump -v --file-prefix=virtio1 -- --pdump 'device_id=net_virtio_user1,queue=*,rx-dev=/tmp/pdump-virtio-rx.pcap,mbuf-size=8000'
5. Launch virtio-user0 and send packets::
@@ -233,8 +234,8 @@ non-mergeable path and multi-queues when vhost uses the asynchronous enqueue ope
8. Relaunch vhost with pa mode by below command::
<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-2 -n 4 -a 0000:e7:01.0 -a 0000:ec:01.0,max_queues=4 -a 0000:f1:01.0,max_queues=4 -a 0000:f6:01.0,max_queues=4 \
- --vdev 'eth_vhost0,iface=vhost-net0,queues=2,client=1,dmas=[txq0;txq1]' \
- --vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[txq0;txq1]' \
+ --vdev 'eth_vhost0,iface=vhost-net0,queues=2,client=1,dmas=[txq0;txq1;rxq0;rxq1]' \
+ --vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[txq0;txq1;rxq0;rxq1]' \
--iova=pa -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096 --no-flush-rx --lcore-dma=[lcore2@0000:e7:01.0-q2,lcore2@0000:ec:01.0-q2,lcore2@0000:f1:01.0-q2]
9. Rerun step 4.
@@ -254,10 +255,10 @@ non-mergeable path and multi-queues when vhost uses the asynchronous enqueue ope
11. Rerun step 6.
-Test Case 3: VM2VM vhost-user/virtio-user split ring inorder mergeable path and multi-queues non-indirect descriptor with dsa dpdk driver
-------------------------------------------------------------------------------------------------------------------------------------------
+Test Case 3: VM2VM split ring inorder mergeable path and multi-queues test non-indirect descriptor and payload check with dsa dpdk driver
+-------------------------------------------------------------------------------------------------------------------------------------------
This case uses testpmd to test the payload is valid and non-indirect descriptor after packets forwarding in vhost-user/virtio-user
-split ring inorder mergeable path and multi-queues when vhost uses the asynchronous enqueue operations with dsa dpdk driver. Both iova as VA and PA mode test.
+split ring inorder mergeable path and multi-queues when vhost uses the asynchronous operations with the dsa dpdk driver. Both iova=va and iova=pa modes are tested.
1. bind 4 dsa device to vfio-pci like common step 1::
@@ -266,8 +267,8 @@ split ring inorder mergeable path and multi-queues when vhost uses the asynchron
2. Launch vhost by below command::
<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-2 -n 4 -a 0000:e7:01.0,max_queues=1 -a 0000:ec:01.0,max_queues=2 -a 0000:f1:01.0,max_queues=3 -a 0000:f6:01.0,max_queues=4 \
- --vdev 'eth_vhost0,iface=vhost-net0,queues=2,client=1,dmas=[txq0;txq1]' \
- --vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[txq0;txq1]' \
+ --vdev 'eth_vhost0,iface=vhost-net0,queues=2,client=1,dmas=[txq0;txq1;rxq0;rxq1]' \
+ --vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[txq0;txq1;rxq0;rxq1]' \
--iova=va -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=256 --rxd=256 --no-flush-rx --lcore-dma=[lcore2@0000:e7:01.0-q0,lcore2@0000:ec:01.0-q1,lcore2@0000:f1:01.0-q2,lcore2@0000:f6:01.0-q3]
3. Launch virtio-user1 by below command::
@@ -280,7 +281,7 @@ split ring inorder mergeable path and multi-queues when vhost uses the asynchron
4. Attach pdump secondary process to primary process of virtio-user1 by same file-prefix::
- <dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-pdump -v --file-prefix=virtio1 -- --pdump 'device_id=net_virtio_user1,queue=0,rx-dev=/tmp/pdump-virtio-rx.pcap,mbuf-size=8000'
+ <dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-pdump -v --file-prefix=virtio1 -- --pdump 'device_id=net_virtio_user1,queue=*,rx-dev=/tmp/pdump-virtio-rx.pcap,mbuf-size=8000'
5. Launch virtio-user0 and send packets::
@@ -303,16 +304,16 @@ still need one ring put header. So check 504 packets and 48128 bytes received by
7. Relaunch vhost with pa mode by below command::
<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-2 -n 4 -a 0000:e7:01.0,max_queues=1 -a 0000:ec:01.0,max_queues=2 -a 0000:f1:01.0,max_queues=3 -a 0000:f6:01.0,max_queues=4 \
- --vdev 'eth_vhost0,iface=vhost-net0,queues=2,client=1,dmas=[txq0;txq1]' \
- --vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[txq0;txq1]' \
+ --vdev 'eth_vhost0,iface=vhost-net0,queues=2,client=1,dmas=[txq0;txq1;rxq0;rxq1]' \
+ --vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[txq0;txq1;rxq0;rxq1]' \
--iova=pa -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=256 --rxd=256 --no-flush-rx --lcore-dma=[lcore2@0000:e7:01.0-q0,lcore2@0000:ec:01.0-q1,lcore2@0000:f1:01.0-q2,lcore2@0000:f6:01.0-q3]
8. Rerun step 3-6.
-Test Case 4: VM2VM vhost-user/virtio-user split ring mergeable path and multi-queues indirect descriptor with dsa dpdk driver
--------------------------------------------------------------------------------------------------------------------------------
+Test Case 4: VM2VM split ring mergeable path and multi-queues test indirect descriptor and payload check with dsa dpdk driver
+------------------------------------------------------------------------------------------------------------------------------
This case uses testpmd to test the payload is valid and indirect descriptor after packets forwarding in vhost-user/virtio-user
-split ring mergeable path and multi-queues when vhost uses the asynchronous enqueue operations with dsa dpdk driver. Both iova as VA and PA mode test.
+split ring mergeable path and multi-queues when vhost uses the asynchronous operations with the dsa dpdk driver. Both iova=va and iova=pa modes are tested.
1. bind 4 dsa device to vfio-pci like common step 1::
@@ -321,8 +322,8 @@ split ring mergeable path and multi-queues when vhost uses the asynchronous enqu
2. Launch vhost by below command::
<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-2 -n 4 -a 0000:e7:01.0,max_queues=1 -a 0000:ec:01.0,max_queues=2 -a 0000:f1:01.0,max_queues=3 -a 0000:f6:01.0,max_queues=4 \
- --vdev 'eth_vhost0,iface=vhost-net0,queues=2,client=1,dmas=[txq0;txq1]' \
- --vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[txq0;txq1]' \
+ --vdev 'eth_vhost0,iface=vhost-net0,queues=2,client=1,dmas=[txq0;txq1;rxq0;rxq1]' \
+ --vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[txq0;txq1;rxq0;rxq1]' \
--iova=va -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=256 --rxd=256 --no-flush-rx --lcore-dma=[lcore2@0000:e7:01.0-q0,lcore2@0000:ec:01.0-q1,lcore2@0000:f1:01.0-q2,lcore2@0000:f6:01.0-q3]
3. Launch virtio-user1 by below command::
@@ -335,7 +336,7 @@ split ring mergeable path and multi-queues when vhost uses the asynchronous enqu
4. Attach pdump secondary process to primary process of virtio-user1 by same file-prefix::
- <dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-pdump -v --file-prefix=virtio1 -- --pdump 'device_id=net_virtio_user1,queue=0,rx-dev=/tmp/pdump-virtio-rx.pcap,mbuf-size=8000'
+ <dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-pdump -v --file-prefix=virtio1 -- --pdump 'device_id=net_virtio_user1,queue=*,rx-dev=/tmp/pdump-virtio-rx.pcap,mbuf-size=8000'
5. Launch virtio-user0 and send packets::
@@ -364,10 +365,88 @@ So check 512 packets and 112128 bytes received by virtio-user1 and 502 packets w
8. Rerun step 3-6.
-Test Case 5: VM2VM vhost-user/virtio-user packed ring non-mergeable path and multi-queues payload check with dsa dpdk driver
---------------------------------------------------------------------------------------------------------------------------------
+Test Case 5: VM2VM split ring vectorized path and multi-queues payload check with vhost async operation and dsa dpdk driver
+------------------------------------------------------------------------------------------------------------------------------
+This case uses testpmd to check that the payload is valid after packet forwarding in the vhost-user/virtio-user split ring vectorized path
+and multi-queues when vhost uses the asynchronous operations with the dsa dpdk driver. Both iova=va and iova=pa modes are tested.
+
+1. bind 3 dsa ports to vfio-pci::
+
+   ls /dev/dsa #check wq configuration, reset if any exist
+ <dpdk dir># ./usertools/dpdk-devbind.py -u e7:01.0 ec:01.0 f1:01.0
+ <dpdk dir># ./usertools/dpdk-devbind.py -b vfio-pci e7:01.0 ec:01.0 f1:01.0
+
+2. Launch vhost by below command::
+
+ <dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-2 -n 4 -a 0000:e7:01.0 -a 0000:ec:01.0 -a 0000:f1:01.0 \
+ --vdev 'eth_vhost0,iface=vhost-net0,queues=2,client=1,dmas=[txq0;txq1;rxq0;rxq1]' \
+ --vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[txq0;txq1;rxq0;rxq1]' \
+ --iova=va -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096 --no-flush-rx --lcore-dma=[lcore2@0000:e7:01.0-q0,lcore2@0000:ec:01.0-q1,lcore2@0000:f1:01.0-q2]
+
+3. Launch virtio-user1 by below command::
+
+ <dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 7-8 --no-pci --file-prefix=virtio1 \
+ --vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=./vhost-net1,queues=2,server=1,packed_vq=0,mrg_rxbuf=0,in_order=0,vectorized=1,queue_size=4096 \
+ -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096
+ testpmd>set fwd rxonly
+ testpmd>start
+
+4. Attach pdump secondary process to primary process of virtio-user1 by same file-prefix::
+
+ <dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-pdump -v --file-prefix=virtio1 -- --pdump 'device_id=net_virtio_user1,queue=*,rx-dev=/tmp/pdump-virtio-rx.pcap,mbuf-size=8000'
+
+5. Launch virtio-user0 and send packets::
+
+ <dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
+ --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net0,queues=2,server=1,packed_vq=0,mrg_rxbuf=0,in_order=0,vectorized=1,queue_size=4096 \
+ -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096
+ testpmd>set burst 1
+ testpmd>set txpkts 64,128,256,512
+ testpmd>start tx_first 27
+ testpmd>stop
+ testpmd>set burst 32
+ testpmd>start tx_first 7
+ testpmd>stop
+ testpmd>set txpkts 64
+ testpmd>start tx_first 1
+ testpmd>stop
+
+6. Start vhost testpmd, quit pdump, and check that virtio-user1 RX-packets is 566 and RX-bytes is 486016, with 502 packets of 960 bytes and 64 packets of 64 bytes in pdump-virtio-rx.pcap.
+
+7. Clear virtio-user1 port stats::
+
+ testpmd>stop
+ testpmd>clear port stats all
+ testpmd>start
+
+8. Relaunch vhost with pa mode by below command::
+
+ <dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-2 -n 4 -a 0000:e7:01.0 -a 0000:ec:01.0 -a 0000:f1:01.0 -a 0000:f6:01.0 \
+ --vdev 'eth_vhost0,iface=vhost-net0,queues=2,client=1,dmas=[txq0;txq1;rxq0;rxq1]' \
+ --vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[txq0;txq1;rxq0;rxq1]' \
+ --iova=pa -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096 --no-flush-rx --lcore-dma=[lcore2@0000:e7:01.0-q3,lcore2@0000:ec:01.0-q4,lcore2@0000:f1:01.0-q3]
+
+9. Rerun step 4.
+
+10. Virtio-user0 send packets::
+
+ testpmd>set burst 1
+ testpmd>set txpkts 64,128,256,512
+ testpmd>start tx_first 27
+ testpmd>stop
+ testpmd>set burst 32
+ testpmd>start tx_first 7
+ testpmd>stop
+ testpmd>set txpkts 64
+ testpmd>start tx_first 1
+ testpmd>stop
+
+11. Rerun step 6.
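+
+The totals checked in step 6 (and again in step 11) can be derived from the tx_first sequence above. A sketch of the arithmetic, assuming each `start tx_first N` injects N bursts of the configured burst size on each of the 2 queues (an assumption read off this plan's settings, not a statement of testpmd semantics):

```shell
queues=2
# Phases with txpkts 64,128,256,512: each packet carries 4 segments, 960 bytes total.
big_len=$((64 + 128 + 256 + 512))                         # 960
big_pkts=$(( (27 * 1 + 7 * 32) * queues ))                # burst 1 x 27, then burst 32 x 7 -> 502
# Final phase with txpkts 64: burst 32, tx_first 1 -> 64-byte packets.
small_pkts=$(( 1 * 32 * queues ))                         # 64
total_pkts=$(( big_pkts + small_pkts ))                   # 566
total_bytes=$(( big_pkts * big_len + small_pkts * 64 ))   # 486016
echo "RX-packets: $total_pkts  RX-bytes: $total_bytes"
```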
+
+Test Case 6: VM2VM packed ring non-mergeable path and multi-queues payload check with dsa dpdk driver
+------------------------------------------------------------------------------------------------------
This case uses testpmd to test the payload is valid after packets forwarding in vhost-user/virtio-user packed ring
-non-mergeable path and multi-queues when vhost uses the asynchronous enqueue operations with dsa dpdk driver. Both iova as VA and PA mode test.
+non-mergeable path and multi-queues when vhost uses the asynchronous operations with the dsa dpdk driver. Both iova=va and iova=pa modes are tested.
1. bind 3 dsa device to vfio-pci like common step 1::
@@ -376,14 +455,13 @@ non-mergeable path and multi-queues when vhost uses the asynchronous enqueue ope
2. Launch vhost by below command::
<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-2 -n 4 -a 0000:e7:01.0 -a 0000:ec:01.0 -a 0000:f1:01.0 \
- --vdev 'eth_vhost0,iface=vhost-net0,queues=2,client=1,dmas=[txq0;txq1]' \
- --vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[txq0;txq1]' \
+ --vdev 'eth_vhost0,iface=vhost-net0,queues=2,client=1,dmas=[txq0;txq1;rxq0;rxq1]' \
+ --vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[txq0;txq1;rxq0;rxq1]' \
--iova=va -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096 --no-flush-rx --lcore-dma=[lcore2@0000:e7:01.0-q0,lcore2@0000:ec:01.0-q1,lcore2@0000:f1:01.0-q2]
3. Launch virtio-user1 by below command::
- <dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 7-8 \
- --no-pci --file-prefix=virtio1 \
+ <dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 7-8 --no-pci --file-prefix=virtio1 \
--vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=./vhost-net1,queues=2,server=1,packed_vq=1,mrg_rxbuf=0,in_order=0,queue_size=4096 \
-- -i --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096
testpmd>set fwd rxonly
@@ -391,7 +469,7 @@ non-mergeable path and multi-queues when vhost uses the asynchronous enqueue ope
4. Attach pdump secondary process to primary process of virtio-user1 by same file-prefix::
- <dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-pdump -v --file-prefix=virtio1 -- --pdump 'device_id=net_virtio_user1,queue=0,rx-dev=/tmp/pdump-virtio-rx.pcap,mbuf-size=8000'
+ <dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-pdump -v --file-prefix=virtio1 -- --pdump 'device_id=net_virtio_user1,queue=*,rx-dev=/tmp/pdump-virtio-rx.pcap,mbuf-size=8000'
5. Launch virtio-user0 and send packets::
@@ -420,8 +498,8 @@ non-mergeable path and multi-queues when vhost uses the asynchronous enqueue ope
8. Relaunch vhost with iova=pa by below command::
<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-2 -n 4 -a 0000:e7:01.0 -a 0000:ec:01.0 -a 0000:f1:01.0 \
- --vdev 'eth_vhost0,iface=vhost-net0,queues=2,client=1,dmas=[txq0;txq1]' \
- --vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[txq0;txq1]' \
+ --vdev 'eth_vhost0,iface=vhost-net0,queues=2,client=1,dmas=[txq0;txq1;rxq0;rxq1]' \
+ --vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[txq0;txq1;rxq0;rxq1]' \
--iova=pa -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096 --no-flush-rx --lcore-dma=[lcore2@0000:e7:01.0-q4,lcore2@0000:ec:01.0-q5,lcore2@0000:f1:01.0-q6]
9. Rerun step 4.
@@ -441,10 +519,10 @@ non-mergeable path and multi-queues when vhost uses the asynchronous enqueue ope
11. Rerun step 6.
-Test Case 6: VM2VM vhost-user/virtio-user packed ring inorder non-mergeable path and multi-queues payload check with dsa dpdk driver
--------------------------------------------------------------------------------------------------------------------------------------
+Test Case 7: VM2VM packed ring inorder non-mergeable path and multi-queues payload check with dsa dpdk driver
+---------------------------------------------------------------------------------------------------------------
This case uses testpmd to test the payload is valid after packets forwarding in vhost-user/virtio-user packed ring inorder
-non-mergeable path and multi-queues when vhost uses the asynchronous enqueue operations with dsa dpdk driver. Both iova as VA and PA mode test.
+non-mergeable path and multi-queues when vhost uses the asynchronous operations with the dsa dpdk driver. Both iova=va and iova=pa modes are tested.
1. bind 4 dsa device to vfio-pci like common step 1::
@@ -453,8 +531,8 @@ non-mergeable path and multi-queues when vhost uses the asynchronous enqueue ope
2. Launch vhost by below command::
<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-2 -n 4 -a 0000:e7:01.0,max_queues=2 -a 0000:ec:01.0,max_queues=2 -a 0000:f1:01.0,max_queues=2 -a 0000:f6:01.0,max_queues=2 \
- --vdev 'eth_vhost0,iface=vhost-net0,queues=2,client=1,dmas=[txq0;txq1]' \
- --vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[txq0;txq1]' \
+ --vdev 'eth_vhost0,iface=vhost-net0,queues=2,client=1,dmas=[txq0;txq1;rxq0;rxq1]' \
+ --vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[txq0;txq1;rxq0;rxq1]' \
--iova=va -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096 --no-flush-rx --lcore-dma=[lcore2@0000:e7:01.0-q0,lcore2@0000:ec:01.0-q0,lcore2@0000:f1:01.0-q1,lcore2@0000:f6:01.0-q1]
3. Launch virtio-user1 by below command::
@@ -467,7 +545,7 @@ non-mergeable path and multi-queues when vhost uses the asynchronous enqueue ope
4. Attach pdump secondary process to primary process of virtio-user1 by same file-prefix::
- <dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-pdump -v --file-prefix=virtio1 -- --pdump 'device_id=net_virtio_user1,queue=0,rx-dev=/tmp/pdump-virtio-rx.pcap,mbuf-size=8000'
+ <dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-pdump -v --file-prefix=virtio1 -- --pdump 'device_id=net_virtio_user1,queue=*,rx-dev=/tmp/pdump-virtio-rx.pcap,mbuf-size=8000'
5. Launch virtio-user0 and send packets::
@@ -483,6 +561,7 @@ non-mergeable path and multi-queues when vhost uses the asynchronous enqueue ope
testpmd>stop
testpmd>set txpkts 64
testpmd>start tx_first 1
+ testpmd>stop
6. Start vhost testpmd, quit pdump and check virtio-user1 RX-packets is 566 and RX-bytes is 486016 and 502 packets with 960 length and 64 packets with 64 length in pdump-virtio-rx.pcap.
@@ -495,15 +574,14 @@ non-mergeable path and multi-queues when vhost uses the asynchronous enqueue ope
8. Relaunch vhost with iova=pa by below command::
<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-2 -n 4 -a 0000:e7:01.0 -a 0000:ec:01.0 -a 0000:f1:01.0 -a 0000:f6:01.0 \
- --vdev 'eth_vhost0,iface=vhost-net0,queues=2,client=1,dmas=[txq0;txq1]' \
- --vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[txq0;txq1]' \
+ --vdev 'eth_vhost0,iface=vhost-net0,queues=2,client=1,dmas=[txq0;txq1;rxq0;rxq1]' \
+ --vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[txq0;txq1;rxq0;rxq1]' \
--iova=pa -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096 --no-flush-rx --lcore-dma=[lcore2@0000:e7:01.0-q5,lcore2@0000:ec:01.0-q6,lcore2@0000:f1:01.0-q5,lcore2@0000:f6:01.0-q6]
9. Rerun step 4.
10. virtio-user0 send packets::
- testpmd>stop
testpmd>set burst 1
testpmd>set txpkts 64,128,256,512
testpmd>start tx_first 27
@@ -513,13 +591,14 @@ non-mergeable path and multi-queues when vhost uses the asynchronous enqueue ope
testpmd>stop
testpmd>set txpkts 64
testpmd>start tx_first 1
+ testpmd>stop
11. Rerun step 6.
-Test Case 7: VM2VM vhost-user/virtio-user packed ring mergeable path and multi-queues payload check with dsa dpdk driver
---------------------------------------------------------------------------------------------------------------------------
+Test Case 8: VM2VM packed ring mergeable path and multi-queues payload check with dsa dpdk driver
+--------------------------------------------------------------------------------------------------
This case uses testpmd to test the payload is valid after packets forwarding in vhost-user/virtio-user packed ring
-mergeable path and multi-queues when vhost uses the asynchronous enqueue operations with dsa dpdk driver. Both iova as VA and PA mode test.
+mergeable path and multi-queues when vhost uses the asynchronous operations with the dsa dpdk driver. Both iova=va and iova=pa modes are tested.
1. bind 2 dsa device to vfio-pci like common step 1::
@@ -528,8 +607,8 @@ mergeable path and multi-queues when vhost uses the asynchronous enqueue operati
2. Launch vhost by below command::
<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-2 -n 4 -a 0000:e7:01.0,max_queues=4 -a 0000:ec:01.0,max_queues=4 \
- --vdev 'eth_vhost0,iface=vhost-net0,queues=2,client=1,dmas=[txq0;txq1]' \
- --vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[txq0;txq1]' \
+ --vdev 'eth_vhost0,iface=vhost-net0,queues=2,client=1,dmas=[txq0;txq1;rxq0;rxq1]' \
+ --vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[txq0;txq1;rxq0;rxq1]' \
--iova=va -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096 --no-flush-rx --lcore-dma=[lcore2@0000:e7:01.0-q0,lcore2@0000:e7:01.0-q1,lcore2@0000:e7:01.0-q2,lcore2@0000:ec:01.0-q0,lcore2@0000:ec:01.0-q1]
3. Launch virtio-user1 by below command::
@@ -543,7 +622,7 @@ mergeable path and multi-queues when vhost uses the asynchronous enqueue operati
4. Attach pdump secondary process to primary process of virtio-user1 by same file-prefix::
- <dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-pdump -v --file-prefix=virtio1 -- --pdump 'device_id=net_virtio_user1,queue=0,rx-dev=/tmp/pdump-virtio-rx.pcap,mbuf-size=8000'
+ <dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-pdump -v --file-prefix=virtio1 -- --pdump 'device_id=net_virtio_user1,queue=*,rx-dev=/tmp/pdump-virtio-rx.pcap,mbuf-size=8000'
5. Launch virtio-user0 and send packets::
@@ -570,8 +649,8 @@ mergeable path and multi-queues when vhost uses the asynchronous enqueue operati
8. Relaunch vhost with iova=pa by below command::
<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-2 -n 4 -a 0000:e7:01.0 -a 0000:ec:01.0 \
- --vdev 'eth_vhost0,iface=vhost-net0,queues=2,client=1,dmas=[txq0;txq1]' \
- --vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[txq0;txq1]' \
+ --vdev 'eth_vhost0,iface=vhost-net0,queues=2,client=1,dmas=[txq0;txq1;rxq0;rxq1]' \
+ --vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[txq0;txq1;rxq0;rxq1]' \
--iova=pa -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096 --no-flush-rx --lcore-dma=[lcore2@0000:e7:01.0-q6,lcore2@0000:e7:01.0-q7,lcore2@0000:ec:01.0-q2,lcore2@0000:ec:01.0-q3,lcore2@0000:ec:01.0-q4]
9. Rerun step 4.
@@ -589,23 +668,23 @@ mergeable path and multi-queues when vhost uses the asynchronous enqueue operati
11. Rerun step 6.
-Test Case 8: VM2VM vhost-user/virtio-user packed ring inorder mergeable path and multi-queues payload check with dsa dpdk driver
-----------------------------------------------------------------------------------------------------------------------------------
+Test Case 9: VM2VM packed ring inorder mergeable path and multi-queues payload check with dsa dpdk driver
+-----------------------------------------------------------------------------------------------------------
This case uses testpmd to test the payload is valid after packets forwarding in vhost-user/virtio-user packed ring inorder
-mergeable path and multi-queues when vhost uses the asynchronous enqueue operations with dsa dpdk driver. Both iova as VA and PA mode test.
+mergeable path and multi-queues when vhost uses the asynchronous operations with dsa dpdk driver. Both IOVA as VA and IOVA as PA modes are tested.
-1. bind 8 dsa device to vfio-pci like common step 1::
+1. bind 4 dsa devices to vfio-pci like common step 1::
ls /dev/dsa #check wq configure, reset if exist
- <dpdk dir># ./usertools/dpdk-devbind.py -u 6a:01.0 6f:01.0 74:01.0 79:01.0 e7:01.0 ec:01.0 f1:01.0 f6:01.0
- <dpdk dir># ./usertools/dpdk-devbind.py -b vfio-pci 6a:01.0 6f:01.0 74:01.0 79:01.0 e7:01.0 ec:01.0 f1:01.0 f6:01.0
+ <dpdk dir># ./usertools/dpdk-devbind.py -u e7:01.0 ec:01.0 f1:01.0 f6:01.0
+ <dpdk dir># ./usertools/dpdk-devbind.py -b vfio-pci e7:01.0 ec:01.0 f1:01.0 f6:01.0
2. Launch vhost by below command::
- <dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-2 -n 4 -a 0000:6a:01.0 -a 0000:6f:01.0 -a 0000:74:01.0 -a 0000:79:01.0 -a 0000:e7:01.0 -a 0000:ec:01.0 -a 0000:f1:01.0 -a 0000:f6:01.0 \
- --vdev 'eth_vhost0,iface=vhost-net0,queues=2,client=1,dmas=[txq0;txq1]' \
- --vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[txq0;txq1]' \
- --iova=va -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096 --no-flush-rx --lcore-dma=[lcore2@0000:6a:01.0-q0,lcore2@0000:6f:01.0-q1,lcore2@0000:74:01.0-q2,lcore2@0000:79:01.0-q3,lcore2@0000:e7:01.0-q4,lcore2@0000:ec:01.0-q5,lcore2@0000:f1:01.0-q6,lcore2@0000:f6:01.0-q7]
+ <dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-2 -n 4 -a 0000:e7:01.0 -a 0000:ec:01.0 -a 0000:f1:01.0 -a 0000:f6:01.0 \
+ --vdev 'eth_vhost0,iface=vhost-net0,queues=2,client=1,dmas=[txq0;txq1;rxq0;rxq1]' \
+ --vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[txq0;txq1;rxq0;rxq1]' \
+ --iova=va -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096 --no-flush-rx --lcore-dma=[lcore2@0000:e7:01.0-q4,lcore2@0000:ec:01.0-q5,lcore2@0000:f1:01.0-q6,lcore2@0000:f6:01.0-q7]
3. Launch virtio-user1 by below command::
@@ -617,7 +696,7 @@ mergeable path and multi-queues when vhost uses the asynchronous enqueue operati
4. Attach pdump secondary process to primary process of virtio-user1 by same file-prefix::
- <dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-pdump -v --file-prefix=virtio1 -- --pdump 'device_id=net_virtio_user1,queue=0,rx-dev=/tmp/pdump-virtio-rx.pcap,mbuf-size=8000'
+ <dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-pdump -v --file-prefix=virtio1 -- --pdump 'device_id=net_virtio_user1,queue=*,rx-dev=/tmp/pdump-virtio-rx.pcap,mbuf-size=8000'
5. Launch virtio-user0 and send packets::
@@ -643,10 +722,10 @@ mergeable path and multi-queues when vhost uses the asynchronous enqueue operati
8. Relaunch vhost with iova=pa by below command::
- <dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-2 -n 4 -a 0000:6a:01.0 -a 0000:6f:01.0 -a 0000:74:01.0 -a 0000:79:01.0 -a 0000:e7:01.0 -a 0000:ec:01.0 -a 0000:f1:01.0 -a 0000:f6:01.0 \
- --vdev 'eth_vhost0,iface=vhost-net0,queues=2,client=1,dmas=[txq0;txq1]' \
- --vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[txq0;txq1]' \
- --iova=pa -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096 --no-flush-rx --lcore-dma=[lcore2@0000:6a:01.0-q1,lcore2@0000:6f:01.0-q2,lcore2@0000:74:01.0-q3,lcore2@0000:79:01.0-q4,lcore2@0000:e7:01.0-q5,lcore2@0000:ec:01.0-q6,lcore2@0000:f1:01.0-q7,lcore2@0000:f6:01.0-q7]
+ <dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-2 -n 4 -a 0000:e7:01.0 -a 0000:ec:01.0 -a 0000:f1:01.0 -a 0000:f6:01.0 \
+ --vdev 'eth_vhost0,iface=vhost-net0,queues=2,client=1,dmas=[txq0;txq1;rxq0;rxq1]' \
+ --vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[txq0;txq1;rxq0;rxq1]' \
+ --iova=pa -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096 --no-flush-rx --lcore-dma=[lcore2@0000:e7:01.0-q4,lcore2@0000:ec:01.0-q5,lcore2@0000:f1:01.0-q6,lcore2@0000:f6:01.0-q7]
9. Rerun step 4.
@@ -663,22 +742,22 @@ mergeable path and multi-queues when vhost uses the asynchronous enqueue operati
11. Rerun step 6.
-Test Case 9: VM2VM vhost-user/virtio-user packed ring vectorized-tx path and multi-queues indirect descriptor with dsa dpdk driver
------------------------------------------------------------------------------------------------------------------------------------
+Test Case 10: VM2VM packed ring vectorized-tx path and multi-queues test indirect descriptor and payload check with dsa dpdk driver
+------------------------------------------------------------------------------------------------------------------------------------
This case uses testpmd to test the payload is valid and indirect descriptor after packets forwarding in vhost-user/virtio-user
-packed ring vectorized-tx path and multi-queues when vhost uses the asynchronous enqueue operations with dsa dpdk driver.
+packed ring vectorized-tx path and multi-queues when vhost uses the asynchronous operations with dsa dpdk driver.
Both iova as VA and PA mode test.
-1. bind 4 dsa device to vfio-pci like common step 1::
+1. bind 2 dsa devices to vfio-pci like common step 1::
- <dpdk dir># ./usertools/dpdk-devbind.py -b vfio-pci 6a:01.0 6f:01.0 74:01.0
+ <dpdk dir># ./usertools/dpdk-devbind.py -b vfio-pci e7:01.0 ec:01.0
2. Launch vhost by below command::
- <dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-2 -n 4 -a 0000:6a:01.0 -a 0000:6f:01.0 -a 0000:74:01.0 -a 0000:79:01.0 \
- --vdev 'eth_vhost0,iface=vhost-net,queues=2,client=1,dmas=[txq0;txq1],dma_ring_size=2048' --vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[txq0;txq1],dma_ring_size=2048' \
+ <dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-2 -n 4 -a 0000:e7:01.0,max_queues=2 -a 0000:ec:01.0,max_queues=2 \
+ --vdev 'eth_vhost0,iface=vhost-net,queues=2,client=1,dmas=[txq0;txq1;rxq0;rxq1]' --vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[txq0;txq1;rxq0;rxq1]' \
--iova=va -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=256 --rxd=256 --no-flush-rx \
- --lcore-dma=[lcore2@0000:6a:01.0-q0,lcore2@0000:6f:01.0-q1,lcore2@0000:74:01.0-q2,lcore2@0000:79:01.0-q3]
+ --lcore-dma=[lcore2@0000:e7:01.0-q0,lcore2@0000:ec:01.0-q1]
3. Launch virtio-user1 by below command::
@@ -686,19 +765,17 @@ Both iova as VA and PA mode test.
--vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=./vhost-net1,queues=2,server=1,packed_vq=1,mrg_rxbuf=1,in_order=1,vectorized=1,queue_size=256 \
-- -i --nb-cores=1 --rxq=2 --txq=2 --txd=256 --rxd=256
testpmd>set fwd rxonly
- set verbose 1
testpmd>start
4. Attach pdump secondary process to primary process by same file-prefix::
- <dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-pdump -v --file-prefix=virtio1 -- --pdump 'device_id=net_virtio_user1,queue=*,rx-dev=./pdump-virtio-rx.pcap,mbuf-size=8000'
+ <dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-pdump -v --file-prefix=virtio1 -- --pdump 'device_id=net_virtio_user1,queue=*,rx-dev=/tmp/pdump-virtio-rx.pcap,mbuf-size=8000'
5. Launch virtio-user0 and send 8k length packets::
<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --force-max-simd-bitwidth=512 --no-pci --file-prefix=virtio \
--vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,queues=2,server=1,packed_vq=1,mrg_rxbuf=1,in_order=1,vectorized=1,queue_size=256 \
-- -i --nb-cores=1 --rxq=2 --txq=2 --txd=256 --rxd=256
-
testpmd>set burst 1
testpmd>start tx_first 27
testpmd>stop
@@ -714,32 +791,32 @@ So check 512 packets and 112128 bytes received by virtio-user1 and 502 packets w
7.Relaunch vhost with iova=pa by below command::
- <dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-2 -n 4 -a 0000:6a:01.0 -a 0000:6f:01.0 -a 0000:74:01.0 -a 0000:79:01.0 \
- --vdev 'eth_vhost0,iface=vhost-net,queues=2,client=1,dmas=[txq0;txq1],dma_ring_size=2048' --vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[txq0;txq1],dma_ring_size=2048' \
+ <dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-2 -n 4 -a 0000:e7:01.0,max_queues=2 -a 0000:ec:01.0,max_queues=2 \
+ --vdev 'eth_vhost0,iface=vhost-net,queues=2,client=1,dmas=[txq0;txq1;rxq0;rxq1]' --vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[txq0;txq1;rxq0;rxq1]' \
--iova=pa -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=256 --rxd=256 --no-flush-rx \
- --lcore-dma=[lcore2@0000:6a:01.0-q1,lcore2@0000:6f:01.0-q2,lcore2@0000:74:01.0-q3,lcore2@0000:79:01.0-q4]
+ --lcore-dma=[lcore2@0000:e7:01.0-q0,lcore2@0000:ec:01.0-q1]
8. Rerun step 3-6.
-Test Case 10: VM2VM vhost-user/virtio-user split ring non-mergeable path and multi-queues payload check with dsa kernel driver
---------------------------------------------------------------------------------------------------------------------------------
+Test Case 11: VM2VM split ring non-mergeable path and multi-queues payload check with dsa kernel driver
+---------------------------------------------------------------------------------------------------------
This case uses testpmd to test the payload is valid after packets forwarding in vhost-user/virtio-user split ring
-non-mergeable path and multi-queues when vhost uses the asynchronous enqueue operations with dsa kernel driver.
+non-mergeable path and multi-queues when vhost uses the asynchronous operations with dsa kernel driver.
1. bind 1 dsa device to idxd like common step 2::
ls /dev/dsa #check wq configure, reset if exist
- <dpdk dir># ./<dpdk build dir>drivers/dma/idxd/dpdk_idxd_cfg.py --reset xx
+ <dpdk dir># ./drivers/dma/idxd/dpdk_idxd_cfg.py --reset xx
<dpdk dir># ./usertools/dpdk-devbind.py -u 6a:01.0
<dpdk dir># ./usertools/dpdk-devbind.py -b idxd 6a:01.0
- <dpdk dir># ./<dpdk build dir>drivers/dma/idxd/dpdk_idxd_cfg.py -q 4 0
+ <dpdk dir># ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 4 0
ls /dev/dsa #check wq configure success
2. Launch vhost by below command::
- <dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-2 -n 4 \
- --vdev 'eth_vhost0,iface=vhost-net0,queues=2,client=1,dmas=[txq0;txq1]' \
- --vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[txq0;txq1]' \
+ <dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-2 -n 4 --no-pci \
+ --vdev 'eth_vhost0,iface=vhost-net0,queues=2,client=1,dmas=[txq0;txq1;rxq0;rxq1]' \
+ --vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[txq0;txq1;rxq0;rxq1]' \
--iova=va -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096 --no-flush-rx --lcore-dma=[lcore2@wq0.0,lcore2@wq0.1]
3. Launch virtio-user1 by below command::
@@ -752,7 +829,7 @@ non-mergeable path and multi-queues when vhost uses the asynchronous enqueue ope
4. Attach pdump secondary process to primary process of virtio-user1 by same file-prefix::
- <dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-pdump -v --file-prefix=virtio1 -- --pdump 'device_id=net_virtio_user1,queue=0,rx-dev=/tmp/pdump-virtio-rx.pcap,mbuf-size=8000'
+ <dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-pdump -v --file-prefix=virtio1 -- --pdump 'device_id=net_virtio_user1,queue=*,rx-dev=/tmp/pdump-virtio-rx.pcap,mbuf-size=8000'
5. Launch virtio-user0 and send packets::
@@ -780,9 +857,9 @@ non-mergeable path and multi-queues when vhost uses the asynchronous enqueue ope
8. Quit and relaunch vhost with diff channel by below command::
- <dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-2 -n 4 \
- --vdev 'eth_vhost0,iface=vhost-net0,queues=2,client=1,dmas=[txq0;txq1]' \
- --vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[txq0;txq1]' \
+ <dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-2 -n 4 --no-pci \
+ --vdev 'eth_vhost0,iface=vhost-net0,queues=2,client=1,dmas=[txq0;txq1;rxq0;rxq1]' \
+ --vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[txq0;txq1;rxq0;rxq1]' \
--iova=va -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096 --no-flush-rx --lcore-dma=[lcore2@wq0.2,lcore2@wq0.3]
9. Rerun step 4.
@@ -802,27 +879,26 @@ non-mergeable path and multi-queues when vhost uses the asynchronous enqueue ope
11. Rerun step 6.
-Test Case 11: VM2VM vhost-user/virtio-user split ring inorder non-mergeable path and multi-queues payload check with dsa kernel driver
-----------------------------------------------------------------------------------------------------------------------------------------
+Test Case 12: VM2VM split ring inorder non-mergeable path and multi-queues payload check with dsa kernel driver
+----------------------------------------------------------------------------------------------------------------
This case uses testpmd to test the payload is valid after packets forwarding in vhost-user/virtio-user split ring inorder
-non-mergeable path and multi-queues when vhost uses the asynchronous enqueue operations with dsa kernel driver.
+non-mergeable path and multi-queues when vhost uses the asynchronous operations with dsa kernel driver.
-1. bind 3 dsa device to idxd like common step 2::
+1. bind 2 dsa devices to idxd like common step 2::
ls /dev/dsa #check wq configure, reset if exist
- <dpdk dir># ./usertools/dpdk-devbind.py -u 6a:01.0 6f:01.0 74:01.0
- <dpdk dir># ./usertools/dpdk-devbind.py -b idxd 6a:01.0 6f:01.0 74:01.0
- <dpdk dir># ./<dpdk build dir>drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 0
- <dpdk dir># ./<dpdk build dir>drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 2
- <dpdk dir># ./<dpdk build dir>drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 4
+ <dpdk dir># ./usertools/dpdk-devbind.py -u 6a:01.0 6f:01.0
+ <dpdk dir># ./usertools/dpdk-devbind.py -b idxd 6a:01.0 6f:01.0
+ <dpdk dir># ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 4 0
+ <dpdk dir># ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 1
ls /dev/dsa #check wq configure success
2. Launch vhost by below command::
- <dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-2 -n 4 \
- --vdev 'eth_vhost0,iface=vhost-net0,queues=2,client=1,dmas=[txq0;txq1]' \
- --vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[txq0;txq1]' \
- --iova=va -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096 --no-flush-rx --lcore-dma=[lcore2@wq0.0,lcore2@wq2.1,lcore2@wq4.2]
+ <dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-2 -n 4 --no-pci \
+ --vdev 'eth_vhost0,iface=vhost-net0,queues=2,client=1,dmas=[txq0;txq1;rxq0;rxq1]' \
+ --vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[txq0;txq1;rxq0;rxq1]' \
+ --iova=va -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096 --no-flush-rx --lcore-dma=[lcore2@wq0.0,lcore2@wq1.0,lcore2@wq1.1]
3. Launch virtio-user1 by below command::
@@ -834,7 +910,7 @@ non-mergeable path and multi-queues when vhost uses the asynchronous enqueue ope
4. Attach pdump secondary process to primary process of virtio-user1 by same file-prefix::
- <dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-pdump -v --file-prefix=virtio1 -- --pdump 'device_id=net_virtio_user1,queue=0,rx-dev=/tmp/pdump-virtio-rx.pcap,mbuf-size=8000'
+ <dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-pdump -v --file-prefix=virtio1 -- --pdump 'device_id=net_virtio_user1,queue=*,rx-dev=/tmp/pdump-virtio-rx.pcap,mbuf-size=8000'
5. Launch virtio-user0 and send packets::
@@ -862,10 +938,10 @@ non-mergeable path and multi-queues when vhost uses the asynchronous enqueue ope
8. Quit and relaunch vhost with diff channel by below command::
- <dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-2 -n 4 \
- --vdev 'eth_vhost0,iface=vhost-net0,queues=2,client=1,dmas=[txq0;txq1]' \
- --vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[txq0;txq1]' \
- --iova=va -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096 --no-flush-rx --lcore-dma=[lcore2@wq0.3,lcore2@wq2.4,lcore2@wq4.5]
+ <dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-2 -n 4 --no-pci \
+ --vdev 'eth_vhost0,iface=vhost-net0,queues=2,client=1,dmas=[txq0;rxq1]' \
+ --vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[txq0;rxq1]' \
+ --iova=va -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096 --no-flush-rx --lcore-dma=[lcore2@wq0.3,lcore2@wq1.4,lcore2@wq1.5]
9. Rerun step 4.
@@ -884,28 +960,25 @@ non-mergeable path and multi-queues when vhost uses the asynchronous enqueue ope
11. Rerun step 6.
-Test Case 12: VM2VM vhost-user/virtio-user split ring inorder mergeable path and multi-queues non-indirect descriptor with dsa kernel driver
+Test Case 13: VM2VM split ring inorder mergeable path and multi-queues test non-indirect descriptor and payload check with dsa kernel driver
---------------------------------------------------------------------------------------------------------------------------------------------
This case uses testpmd to test the payload is valid and non-indirect descriptor after packets forwarding in vhost-user/virtio-user
-split ring inorder mergeable path and multi-queues when vhost uses the asynchronous enqueue operations with dsa kernel driver.
+split ring inorder mergeable path and multi-queues when vhost uses the asynchronous operations with dsa kernel driver.
-1. bind 4 dsa device to idxd like common step 2::
+1. bind 1 dsa device to idxd like common step 2::
ls /dev/dsa #check wq configure, reset if exist
- <dpdk dir># ./usertools/dpdk-devbind.py -u 6a:01.0 6f:01.0 74:01.0 79:01.0
- <dpdk dir># ./usertools/dpdk-devbind.py -b idxd 6a:01.0 6f:01.0 74:01.0 79:01.0
- <dpdk dir># ./<dpdk build dir>drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 0
- <dpdk dir># ./<dpdk build dir>drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 2
- <dpdk dir># ./<dpdk build dir>drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 4
- <dpdk dir># ./<dpdk build dir>drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 6
+ <dpdk dir># ./usertools/dpdk-devbind.py -u 6a:01.0
+ <dpdk dir># ./usertools/dpdk-devbind.py -b idxd 6a:01.0
+ <dpdk dir># ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 4 0
ls /dev/dsa #check wq configure success
2. Launch vhost by below command::
- <dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-2 -n 4 \
- --vdev 'eth_vhost0,iface=vhost-net0,queues=2,client=1,dmas=[txq0;txq1]' \
- --vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[txq0;txq1]' \
- --iova=va -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=256 --rxd=256 --no-flush-rx --lcore-dma=[lcore2@wq0.0,lcore2@wq2.1,lcore2@wq4.2,lcore2@wq6.3]
+ <dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-2 -n 4 --no-pci \
+ --vdev 'eth_vhost0,iface=vhost-net0,queues=2,client=1,dmas=[txq0;txq1;rxq0;rxq1]' \
+ --vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[txq0;txq1;rxq0;rxq1]' \
+ --iova=va -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=256 --rxd=256 --no-flush-rx --lcore-dma=[lcore2@wq0.0,lcore2@wq0.1,lcore2@wq0.2,lcore2@wq0.3]
3. Launch virtio-user1 by below command::
@@ -917,7 +990,7 @@ split ring inorder mergeable path and multi-queues when vhost uses the asynchron
4. Attach pdump secondary process to primary process of virtio-user1 by same file-prefix::
- <dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-pdump -v --file-prefix=virtio1 -- --pdump 'device_id=net_virtio_user1,queue=0,rx-dev=/tmp/pdump-virtio-rx.pcap,mbuf-size=8000'
+ <dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-pdump -v --file-prefix=virtio1 -- --pdump 'device_id=net_virtio_user1,queue=*,rx-dev=/tmp/pdump-virtio-rx.pcap,mbuf-size=8000'
5. Launch virtio-user0 and send packets::
@@ -937,23 +1010,43 @@ split ring inorder mergeable path and multi-queues when vhost uses the asynchron
6. Start vhost, then quit pdump and all three testpmd sessions. The split virtqueue inorder mergeable path uses direct descriptors, so an 8K-length packet occupies 5 ring entries: the 2000,2000,2000,2000 segments need 4 consecutive entries,
plus one more entry for the header. So check that 504 packets and 48128 bytes are received by virtio-user1, and that pdump-virtio-rx.pcap contains 502 packets with 64 length and 2 packets with 8K length.
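The expected counters above follow directly from the packet mix; as a quick sanity-check sketch (illustrative only, not part of the test steps):

```shell
# Direct (non-indirect) descriptor case: only 2 of the 8000-byte packets
# get through, since each needs 5 ring entries (4 x 2000-byte segments + header).
small_pkts=502; small_len=64
large_pkts=2;   large_len=8000
total_pkts=$((small_pkts + large_pkts))
total_bytes=$((small_pkts * small_len + large_pkts * large_len))
echo "${total_pkts} packets, ${total_bytes} bytes"
```

This reproduces the 504 packets / 48128 bytes expected on virtio-user1.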
-7. Clear virtio-user1 port stats::
+Test Case 14: VM2VM split ring mergeable path and multi-queues test indirect descriptor and payload check with dsa kernel driver
+---------------------------------------------------------------------------------------------------------------------------------
+This case uses testpmd to test the payload is valid and indirect descriptor after packets forwarding in vhost-user/virtio-user
+split ring mergeable path and multi-queues when vhost uses the asynchronous operations with dsa kernel driver.
- testpmd>stop
- testpmd>clear port stats all
- testpmd>start
+1. bind 2 dsa devices to idxd like common step 2::
-8. Quit and relaunch vhost with diff channel by below command::
+ ls /dev/dsa #check wq configure, reset if exist
+ <dpdk dir># ./usertools/dpdk-devbind.py -u 6a:01.0 6f:01.0
+ <dpdk dir># ./usertools/dpdk-devbind.py -b idxd 6a:01.0 6f:01.0
+ <dpdk dir># ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 4 0
+ <dpdk dir># ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 1
+ ls /dev/dsa #check wq configure success
+
+2. Launch vhost by below command::
+
- <dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-2 -n 4 \
- --vdev 'eth_vhost0,iface=vhost-net0,queues=2,client=1,dmas=[txq0;txq1]' \
- --vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[txq0;txq1]' \
- --iova=va -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=256 --rxd=256 --no-flush-rx --lcore-dma=[lcore2@wq0.3,lcore2@wq2.4,lcore2@wq4.3,lcore2@wq6.4]
+ <dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-2 -n 4 --no-pci \
+ --vdev 'eth_vhost0,iface=vhost-net0,queues=2,client=1,dmas=[txq0;txq1;rxq0;rxq1]' \
+ --vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[txq0;txq1;rxq0;rxq1]' \
+ --iova=va -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=256 --rxd=256 --no-flush-rx --lcore-dma=[lcore2@wq0.0,lcore2@wq1.1,lcore2@wq1.2,lcore2@wq1.3]
-9. Rerun step 4.
+3. Launch virtio-user1 by below command::
-10. virtio-user0 send packets::
+ <dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 7-8 --no-pci --file-prefix=virtio1 \
+ --vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=./vhost-net1,queues=2,server=1,packed_vq=0,mrg_rxbuf=1,in_order=0,queue_size=4096 \
+ -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=256 --rxd=256
+ testpmd>set fwd rxonly
+ testpmd>start
+
+4. Attach pdump secondary process to primary process of virtio-user1 by same file-prefix::
+
+ <dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-pdump -v --file-prefix=virtio1 -- --pdump 'device_id=net_virtio_user1,queue=*,rx-dev=/tmp/pdump-virtio-rx.pcap,mbuf-size=8000'
+
+5. Launch virtio-user0 and send packets::
+
+ <dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
+ --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net0,queues=2,server=1,packed_vq=0,mrg_rxbuf=1,in_order=0,queue_size=256 \
+ -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=256 --rxd=256
testpmd>set burst 1
testpmd>start tx_first 27
testpmd>stop
@@ -964,60 +1057,59 @@ still need one ring put header. So check 504 packets and 48128 bytes received by
testpmd>start tx_first 1
testpmd>stop
-11. Rerun step 6.
+6. Start vhost, then quit pdump and all three testpmd sessions. The split virtqueue mergeable path uses indirect descriptors, so an 8K-length packet occupies just one ring entry.
+So check that 512 packets and 112128 bytes are received by virtio-user1, and that pdump-virtio-rx.pcap contains 502 packets with 64 length and 10 packets with 8K length.
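As a cross-check of the expected counters in step 6 (a sketch only, using the counts stated above), the indirect-descriptor totals work out as:

```shell
# Indirect descriptor case: all 10 of the 8000-byte packets fit,
# since each occupies a single ring entry.
small_pkts=502; small_len=64
large_pkts=10;  large_len=8000
total_pkts=$((small_pkts + large_pkts))
total_bytes=$((small_pkts * small_len + large_pkts * large_len))
echo "${total_pkts} packets, ${total_bytes} bytes"
```

This reproduces the 512 packets / 112128 bytes expected on virtio-user1.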
-Test Case 13: VM2VM vhost-user/virtio-user split ring mergeable path and multi-queues indirect descriptor with dsa kernel driver
-----------------------------------------------------------------------------------------------------------------------------------
-This case uses testpmd to test the payload is valid and indirect descriptor after packets forwarding in vhost-user/virtio-user
-split ring mergeable path and multi-queues when vhost uses the asynchronous enqueue operations with dsa kernel driver.
+Test Case 15: VM2VM split ring vectorized path and multi-queues payload check with vhost async operation and dsa kernel driver
+-------------------------------------------------------------------------------------------------------------------------------
+This case uses testpmd to test the payload is valid after packets forwarding in vhost-user/virtio-user split ring
+vectorized path and multi-queues when vhost uses the asynchronous operations with dsa kernel driver.
-1. bind 4 dsa device to idxd like common step 2::
+1. bind 2 dsa devices to idxd like common step 2::
- ls /dev/dsa #check wq configure, reset if exist
- <dpdk dir># ./usertools/dpdk-devbind.py -u 6a:01.0 6f:01.0 74:01.0 79:01.0
- <dpdk dir># ./usertools/dpdk-devbind.py -b idxd 6a:01.0 6f:01.0 74:01.0 79:01.0
- <dpdk dir># ./<dpdk build dir>drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 0
- <dpdk dir># ./<dpdk build dir>drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 2
- <dpdk dir># ./<dpdk build dir>drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 4
- <dpdk dir># ./<dpdk build dir>drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 6
+ ls /dev/dsa #check wq configure, reset if exist
+ <dpdk dir># ./usertools/dpdk-devbind.py -u 6a:01.0 6f:01.0
+ <dpdk dir># ./usertools/dpdk-devbind.py -b idxd 6a:01.0 6f:01.0
+ <dpdk dir># ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 0
+ <dpdk dir># ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 1
ls /dev/dsa #check wq configure success
2. Launch vhost by below command::
- <dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-2 -n 4 \
- --vdev 'eth_vhost0,iface=vhost-net0,queues=2,client=1,dmas=[txq0;txq1]' \
- --vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[txq0;txq1]' \
- --iova=va -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=256 --rxd=256 --no-flush-rx --lcore-dma=[lcore2@wq0.0,lcore2@wq2.1,lcore2@wq4.2,lcore2@wq6.3]
+ <dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-2 -n 4 --no-pci \
+ --vdev 'eth_vhost0,iface=vhost-net0,queues=2,client=1,dmas=[txq0;txq1;rxq0;rxq1]' \
+ --vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[txq0;txq1;rxq0;rxq1]' \
+ --iova=va -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096 --no-flush-rx --lcore-dma=[lcore2@wq0.0,lcore2@wq0.1,lcore2@wq0.2]
3. Launch virtio-user1 by below command::
- <dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 7-8 --no-pci --file-prefix=virtio1 \
- --vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=./vhost-net1,queues=2,server=1,packed_vq=0,mrg_rxbuf=1,in_order=0,queue_size=4096 \
- -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=256 --rxd=256
+ <dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 7-8 --no-pci --file-prefix=virtio1 \
+ --vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=./vhost-net1,queues=2,server=1,packed_vq=0,mrg_rxbuf=0,in_order=0,vectorized=1,queue_size=4096 \
+ -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096
testpmd>set fwd rxonly
testpmd>start
4. Attach pdump secondary process to primary process of virtio-user1 by same file-prefix::
- <dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-pdump -v --file-prefix=virtio1 -- --pdump 'device_id=net_virtio_user1,queue=0,rx-dev=/tmp/pdump-virtio-rx.pcap,mbuf-size=8000'
+ <dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-pdump -v --file-prefix=virtio1 -- --pdump 'device_id=net_virtio_user1,queue=*,rx-dev=/tmp/pdump-virtio-rx.pcap,mbuf-size=8000'
5. Launch virtio-user0 and send packets::
- <dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
- --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net0,queues=2,server=1,packed_vq=0,mrg_rxbuf=1,in_order=0,queue_size=256 \
- -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=256 --rxd=256
+ <dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
+ --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net0,queues=2,server=1,packed_vq=0,mrg_rxbuf=0,in_order=1,vectorized=1,queue_size=4096 \
+ -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096
testpmd>set burst 1
+ testpmd>set txpkts 64,128,256,512
testpmd>start tx_first 27
testpmd>stop
testpmd>set burst 32
testpmd>start tx_first 7
testpmd>stop
- testpmd>set txpkts 2000,2000,2000,2000
+ testpmd>set txpkts 64
testpmd>start tx_first 1
testpmd>stop
-6. Start vhost, then quit pdump and three testpmd, about split virtqueue inorder mergeable path, it use the indirect descriptors, the 8k length pkt will just occupies one ring.
-So check 512 packets and 112128 bytes received by virtio-user1 and 502 packets with 64 length and 10 packets with 8K length in pdump-virtio-rx.pcap.
+6. Start vhost testpmd, then quit pdump and check that virtio-user1 RX-packets is 566 and RX-bytes is 486016, with 502 packets of 960 length and 64 packets of 64 length in pdump-virtio-rx.pcap.
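The 566 / 486016 expectation in step 6 follows from the transmit phases in step 5; as a back-of-the-envelope sketch (assuming both queues forward the same counts):

```shell
# Phases with txpkts 64,128,256,512 -> 4-segment packets of 960 bytes each.
per_queue=$((27 * 1 + 7 * 32))      # tx_first 27 @ burst 1, then tx_first 7 @ burst 32
big_pkts=$((per_queue * 2))         # two queues
big_len=$((64 + 128 + 256 + 512))
# Final phase with txpkts 64 -> single 64-byte packets, tx_first 1 @ burst 32.
small_pkts=$((1 * 32 * 2))
total_pkts=$((big_pkts + small_pkts))
total_bytes=$((big_pkts * big_len + small_pkts * 64))
echo "${total_pkts} packets, ${total_bytes} bytes"
```

This reproduces 502 packets of 960 length plus 64 packets of 64 length, i.e. 566 packets and 486016 bytes.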
7. Clear virtio-user1 port stats::
@@ -1027,48 +1119,48 @@ So check 512 packets and 112128 bytes received by virtio-user1 and 502 packets w
8. Quit and relaunch vhost with diff channel by below command::
- <dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-2 -n 4 \
- --vdev 'eth_vhost0,iface=vhost-net0,queues=2,client=1,dmas=[txq0;txq1]' \
- --vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[txq0;txq1]' \
- --iova=va -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=256 --rxd=256 --no-flush-rx --lcore-dma=[lcore2@wq0.3,lcore2@wq2.4,lcore2@wq4.3,lcore2@wq6.4]
+ <dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-2 -n 4 --no-pci \
+ --vdev 'eth_vhost0,iface=vhost-net0,queues=2,client=1,dmas=[txq0;rxq1]' \
+ --vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[txq0;rxq1]' \
+ --iova=va -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096 --no-flush-rx --lcore-dma=[lcore2@wq0.3,lcore2@wq1.0,lcore2@wq1.1]
9. Rerun step 4.
-10. Virtio-user0 send packets::
+10. virtio-user0 send packets::
testpmd>set burst 1
+ testpmd>set txpkts 64,128,256,512
testpmd>start tx_first 27
testpmd>stop
testpmd>set burst 32
testpmd>start tx_first 7
testpmd>stop
- testpmd>set txpkts 2000,2000,2000,2000
+ testpmd>set txpkts 64
testpmd>start tx_first 1
testpmd>stop
11. Rerun step 6.
-Test Case 14: VM2VM vhost-user/virtio-user packed ring non-mergeable path and multi-queues payload check with dsa kernel driver
-----------------------------------------------------------------------------------------------------------------------------------
+Test Case 16: VM2VM packed ring non-mergeable path and multi-queues payload check with dsa kernel driver
+---------------------------------------------------------------------------------------------------------
This case uses testpmd to test the payload is valid after packets forwarding in vhost-user/virtio-user packed ring
-non-mergeable path and multi-queues when vhost uses the asynchronous enqueue operations with dsa kernel driver.
+non-mergeable path and multi-queues when vhost uses the asynchronous operations with dsa kernel driver.
-1. bind 3 dsa device to idxd like common step 2::
+1. bind 2 dsa devices to idxd like common step 2::
ls /dev/dsa #check wq configure, reset if exist
- <dpdk dir># ./usertools/dpdk-devbind.py -u 6a:01.0 6f:01.0 74:01.0
- <dpdk dir># ./usertools/dpdk-devbind.py -b idxd 6a:01.0 6f:01.0 74:01.0
- <dpdk dir># ./<dpdk build dir>drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 0
- <dpdk dir># ./<dpdk build dir>drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 2
- <dpdk dir># ./<dpdk build dir>drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 4
+ <dpdk dir># ./usertools/dpdk-devbind.py -u 6a:01.0 6f:01.0
+ <dpdk dir># ./usertools/dpdk-devbind.py -b idxd 6a:01.0 6f:01.0
+ <dpdk dir># ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 0
+ <dpdk dir># ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 2
ls /dev/dsa #check wq configure success
2. Launch vhost by below command::
- <dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-2 -n 4 \
- --vdev 'eth_vhost0,iface=vhost-net0,queues=2,client=1,dmas=[txq0;txq1]' \
- --vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[txq0;txq1]' \
- --iova=va -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096 --no-flush-rx --lcore-dma=[lcore2@wq0.0,lcore2@wq2.1,lcore2@wq4.2]
+ <dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-2 -n 4 --no-pci \
+ --vdev 'eth_vhost0,iface=vhost-net0,queues=2,client=1,dmas=[txq0;txq1;rxq0;rxq1]' \
+ --vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[txq0;txq1;rxq0;rxq1]' \
+ --iova=va -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096 --no-flush-rx --lcore-dma=[lcore2@wq0.0,lcore2@wq0.1,lcore2@wq0.2]
3. Launch virtio-user1 by below command::
@@ -1081,7 +1173,7 @@ non-mergeable path and multi-queues when vhost uses the asynchronous enqueue ope
4. Attach pdump secondary process to primary process of virtio-user1 by same file-prefix::
- <dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-pdump -v --file-prefix=virtio1 -- --pdump 'device_id=net_virtio_user1,queue=0,rx-dev=/tmp/pdump-virtio-rx.pcap,mbuf-size=8000'
+ <dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-pdump -v --file-prefix=virtio1 -- --pdump 'device_id=net_virtio_user1,queue=*,rx-dev=/tmp/pdump-virtio-rx.pcap,mbuf-size=8000'
5. Launch virtio-user0 and send packets::
@@ -1109,10 +1201,10 @@ non-mergeable path and multi-queues when vhost uses the asynchronous enqueue ope
8. Quit and relaunch vhost with diff channel by below command::
- <dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-2 -n 4 \
+ <dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-2 -n 4 --no-pci \
--vdev 'eth_vhost0,iface=vhost-net0,queues=2,client=1,dmas=[txq0;txq1]' \
--vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[txq0;txq1]' \
- --iova=va -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096 --no-flush-rx --lcore-dma=[lcore2@wq0.4,lcore2@wq2.5,lcore2@wq4.6]
+ --iova=va -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096 --no-flush-rx --lcore-dma=[lcore2@wq0.1,lcore2@wq1.0,lcore2@wq1.1]
9. Rerun step 4.
@@ -1131,28 +1223,26 @@ non-mergeable path and multi-queues when vhost uses the asynchronous enqueue ope
11. Rerun step 6.
-Test Case 15: VM2VM vhost-user/virtio-user packed ring inorder non-mergeable path and multi-queues payload check with dsa kernel driver
-----------------------------------------------------------------------------------------------------------------------------------------
+Test Case 17: VM2VM packed ring inorder non-mergeable path and multi-queues payload check with dsa kernel driver
+------------------------------------------------------------------------------------------------------------------
This case uses testpmd to test the payload is valid after packets forwarding in vhost-user/virtio-user packed ring inorder
-non-mergeable path and multi-queues when vhost uses the asynchronous enqueue operations with dsa kernel driver.
+non-mergeable path and multi-queues when vhost uses the asynchronous operations with dsa kernel driver.
-1. bind 4 dsa device to idxd like common step 2::
+1. bind 2 dsa devices to idxd like common step 2::
ls /dev/dsa #check wq configure, reset if exist
- <dpdk dir># ./usertools/dpdk-devbind.py -u 6a:01.0 6f:01.0 74:01.0 79:01.0
- <dpdk dir># ./usertools/dpdk-devbind.py -b idxd 6a:01.0 6f:01.0 74:01.0 79:01.0
- <dpdk dir># ./<dpdk build dir>drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 0
- <dpdk dir># ./<dpdk build dir>drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 2
- <dpdk dir># ./<dpdk build dir>drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 4
- <dpdk dir># ./<dpdk build dir>drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 6
+ <dpdk dir># ./usertools/dpdk-devbind.py -u 6a:01.0 6f:01.0
+ <dpdk dir># ./usertools/dpdk-devbind.py -b idxd 6a:01.0 6f:01.0
+ <dpdk dir># ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 0
+ <dpdk dir># ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 1
ls /dev/dsa #check wq configure success
2. Launch vhost by below command::
- <dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-2 -n 4 \
- --vdev 'eth_vhost0,iface=vhost-net0,queues=2,client=1,dmas=[txq0;txq1]' \
- --vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[txq0;txq1]' \
- --iova=va -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096 --no-flush-rx --lcore-dma=[lcore2@wq0.0,lcore2@wq2.0,lcore2@wq4.1,lcore2@wq6.1]
+ <dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-2 -n 4 --no-pci \
+ --vdev 'eth_vhost0,iface=vhost-net0,queues=2,client=1,dmas=[txq0;txq1;rxq0;rxq1]' \
+ --vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[txq0;txq1;rxq0;rxq1]' \
+ --iova=va -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096 --no-flush-rx --lcore-dma=[lcore2@wq0.0,lcore2@wq0.1,lcore2@wq0.2,lcore2@wq0.3]
3. Launch virtio-user1 by below command::
@@ -1164,7 +1254,7 @@ non-mergeable path and multi-queues when vhost uses the asynchronous enqueue ope
4. Attach pdump secondary process to primary process of virtio-user1 by same file-prefix::
- <dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-pdump -v --file-prefix=virtio1 -- --pdump 'device_id=net_virtio_user1,queue=0,rx-dev=/tmp/pdump-virtio-rx.pcap,mbuf-size=8000'
+ <dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-pdump -v --file-prefix=virtio1 -- --pdump 'device_id=net_virtio_user1,queue=*,rx-dev=/tmp/pdump-virtio-rx.pcap,mbuf-size=8000'
5. Launch virtio-user0 and send packets::
@@ -1180,6 +1270,7 @@ non-mergeable path and multi-queues when vhost uses the asynchronous enqueue ope
testpmd>stop
testpmd>set txpkts 64
testpmd>start tx_first 1
+ testpmd>stop
6. Start vhost testpmd, quit pdump and check virtio-user1 RX-packets is 566 and RX-bytes is 486016 and 502 packets with 960 length and 64 packets with 64 length in pdump-virtio-rx.pcap.
@@ -1191,16 +1282,15 @@ non-mergeable path and multi-queues when vhost uses the asynchronous enqueue ope
8. Quit and relaunch vhost with diff channel by below command::
- <dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-2 -n 4 \
- --vdev 'eth_vhost0,iface=vhost-net0,queues=2,client=1,dmas=[txq0;txq1]' \
- --vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[txq0;txq1]' \
- --iova=va -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096 --no-flush-rx --lcore-dma=[lcore2@wq0.5,lcore2@wq2.6,lcore2@wq4.5,lcore2@wq6.6]
+ <dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-2 -n 4 --no-pci \
+ --vdev 'eth_vhost0,iface=vhost-net0,queues=2,client=1,dmas=[txq0;txq1;rxq0;rxq1]' \
+ --vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[txq0;txq1;rxq0;rxq1]' \
+ --iova=va -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096 --no-flush-rx --lcore-dma=[lcore2@wq0.0,lcore2@wq1.0,lcore2@wq1.1,lcore2@wq1.2]
9. Rerun step 4.
10. virtio-user0 send packets::
- testpmd>stop
testpmd>set burst 1
testpmd>set txpkts 64,128,256,512
testpmd>start tx_first 27
@@ -1213,31 +1303,30 @@ non-mergeable path and multi-queues when vhost uses the asynchronous enqueue ope
11. Rerun step 6.
-Test Case 16: VM2VM vhost-user/virtio-user packed ring mergeable path and multi-queues payload check with dsa kernel driver
------------------------------------------------------------------------------------------------------------------------------
+Test Case 18: VM2VM packed ring mergeable path and multi-queues payload check with dsa kernel driver
+-----------------------------------------------------------------------------------------------------
This case uses testpmd to test the payload is valid after packets forwarding in vhost-user/virtio-user packed ring
-mergeable path and multi-queues when vhost uses the asynchronous enqueue operations with dsa kernel driver.
+mergeable path and multi-queues when vhost uses the asynchronous operations with dsa kernel driver.
1. bind 2 dsa device to idxd::
ls /dev/dsa #check wq configure, reset if exist
<dpdk dir># ./usertools/dpdk-devbind.py -u 6a:01.0 6f:01.0
<dpdk dir># ./usertools/dpdk-devbind.py -b idxd 6a:01.0 6f:01.0
- <dpdk dir># ./<dpdk build dir>drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 0
- <dpdk dir># ./<dpdk build dir>drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 2
+ <dpdk dir># ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 0
+ <dpdk dir># ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 2
ls /dev/dsa #check wq configure success
2. Launch vhost by below command::
- <dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-2 -n 4 \
- --vdev 'eth_vhost0,iface=vhost-net0,queues=2,client=1,dmas=[txq0;txq1]' \
- --vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[txq0;txq1]' \
- --iova=va -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096 --no-flush-rx --lcore-dma=[lcore2@wq0.0,lcore2@wq0.1,lcore2@wq0.2,lcore2@wq2.3,lcore2@wq2.4]
+ <dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-2 -n 4 --no-pci \
+ --vdev 'eth_vhost0,iface=vhost-net0,queues=2,client=1,dmas=[txq0;txq1;rxq0;rxq1]' \
+ --vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[txq0;txq1;rxq0;rxq1]' \
+ --iova=va -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096 --no-flush-rx --lcore-dma=[lcore2@wq0.0,lcore2@wq0.1,lcore2@wq0.2,lcore2@wq1.3,lcore2@wq1.4]
3. Launch virtio-user1 by below command::
- <dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 7-8 \
- --no-pci --file-prefix=virtio1 \
+ <dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 7-8 --no-pci --file-prefix=virtio1 \
--vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=./vhost-net1,queues=2,server=1,packed_vq=1,mrg_rxbuf=1,in_order=0,queue_size=4096 \
-- -i --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096
testpmd>set fwd rxonly
@@ -1245,7 +1334,7 @@ mergeable path and multi-queues when vhost uses the asynchronous enqueue operati
4. Attach pdump secondary process to primary process of virtio-user1 by same file-prefix::
- <dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-pdump -v --file-prefix=virtio1 -- --pdump 'device_id=net_virtio_user1,queue=0,rx-dev=/tmp/pdump-virtio-rx.pcap,mbuf-size=8000'
+ <dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-pdump -v --file-prefix=virtio1 -- --pdump 'device_id=net_virtio_user1,queue=*,rx-dev=/tmp/pdump-virtio-rx.pcap,mbuf-size=8000'
5. Launch virtio-user0 and send packets::
@@ -1271,10 +1360,10 @@ mergeable path and multi-queues when vhost uses the asynchronous enqueue operati
8. Quit and relaunch vhost with diff channel by below command::
- <dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-2 -n 4 \
- --vdev 'eth_vhost0,iface=vhost-net0,queues=2,client=1,dmas=[txq0;txq1]' \
- --vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[txq0;txq1]' \
- --iova=va -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096 --no-flush-rx --lcore-dma=[lcore2@wq0.6,lcore2@wq0.7,lcore2@wq2.3,lcore2@wq2.4,lcore2@wq2.5]
+ <dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-2 -n 4 --no-pci \
+ --vdev 'eth_vhost0,iface=vhost-net0,queues=2,client=1,dmas=[txq0;txq1;rxq0;rxq1]' \
+ --vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[txq0;txq1;rxq0;rxq1]' \
+ --iova=va -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096 --no-flush-rx --lcore-dma=[lcore2@wq1.3,lcore2@wq1.4,lcore2@wq1.5]
9. Rerun step 4.
@@ -1291,32 +1380,26 @@ mergeable path and multi-queues when vhost uses the asynchronous enqueue operati
11. Rerun step 6.
-Test Case 17: VM2VM vhost-user/virtio-user packed ring inorder mergeable path and multi-queues payload check with dsa kernel driver
-------------------------------------------------------------------------------------------------------------------------------------
+Test Case 19: VM2VM packed ring inorder mergeable path and multi-queues payload check with dsa kernel driver
+-------------------------------------------------------------------------------------------------------------
This case uses testpmd to test the payload is valid after packets forwarding in vhost-user/virtio-user packed ring inorder
-mergeable path and multi-queues when vhost uses the asynchronous enqueue operations with dsa kernel driver.
+mergeable path and multi-queues when vhost uses the asynchronous operations with dsa kernel driver.
-1. bind 8 dsa device to idxd like common step 2::
+1. bind 2 dsa devices to idxd like common step 2::
ls /dev/dsa #check wq configure, reset if exist
- <dpdk dir># ./usertools/dpdk-devbind.py -u 6a:01.0 6f:01.0 74:01.0 79:01.0 e7:01.0 ec:01.0 f1:01.0 f6:01.0
- <dpdk dir># ./usertools/dpdk-devbind.py -b idxd 6a:01.0 6f:01.0 74:01.0 79:01.0 e7:01.0 ec:01.0 f1:01.0 f6:01.0
- <dpdk dir># ./<dpdk build dir>drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 0
- <dpdk dir># ./<dpdk build dir>drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 2
- <dpdk dir># ./<dpdk build dir>drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 4
- <dpdk dir># ./<dpdk build dir>drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 6
- <dpdk dir># ./<dpdk build dir>drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 8
- <dpdk dir># ./<dpdk build dir>drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 10
- <dpdk dir># ./<dpdk build dir>drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 12
- <dpdk dir># ./<dpdk build dir>drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 14
+ <dpdk dir># ./usertools/dpdk-devbind.py -u 6a:01.0 6f:01.0
+ <dpdk dir># ./usertools/dpdk-devbind.py -b idxd 6a:01.0 6f:01.0
+ <dpdk dir># ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 0
+ <dpdk dir># ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 1
ls /dev/dsa #check wq configure success
2. Launch vhost by below command::
- <dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-2 -n 4 \
- --vdev 'eth_vhost0,iface=vhost-net0,queues=2,client=1,dmas=[txq0;txq1]' \
- --vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[txq0;txq1]' \
- --iova=va -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096 --no-flush-rx --lcore-dma=[lcore2@wq0.0,lcore2@wq2.1,lcore2@wq4.2,lcore2@wq6.3,lcore2@wq8.4,lcore2@wq10.5,lcore2@wq12.6,lcore2@wq14.7]
+ <dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-2 -n 4 --no-pci \
+ --vdev 'eth_vhost0,iface=vhost-net0,queues=2,client=1,dmas=[txq0;txq1;rxq0;rxq1]' \
+ --vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[txq0;txq1;rxq0;rxq1]' \
+ --iova=va -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096 --no-flush-rx --lcore-dma=[lcore2@wq0.0,lcore2@wq1.0]
3. Launch virtio-user1 by below command::
@@ -1328,7 +1411,7 @@ mergeable path and multi-queues when vhost uses the asynchronous enqueue operati
4. Attach pdump secondary process to primary process of virtio-user1 by same file-prefix::
- <dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-pdump -v --file-prefix=virtio1 -- --pdump 'device_id=net_virtio_user1,queue=0,rx-dev=/tmp/pdump-virtio-rx.pcap,mbuf-size=8000'
+ <dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-pdump -v --file-prefix=virtio1 -- --pdump 'device_id=net_virtio_user1,queue=*,rx-dev=/tmp/pdump-virtio-rx.pcap,mbuf-size=8000'
5. Launch virtio-user0 and send packets::
@@ -1354,10 +1437,10 @@ mergeable path and multi-queues when vhost uses the asynchronous enqueue operati
8. Quit and relaunch vhost with diff channel by below command::
- <dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-2 -n 4 \
- --vdev 'eth_vhost0,iface=vhost-net0,queues=2,client=1,dmas=[txq0;txq1]' \
- --vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[txq0;txq1]' \
- --iova=va -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096 --no-flush-rx --lcore-dma=[lcore2@wq0.7,lcore2@wq2.6,lcore2@wq4.5,lcore2@wq6.4,lcore2@wq8.3,lcore2@wq10.2,lcore2@wq12.1,lcore2@wq14.0]
+ <dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-2 -n 4 --no-pci \
+ --vdev 'eth_vhost0,iface=vhost-net0,queues=2,client=1,dmas=[txq0;txq1;rxq0;rxq1]' \
+ --vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[txq0;txq1;rxq0;rxq1]' \
+ --iova=va -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096 --no-flush-rx --lcore-dma=[lcore2@wq0.0,lcore2@wq0.1,lcore2@wq0.2,lcore2@wq0.3]
9. Rerun step 4.
@@ -1374,28 +1457,27 @@ mergeable path and multi-queues when vhost uses the asynchronous enqueue operati
11. Rerun step 6.
-Test Case 18: VM2VM vhost-user/virtio-user packed ring vectorized-tx path and multi-queues indirect descriptor with dsa kernel driver
--------------------------------------------------------------------------------------------------------------------------------------
+Test Case 20: VM2VM packed ring vectorized-tx path and multi-queues test indirect descriptor and payload check with dsa kernel driver
+--------------------------------------------------------------------------------------------------------------------------------------
This case uses testpmd to test the payload is valid and indirect descriptor after packets forwarding in vhost-user/virtio-user
-packed ring vectorized-tx path and multi-queues when vhost uses the asynchronous enqueue operations with dsa kernel driver.
+packed ring vectorized-tx path and multi-queues when vhost uses the asynchronous operations with dsa kernel driver.
-1. bind 4 dsa device to idxd like common step 2::
+1. Bind 2 dsa devices to idxd like common step 2::
ls /dev/dsa #check wq configure, reset if exist
- <dpdk dir># ./usertools/dpdk-devbind.py -u 6a:01.0 6f:01.0 74:01.0 79:01.0
- <dpdk dir># ./usertools/dpdk-devbind.py -b idxd 6a:01.0 6f:01.0 74:01.0 79:01.0
- <dpdk dir># ./<dpdk build dir>drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 0
- <dpdk dir># ./<dpdk build dir>drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 2
- <dpdk dir># ./<dpdk build dir>drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 4
- <dpdk dir># ./<dpdk build dir>drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 6
+ <dpdk dir># ./usertools/dpdk-devbind.py -u 6a:01.0 6f:01.0
+ <dpdk dir># ./usertools/dpdk-devbind.py -b idxd 6a:01.0 6f:01.0
+ <dpdk dir># ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 0
+ <dpdk dir># ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 2
ls /dev/dsa #check wq configure success
2. Launch vhost by below command::
- <dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 10-18 -n 4 \
- --vdev 'eth_vhost0,iface=vhost-net,queues=2,client=1,dmas=[txq0;txq1],dma_ring_size=2048' --vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[txq0;txq1],dma_ring_size=2048' \
+ <dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 10-18 -n 4 --no-pci \
+ --vdev 'eth_vhost0,iface=vhost-net,queues=2,client=1,dmas=[txq0;txq1;rxq0;rxq1]' \
+ --vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[txq0;txq1;rxq0;rxq1]' \
--iova=va -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=256 --rxd=256 --no-flush-rx \
- --lcore-dma=[lcore11@wq0.0,lcore11@wq2.1,lcore11@wq4.2,lcore11@wq6.3]
+ --lcore-dma=[lcore11@wq0.0,lcore11@wq0.1,lcore11@wq1.0,lcore11@wq1.1]
3. Launch virtio-user1 by below command::
@@ -1407,7 +1489,7 @@ packed ring vectorized-tx path and multi-queues when vhost uses the asynchronous
4. Attach pdump secondary process to primary process by same file-prefix::
- <dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-pdump -v --file-prefix=virtio1 -- --pdump 'device_id=net_virtio_user1,queue=*,rx-dev=./pdump-virtio-rx.pcap,mbuf-size=8000'
+ <dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-pdump -v --file-prefix=virtio1 -- --pdump 'device_id=net_virtio_user1,queue=*,rx-dev=/tmp/pdump-virtio-rx.pcap,mbuf-size=8000'
5. Launch virtio-user0 and send 8k length packets::
@@ -1428,27 +1510,79 @@ packed ring vectorized-tx path and multi-queues when vhost uses the asynchronous
6. Start vhost, then quit pdump and all three testpmd instances. The packed virtqueue vectorized-tx path uses indirect descriptors, so an 8k-length packet occupies just one ring entry.
Check that virtio-user1 received 512 packets and 112128 bytes, and that pdump-virtio-rx.pcap contains 502 packets of 64-byte length and 10 packets of 8K length.
-Test Case 19: VM2VM vhost-user/virtio-user packed ring inorder mergeable path and multi-queues payload check with dsa dpdk and kernel driver
----------------------------------------------------------------------------------------------------------------------------------------------
+Test Case 21: VM2VM split ring mergeable path and multi-queues test indirect descriptor with dsa dpdk and kernel driver
+-------------------------------------------------------------------------------------------------------------------------
+This case uses testpmd to test the payload is valid and indirect descriptor after packets forwarding in vhost-user/virtio-user
+split ring mergeable path and multi-queues when vhost uses the asynchronous operations with dsa dpdk and kernel driver.
+
+1. bind 2 dsa ports to idxd and 2 dsa ports to vfio-pci::
+
+ ls /dev/dsa #check wq configure, reset if exist
+ <dpdk dir># ./usertools/dpdk-devbind.py -u e7:01.0 ec:01.0 f1:01.0 f6:01.0
+ <dpdk dir># ./usertools/dpdk-devbind.py -b idxd e7:01.0 ec:01.0
+ <dpdk dir># ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 1 0
+ <dpdk dir># ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 2 1
+ ls /dev/dsa #check wq configure success
+ <dpdk dir># ./usertools/dpdk-devbind.py -b vfio-pci f1:01.0 f6:01.0
+
+2. Launch vhost by below command::
+
+ ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-2 -n 4 -a 0000:f1:01.0,max_queues=1 -a 0000:f6:01.0,max_queues=1 \
+ --vdev 'eth_vhost0,iface=vhost-net0,queues=2,client=1,dmas=[txq0;txq1;rxq1]' \
+ --vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[txq1;rxq0;rxq1]' \
+ --iova=va -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=256 --rxd=256 --no-flush-rx --lcore-dma=[lcore2@wq0.0,lcore2@wq1.0,lcore2@0000:f1:01.0-q0,lcore2@0000:f6:01.0-q0]
+
+3. Launch virtio-user1 by below command::
+
+ ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 7-8 --no-pci --file-prefix=virtio1 \
+ --vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=./vhost-net1,queues=2,server=1,packed_vq=0,mrg_rxbuf=1,in_order=0,queue_size=4096 \
+ -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=256 --rxd=256
+ testpmd>set fwd rxonly
+ testpmd>start
+
+4. Attach pdump secondary process to primary process of virtio-user1 by same file-prefix::
+
+ ./x86_64-native-linuxapp-gcc/app/dpdk-pdump -v --file-prefix=virtio1 -- --pdump 'device_id=net_virtio_user1,queue=*,rx-dev=/tmp/pdump-virtio-rx.pcap,mbuf-size=8000'
+
+5. Launch virtio-user0 and send packets::
+
+ ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
+ --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net0,queues=2,server=1,packed_vq=0,mrg_rxbuf=1,in_order=0,queue_size=256 \
+ -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=256 --rxd=256
+ testpmd>set burst 1
+ testpmd>start tx_first 27
+ testpmd>stop
+ testpmd>set burst 32
+ testpmd>start tx_first 7
+ testpmd>stop
+ testpmd>set txpkts 2000,2000,2000,2000
+ testpmd>start tx_first 1
+ testpmd>stop
+
+6. Start vhost, then quit pdump and all three testpmd instances. The split virtqueue mergeable path uses indirect descriptors, so an 8k-length packet occupies just one ring entry.
+Check that virtio-user1 received 512 packets and 112128 bytes, and that pdump-virtio-rx.pcap contains 502 packets of 64-byte length and 10 packets of 8K length.
+
+Test Case 22: VM2VM packed ring inorder mergeable path and multi-queues payload check with dsa dpdk and kernel driver
+-----------------------------------------------------------------------------------------------------------------------
This case uses testpmd to test the payload is valid after packets forwarding in vhost-user/virtio-user packed ring inorder
-mergeable path and multi-queues when vhost uses the asynchronous enqueue operations with dsa dpdk and kernel driver.
+mergeable path and multi-queues when vhost uses the asynchronous operations with dsa dpdk and kernel driver.
1. bind 2 dsa devices to vfio-pci and 2 dsa ports to idxd like common step 1-2::
ls /dev/dsa #check wq configure, reset if exist
- <dpdk dir># ./usertools/dpdk-devbind.py -u 6a:01.0 6f:01.0 e7:01.0 ec:01.0
- <dpdk dir># ./usertools/dpdk-devbind.py -b idxd 6a:01.0 6f:01.0
- <dpdk dir># ./<dpdk build dir>drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 0
- <dpdk dir># ./<dpdk build dir>drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 2
+ <dpdk dir># ./usertools/dpdk-devbind.py -u e7:01.0 ec:01.0 f1:01.0 f6:01.0
+ <dpdk dir># ./usertools/dpdk-devbind.py -b idxd e7:01.0 ec:01.0
+ <dpdk dir># ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 0
+ <dpdk dir># ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 1
ls /dev/dsa #check wq configure success
- <dpdk dir># ./usertools/dpdk-devbind.py -b vfio-pci e7:01.0 ec:01.0
+ <dpdk dir># ./usertools/dpdk-devbind.py -b vfio-pci f1:01.0 f6:01.0
2. Launch vhost by below command::
- <dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-2 -n 4 -a 0000:e7:01.0 -a 0000:ec:01.0 \
- --vdev 'eth_vhost0,iface=vhost-net0,queues=2,client=1,dmas=[txq0;txq1]' \
- --vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[txq0;txq1]' \
- --iova=va -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096 --no-flush-rx --lcore-dma=[lcore2@0000:e7:01.0-q0,lcore2@0000:ec:01.0-q1,lcore2@wq0.0,lcore2@wq2.0]
+ <dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-2 -n 4 -a 0000:f1:01.0 -a 0000:f6:01.0 \
+ --vdev 'eth_vhost0,iface=vhost-net0,queues=2,client=1,dmas=[txq0;txq1;rxq0;rxq1]' \
+ --vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[txq0;txq1;rxq0;rxq1]' \
+ --iova=va -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096 --no-flush-rx --lcore-dma=[lcore2@0000:f1:01.0-q0,lcore2@0000:f6:01.0-q1,lcore2@wq0.0,lcore2@wq1.0]
3. Launch virtio-user1 by below command::
@@ -1460,7 +1594,7 @@ mergeable path and multi-queues when vhost uses the asynchronous enqueue operati
4. Attach pdump secondary process to primary process of virtio-user1 by same file-prefix::
- <dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-pdump -v --file-prefix=virtio1 -- --pdump 'device_id=net_virtio_user1,queue=0,rx-dev=/tmp/dsa-va-rx.pcap,mbuf-size=8000'
+ <dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-pdump -v --file-prefix=virtio1 -- --pdump 'device_id=net_virtio_user1,queue=*,rx-dev=/tmp/pdump-virtio-rx.pcap,mbuf-size=8000'
5. Launch virtio-user0 and send packets::
@@ -1486,10 +1620,10 @@ mergeable path and multi-queues when vhost uses the asynchronous enqueue operati
8. Quit and relaunch vhost with diff dsa channel by below command::
- <dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-2 -n 4 \
- --vdev 'eth_vhost0,iface=vhost-net0,queues=2,client=1,dmas=[txq0;txq1]' \
- --vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[txq0;txq1]' \
- --iova=va -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096 --no-flush-rx --lcore-dma=[lcore2@0000:e7:01.0-q2,lcore2@0000:e7:01.0-q5,lcore2@0000:ec:01.0-q4,lcore2@wq0.1,lcore2@wq0.3]
+ <dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-2 -n 4 -a 0000:f1:01.0 -a 0000:f6:01.0 \
+ --vdev 'eth_vhost0,iface=vhost-net0,queues=2,client=1,dmas=[txq0;txq1;rxq0;rxq1]' \
+ --vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[txq0;txq1;rxq0;rxq1]' \
+ --iova=va -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096 --no-flush-rx --lcore-dma=[lcore2@0000:f1:01.0-q2,lcore2@0000:f1:01.0-q5,lcore2@0000:f6:01.0-q4,lcore2@wq0.1,lcore2@wq0.3]
9. Rerun step 4.
--
2.25.1
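A note on automating the payload checks above (not part of the patch itself): steps like "check 502 packets with 64 length and 10 packets with 8K length in pdump-virtio-rx.pcap" are performed manually in this plan, but once the packet lengths have been extracted from the pcap they reduce to a count comparison. A minimal sketch, assuming the lengths have already been read out of pdump-virtio-rx.pcap; the helper name is hypothetical:

```python
from collections import Counter

def check_payload_counts(pkt_lengths, expected):
    """Return True if the captured packet lengths exactly match the
    expected {length: count} distribution (hypothetical helper; the
    test plan performs this check by inspecting the pcap manually)."""
    return Counter(pkt_lengths) == Counter(expected)

# Expected result for Test Cases 20/21: 502 packets of 64 bytes
# and 10 packets of 8K length, 512 packets / 112128 bytes in total.
lengths = [64] * 502 + [8000] * 10
assert check_payload_counts(lengths, {64: 502, 8000: 10})
assert len(lengths) == 512 and sum(lengths) == 112128
```

The same helper covers the other cases in the plan (e.g. 502 packets of 960 bytes plus 64 packets of 64 bytes, i.e. 566 packets / 486016 bytes) by changing the expected distribution.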
* [dts][PATCH V1 1/1] test_plans/vm2vm_virtio_user_dsa_test_plan: modify test plan to test vhost async dequeue
2022-09-06 11:20 ` [dts][PATCH V1 1/1] test_plans/vm2vm_virtio_user_dsa_test_plan: modify test plan to test vhost async dequeue Xingguang He
@ 2022-10-09 10:10 ` lijuan.tu
0 siblings, 0 replies; 3+ messages in thread
From: lijuan.tu @ 2022-10-09 10:10 UTC (permalink / raw)
To: dts, Xingguang He; +Cc: Xingguang He
On Tue, 6 Sep 2022 11:20:25 +0000, Xingguang He <xingguang.he@intel.com> wrote:
> From DPDK-22.07, vhost async dequeue is supported in both split and
> packed ring, so modify vm2vm_virtio_user_dsa_test_plan to test vhost
> async dequeue feature.
>
> Signed-off-by: Xingguang He <xingguang.he@intel.com>
Applied, thanks