From: Xingguang He <xingguang.he@intel.com>
To: dts@dpdk.org
Cc: Xingguang He <xingguang.he@intel.com>
Subject: [dts][PATCH V1 1/1] test_plans/loopback_virtio_user_server_mode_dsa_test_plan: modify test plan to test vhost async dequeue
Date: Tue, 6 Sep 2022 11:18:24 +0000
Message-Id: <20220906111824.1135920-2-xingguang.he@intel.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20220906111824.1135920-1-xingguang.he@intel.com>
References: <20220906111824.1135920-1-xingguang.he@intel.com>

From DPDK 22.07, vhost async dequeue is supported in both split and packed ring,
so modify loopback_virtio_user_server_mode_dsa_test_plan to test the vhost async
dequeue feature.

Signed-off-by: Xingguang He <xingguang.he@intel.com>
---
 ..._virtio_user_server_mode_dsa_test_plan.rst | 315 +++++++++---------
 1 file changed, 159 insertions(+), 156 deletions(-)

diff --git a/test_plans/loopback_virtio_user_server_mode_dsa_test_plan.rst b/test_plans/loopback_virtio_user_server_mode_dsa_test_plan.rst
index 8e5bdf3a..a96ce539 100644
--- a/test_plans/loopback_virtio_user_server_mode_dsa_test_plan.rst
+++ b/test_plans/loopback_virtio_user_server_mode_dsa_test_plan.rst
@@ -10,7 +10,7 @@ Description
 
 Vhost asynchronous data path leverages DMA devices to offload memory copies from the CPU
 and it is implemented in an asynchronous way. In addition, vhost supports M:N mapping between
 vrings and DMA virtual channels. Specifically, one vring can use multiple different DMA
-channels and one DMA channel can be shared by multiple vrings at the same time. Vhost enqueue operation with CBDMA channels is supported
+channels and one DMA channel can be shared by multiple vrings at the same time. Vhost enqueue and dequeue operations with CBDMA channels are supported
 in both split and packed ring.
 
 This document provides the test plan for testing the following features when Vhost-user is using asynchronous data path with
@@ -31,6 +31,8 @@ Note:
 exceed IOMMU's max capability, better to use 1G guest hugepage.
 2. DPDK local patch about vhost pmd is needed when testing Vhost asynchronous data path with testpmd, and the suite has not yet been automated.
 
+Prerequisites
+=============
 Topology
 --------
 Test flow: Vhost-user <-> Virtio-user

@@ -39,8 +41,11 @@ General set up
 --------------
 1. Compile DPDK::

-    # CC=gcc meson --werror -Denable_kmods=True -Dlibdir=lib -Dexamples=all --default-library=
+    # CC=gcc meson --werror -Denable_kmods=True -Dlibdir=lib -Dexamples=all --default-library=static <dpdk build dir>
     # ninja -C <dpdk build dir> -j 110
+    For example:
+    CC=gcc meson --werror -Denable_kmods=True -Dlibdir=lib -Dexamples=all --default-library=static x86_64-native-linuxapp-gcc
+    ninja -C x86_64-native-linuxapp-gcc -j 110

 2. Get the PCI device ID and DSA device ID of DUT, for example, 0000:4f:00.1 is PCI device ID, 0000:6a:01.0 - 0000:f6:01.0 are DSA device IDs::

@@ -82,13 +87,13 @@ Common steps

 2. Bind DSA devices to kernel idxd driver, and configure Work Queue (WQ)::

-   # ./usertools/dpdk-devbind.py -b idxd <dsa pci address>
-   # ./drivers/dma/idxd/dpdk_idxd_cfg.py -q <numWq> <numDevices>
+    # ./usertools/dpdk-devbind.py -b idxd <dsa pci address>
+    # ./drivers/dma/idxd/dpdk_idxd_cfg.py -q <numWq> <numDevices>

 .. note::

     Better to reset WQ when you need to operate DSA devices bound to the idxd driver:
-   # ./drivers/dma/idxd/dpdk_idxd_cfg.py --reset <numDevices>
+    # ./drivers/dma/idxd/dpdk_idxd_cfg.py --reset <numDevices>
     You can check it by 'ls /dev/dsa'
     numDevices: number of devices, where 0<=numDevices<=7, corresponding to 0000:6a:01.0 - 0000:f6:01.0
     numWq: Number of workqueues per DSA endpoint, where 1<=numWq<=8
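The bind-and-configure sequence above is easy to script for multi-device setups. A minimal sketch, using only the two scripts documented above; the device list and WQ count are assumed values and should be adjusted to the DUT::

    #!/bin/sh
    # bind two DSA devices to idxd and create 4 WQs on each (assumed values)
    DSA_DEVS="6a:01.0 6f:01.0"
    NUM_WQ=4
    ./usertools/dpdk-devbind.py -b idxd ${DSA_DEVS}
    idx=0
    for dev in ${DSA_DEVS}; do
        # dpdk_idxd_cfg.py takes the DSA device index, not the PCI address
        ./drivers/dma/idxd/dpdk_idxd_cfg.py -q ${NUM_WQ} ${idx}
        idx=$((idx + 1))
    done
    ls /dev/dsa    # expect wq0.0-wq0.3 and wq1.0-wq1.3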
@@ -96,24 +101,24 @@ Common steps

 For example, bind 2 DMA devices to idxd driver and configure WQ:

 # ./usertools/dpdk-devbind.py -b idxd 6a:01.0 6f:01.0
-# ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 1 0
-# ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 4 2
-Check WQ by 'ls /dev/dsa' and can find "wq0.0 wq2.0 wq2.1 wq2.2 wq2.3"
+# ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 1 0
+# ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 4 1
+Check WQ by 'ls /dev/dsa' and can find "wq0.0 wq1.0 wq1.1 wq1.2 wq1.3"

-Test Case 1: loopback split ring server mode large chain packets stress test with dsa dpdk driver
+Test Case 1: Loopback split ring server mode large chain packets stress test with dsa dpdk driver
 ---------------------------------------------------------------------------------------------------
 This is a stress test case about forwarding large chain packets in loopback vhost-user/virtio-user split ring with server mode
-when vhost uses the asynchronous enqueue operations with dsa dpdk driver. Both iova as VA and PA mode test.
+when vhost uses the asynchronous enqueue and dequeue operations with dsa dpdk driver. Both IOVA as VA and PA modes are tested.

 1. Bind 1 dsa device to vfio-pci like common step 1::

-   # ./usertools/dpdk-devbind.py -b vfio-pci f6:01.0
+    # ./usertools/dpdk-devbind.py -b vfio-pci e7:01.0

 2. Launch vhost::

-   # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-3 --file-prefix=vhost -a 0000:f6:01.0,max_queues=1 \
-   --vdev 'net_vhost0,iface=vhost-net0,queues=1,dmas=[txq0],client=1' \
-   --iova=va -- -i --nb-cores=1 --txd=1024 --rxd=1024 --mbuf-size=45535 --lcore-dma=[lcore3@0000:f6:01.0-q0]
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-3 --file-prefix=vhost -a 0000:e7:01.0,max_queues=1 \
+    --vdev 'net_vhost0,iface=vhost-net0,queues=1,dmas=[txq0;rxq0],client=1' \
+    --iova=va -- -i --nb-cores=1 --txd=1024 --rxd=1024 --mbuf-size=45535 --lcore-dma=[lcore3@0000:e7:01.0-q0]

 3. Launch virtio and start testpmd::
@@ -130,30 +135,30 @@ when vhost uses the asynchronous enqueue operations with dsa dpdk driver. Both i

 5. Stop and quit vhost testpmd and relaunch vhost with pa mode by below command::

-   # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-3 --file-prefix=vhost -a 0000:f6:01.0,max_queues=4 \
-   --vdev 'net_vhost0,iface=vhost-net0,queues=1,dmas=[txq0],client=1' \
-   --iova=pa -- -i --nb-cores=1 --txd=1024 --rxd=1024 --mbuf-size=45535 --lcore-dma=[lcore3@0000:f6:01.0-q0]
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-3 --file-prefix=vhost -a 0000:e7:01.0,max_queues=4 \
+    --vdev 'net_vhost0,iface=vhost-net0,queues=1,dmas=[txq0;rxq0],client=1' \
+    --iova=pa -- -i --nb-cores=1 --txd=1024 --rxd=1024 --mbuf-size=45535 --lcore-dma=[lcore3@0000:e7:01.0-q0]

-6. rerun step 4.
+6. Rerun step 4.

-Test Case 2: loopback packed ring server mode large chain packets stress test with dsa dpdk driver
+Test Case 2: Loopback packed ring server mode large chain packets stress test with dsa dpdk driver
 ----------------------------------------------------------------------------------------------------
 This is a stress test case about forwarding large chain packets in loopback vhost-user/virtio-user packed ring with server mode
-when vhost uses the asynchronous enqueue operations with dsa dpdk driver. Both iova as VA and PA mode test.
+when vhost uses the asynchronous operations with dsa dpdk driver. Both IOVA as VA and PA modes are tested.

 1. Bind 1 dsa port to vfio-pci as common step 1::

-   # ./usertools/dpdk-devbind.py -b vfio-pci f6:01.0
+    # ./usertools/dpdk-devbind.py -b vfio-pci f1:01.0

 2. Launch vhost::

-   # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 2-3 -n 4 -a 0000:6f:01.0,max_queues=1 \
-   --vdev 'eth_vhost0,iface=vhost-net0,queues=1,dmas=[txq0],client=1' \
-   --iova=va -- -i --nb-cores=1 --mbuf-size=45535 --lcore-dma=[lcore3@0000:6f:01.0-q0]
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 2-3 -n 4 -a 0000:f1:01.0,max_queues=1 \
+    --vdev 'eth_vhost0,iface=vhost-net0,queues=1,dmas=[txq0;rxq0],client=1' \
+    --iova=va -- -i --nb-cores=1 --mbuf-size=45535 --lcore-dma=[lcore3@0000:f1:01.0-q0]

 3. Launch virtio and start testpmd::

-   # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 5-6 -n 4 --file-prefix=testpmd0 --no-pci \
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 5-6 -n 4 --file-prefix=testpmd0 --no-pci \
     --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=./vhost-net0,queues=1,mrg_rxbuf=1,in_order=0,vectorized=1,packed_vq=1,queue_size=2048,server=1 \
     -- -i --rxq=1 --txq=1 --txd=2048 --rxd=2048 --nb-cores=1
     testpmd>start
@@ -166,35 +171,36 @@ when vhost uses the asynchronous enqueue operations with dsa dpdk driver. Both i

 5. Stop and quit vhost testpmd and relaunch vhost with pa mode by below command::

-   # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 2-3 -n 4 -a 0000:6f:01.0,max_queues=1 \
-   --vdev 'eth_vhost0,iface=vhost-net0,queues=1,dmas=[txq0],client=1' \
-   --iova=va -- -i --nb-cores=1 --mbuf-size=45535 --lcore-dma=[lcore3@0000:6f:01.0-q0]
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 2-3 -n 4 -a 0000:f1:01.0,max_queues=1 \
+    --vdev 'eth_vhost0,iface=vhost-net0,queues=1,dmas=[txq0;rxq0],client=1' \
+    --iova=pa -- -i --nb-cores=1 --mbuf-size=45535 --lcore-dma=[lcore3@0000:f1:01.0-q0]

-6. rerun step 3.
+6. Rerun step 4.

-Test Case 3: loopback split ring all path server mode and multi-queues payload check with dsa dpdk driver
+Test Case 3: Loopback split ring all path server mode and multi-queues payload check with dsa dpdk driver
 -----------------------------------------------------------------------------------------------------------
 This case tests that the payload is valid after forwarding large chain packets in loopback vhost-user/virtio-user split ring
-all path multi-queues with server mode when vhost uses the asynchronous enqueue operations with dsa dpdk driver. Both iova as VA and PA mode test.
+all path multi-queues with server mode when vhost uses the asynchronous enqueue and dequeue operations with dsa dpdk driver.
+Both IOVA as VA and PA modes are tested.

 1. Bind 3 dsa ports to vfio-pci like common step 1::

     ls /dev/dsa #check wq configure, reset if exist
-    ./usertools/dpdk-devbind.py -u 6a:01.0 6f:01.0 74:01.0
-    ./usertools/dpdk-devbind.py -b vfio-pci 6a:01.0 6f:01.0 74:01.0
+    ./usertools/dpdk-devbind.py -u e7:01.0 ec:01.0 f1:01.0
+    ./usertools/dpdk-devbind.py -b vfio-pci e7:01.0 ec:01.0 f1:01.0

 2. Launch vhost::

-   # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 10-18 -n 4 -a 0000:6a:01.0,max_queues=4 \
-   --vdev 'eth_vhost0,iface=vhost-net0,queues=8,client=1,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6]' \
-   --iova=va -- -i --nb-cores=5 --rxq=8 --txq=8 --txd=1024 --rxd=1024 \
-   --lcore-dma=[lcore11@0000:6a:01.0-q0,lcore12@0000:6a:01.0-q0,lcore13@0000:6a:01.0-q1,lcore13@0000:6a:01.0-q2,lcore14@0000:6a:01.0-q1,lcore14@0000:6a:01.0-q2,lcore15@0000:6a:01.0-q1,lcore15@0000:6a:01.0-q2]
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 10-18 -n 4 -a 0000:e7:01.0,max_queues=4 \
+    --vdev 'eth_vhost0,iface=vhost-net0,queues=8,client=1,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;rxq2;rxq3;rxq4;rxq5;rxq6;rxq7]' \
+    --iova=va -- -i --nb-cores=4 --rxq=8 --txq=8 --txd=1024 --rxd=1024 \
+    --lcore-dma=[lcore11@0000:e7:01.0-q0,lcore12@0000:e7:01.0-q0,lcore13@0000:e7:01.0-q1,lcore13@0000:e7:01.0-q2,lcore14@0000:e7:01.0-q1,lcore14@0000:e7:01.0-q2]

 3. Launch virtio-user with split ring mergeable inorder path::

-   # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 30,31 -n 4 --file-prefix=virtio-user0 --no-pci \
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 2-6 -n 4 --file-prefix=virtio-user0 --no-pci \
     --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=./vhost-net0,queues=8,mrg_rxbuf=1,in_order=1,server=1 \
-   -- -i --nb-cores=1 --rxq=8 --txq=8 --txd=1024 --rxd=1024
+    -- -i --nb-cores=4 --rxq=8 --txq=8 --txd=1024 --rxd=1024
     testpmd>set fwd csum
     testpmd>start
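The --lcore-dma parameter used in the launch commands above is where the M:N mapping between vrings and DMA virtual channels described at the top of this plan becomes concrete. A commented sketch of the token syntax; the lcore and queue numbers here are illustrative only::

    # each token is <lcore>@<DSA PCI address>-q<queue>:
    #   lcore13@0000:e7:01.0-q1  ->  worker lcore 13 may submit copies to WQ 1 of DSA e7:01.0
    # one lcore may list several queues, and one queue may appear under several lcores,
    # which is the M:N sharing the Description section refers to
    --lcore-dma=[lcore11@0000:e7:01.0-q0,lcore11@0000:e7:01.0-q1,lcore12@0000:e7:01.0-q1]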
@@ -219,9 +225,9 @@ all path multi-queues with server mode when vhost uses the asynchronous enqueue

 8. Quit and relaunch virtio with split ring mergeable path as below::

-   # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 30,31 -n 4 --file-prefix=virtio-user0 --no-pci \
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 2-6 -n 4 --file-prefix=virtio-user0 --no-pci \
     --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=./vhost-net0,queues=8,mrg_rxbuf=1,in_order=0,server=1 \
-   -- -i --nb-cores=1 --rxq=8 --txq=8 --txd=1024 --rxd=1024
+    -- -i --nb-cores=4 --rxq=8 --txq=8 --txd=1024 --rxd=1024
     testpmd>set fwd csum
     testpmd>start

@@ -229,9 +235,9 @@ all path multi-queues with server mode when vhost uses the asynchronous enqueue

 10. Quit and relaunch virtio with split ring non-mergeable path as below::

-   # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 30,31 -n 4 --file-prefix=virtio-user0 --no-pci \
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 2-6 -n 4 --file-prefix=virtio-user0 --no-pci \
     --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=./vhost-net0,queues=8,mrg_rxbuf=0,in_order=0,server=1 \
-   -- -i --enable-hw-vlan-strip --nb-cores=1 --rxq=8 --txq=8 --txd=1024 --rxd=1024
+    -- -i --enable-hw-vlan-strip --nb-cores=4 --rxq=8 --txq=8 --txd=1024 --rxd=1024
     testpmd>set fwd csum
     testpmd>start

@@ -252,9 +258,9 @@ all path multi-queues with server mode when vhost uses the asynchronous enqueue

 15. Quit and relaunch virtio with split ring inorder non-mergeable path as below::

-   # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 30,31 -n 4 --file-prefix=virtio-user0 --no-pci \
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 2-6 -n 4 --file-prefix=virtio-user0 --no-pci \
     --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=./vhost-net0,queues=8,mrg_rxbuf=0,in_order=1,server=1 \
-   -- -i --nb-cores=1 --rxq=8 --txq=8 --txd=1024 --rxd=1024
+    -- -i --nb-cores=4 --rxq=8 --txq=8 --txd=1024 --rxd=1024
     testpmd>set fwd csum
     testpmd>start

@@ -262,9 +268,9 @@ all path multi-queues with server mode when vhost uses the asynchronous enqueue

 17. Quit and relaunch virtio with split ring vectorized path as below::

-   # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 30,31 -n 4 --file-prefix=virtio-user0 --no-pci \
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 2-6 -n 4 --file-prefix=virtio-user0 --no-pci \
     --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=./vhost-net0,queues=8,mrg_rxbuf=0,in_order=0,vectorized=1,server=1 \
-   -- -i --nb-cores=1 --rxq=8 --txq=8 --txd=1024 --rxd=1024
+    -- -i --nb-cores=4 --rxq=8 --txq=8 --txd=1024 --rxd=1024
     testpmd>set fwd csum
     testpmd>start

@@ -272,45 +278,45 @@ all path multi-queues with server mode when vhost uses the asynchronous enqueue
 19. Quit and relaunch vhost with diff channel::

-   # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 10-18 -n 4 -a 0000:6a:01.0 -a 0000:6f:01.0 -a 0000:74:01.0 \
-   --vdev 'eth_vhost0,iface=vhost-net0,queues=8,client=1,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6]' \
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 10-18 -n 4 -a 0000:e7:01.0 -a 0000:ec:01.0 -a 0000:f1:01.0 \
+    --vdev 'eth_vhost0,iface=vhost-net0,queues=8,client=1,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;rxq2;rxq3;rxq4;rxq5;rxq6;rxq7]' \
     --iova=va -- -i --nb-cores=5 --rxq=8 --txq=8 --txd=1024 --rxd=1024 \
-   --lcore-dma=[lcore11@0000:6a:01.0-q0,lcore12@0000:6a:01.0-q0,lcore13@0000:6f:01.0-q1,lcore13@0000:74:01.0-q2,lcore14@0000:6f:01.0-q1,lcore14@0000:74:01.0-q2,lcore15@0000:6f:01.0-q1,lcore15@0000:74:01.0-q2]
+    --lcore-dma=[lcore11@0000:e7:01.0-q0,lcore12@0000:e7:01.0-q0,lcore13@0000:ec:01.0-q1,lcore13@0000:f1:01.0-q2,lcore14@0000:ec:01.0-q1,lcore14@0000:f1:01.0-q2,lcore15@0000:ec:01.0-q1,lcore15@0000:f1:01.0-q2]

 20. Rerun steps 11-14.

 21. Quit and relaunch vhost w/ iova=pa::

-   # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 10-18 -n 4 -a 0000:6a:01.0 \
-   --vdev 'eth_vhost0,iface=vhost-net0,queues=8,client=1,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6]' \
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 10-18 -n 4 -a 0000:e7:01.0 \
+    --vdev 'eth_vhost0,iface=vhost-net0,queues=8,client=1,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;rxq2;rxq3;rxq4;rxq5;rxq6;rxq7]' \
     --iova=pa -- -i --nb-cores=5 --rxq=8 --txq=8 --txd=1024 --rxd=1024 \
-   --lcore-dma=[lcore11@0000:6a:01.0-q0,lcore12@0000:6a:01.0-q0,lcore13@0000:6a:01.0-q1,lcore13@0000:6a:01.0-q2,lcore14@0000:6a:01.0-q1,lcore14@0000:6a:01.0-q2,lcore15@0000:6a:01.0-q1,lcore15@0000:6a:01.0-q2]
+    --lcore-dma=[lcore11@0000:e7:01.0-q0,lcore12@0000:e7:01.0-q0,lcore13@0000:e7:01.0-q1,lcore13@0000:e7:01.0-q2,lcore14@0000:e7:01.0-q1,lcore14@0000:e7:01.0-q2,lcore15@0000:e7:01.0-q1,lcore15@0000:e7:01.0-q2]

 22. Rerun steps 11-14.

-Test Case 4: loopback packed ring all path server mode and multi-queues payload check with dsa dpdk driver
+Test Case 4: Loopback packed ring all path server mode and multi-queues payload check with dsa dpdk driver
 ------------------------------------------------------------------------------------------------------------
 This case tests that the payload is valid after forwarding large chain packets in loopback vhost-user/virtio-user packed ring
-all path multi-queues with server mode when vhost uses the asynchronous enqueue operations with dsa dpdk driver. Both iova as VA and PA mode test.
+all path multi-queues with server mode when vhost uses the asynchronous enqueue and dequeue operations with dsa dpdk driver. Both IOVA as VA and PA modes are tested.

-1. bind 8 dsa port to vfio-pci like common step 1::
+1. Bind 2 dsa ports to vfio-pci like common step 1::

     ls /dev/dsa #check wq configure, reset if exist
-    ./usertools/dpdk-devbind.py -u 6a:01.0 6f:01.0 74:01.0 79:01.0 e7:01.0 ec:01.0 f1:01.0 f6:01.0
-    ./usertools/dpdk-devbind.py -b vfio-pci 6a:01.0 6f:01.0 74:01.0 79:01.0 e7:01.0 ec:01.0 f1:01.0 f6:01.0
+    ./usertools/dpdk-devbind.py -u e7:01.0 ec:01.0
+    ./usertools/dpdk-devbind.py -b vfio-pci e7:01.0 ec:01.0
 2. Launch vhost::

-   # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 10-18 -n 4 -a 0000:6a:01.0 \
-   --vdev 'eth_vhost0,iface=vhost-net0,queues=8,client=1,dmas=[txq0;txq1;txq2;txq3;txq4;txq6;txq7]' \
-   --iova=va -- -i --nb-cores=5 --rxq=8 --txq=8 --txd=1024 --rxd=1024 \
-   --lcore-dma=[lcore11@0000:6a:01.0-q0,lcore11@0000:6a:01.0-q7,lcore12@0000:6a:01.0-q1,lcore12@0000:6a:01.0-q2,lcore12@0000:6a:01.0-q3,lcore13@0000:6a:01.0-q2,lcore13@0000:6a:01.0-q3,lcore13@0000:6a:01.0-q4,lcore14@0000:6a:01.0-q2,lcore14@0000:6a:01.0-q3,lcore14@0000:6a:01.0-q4,lcore14@0000:6a:01.0-q5,lcore15@0000:6a:01.0-q0,lcore15@0000:6a:01.0-q1,lcore15@0000:6a:01.0-q2,lcore15@0000:6a:01.0-q3,lcore15@0000:6a:01.0-q4,lcore15@0000:6a:01.0-q5,lcore15@0000:6a:01.0-q6,lcore15@0000:6a:01.0-q7]
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 10-18 -n 4 -a 0000:e7:01.0,max_queues=4 \
+    --vdev 'eth_vhost0,iface=vhost-net0,queues=8,client=1,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;rxq2;rxq3;rxq4;rxq5;rxq6;rxq7]' \
+    --iova=va -- -i --nb-cores=4 --rxq=8 --txq=8 --txd=1024 --rxd=1024 \
+    --lcore-dma=[lcore11@0000:e7:01.0-q0,lcore12@0000:e7:01.0-q1,lcore13@0000:e7:01.0-q2,lcore14@0000:e7:01.0-q3]

 3. Launch virtio-user with packed ring mergeable inorder path::

-   # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 5-6 -n 4 --file-prefix=virtio-user0 --no-pci \
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 2-6 -n 4 --file-prefix=virtio-user0 --no-pci \
     --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=./vhost-net0,queues=8,mrg_rxbuf=1,in_order=1,packed_vq=1,server=1 \
-   -- -i --nb-cores=1 --rxq=8 --txq=8 --txd=1024 --rxd=1024
+    -- -i --nb-cores=4 --rxq=8 --txq=8 --txd=1024 --rxd=1024
     testpmd>set fwd csum
     testpmd>start

@@ -335,9 +341,9 @@ all path multi-queues with server mode when vhost uses the asynchronous enqueue

 8. Quit and relaunch virtio with packed ring mergeable path as below::

-   # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 5-6 -n 4 --file-prefix=virtio-user0 --no-pci \
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 2-6 -n 4 --file-prefix=virtio-user0 --no-pci \
     --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=./vhost-net0,queues=8,mrg_rxbuf=1,in_order=0,packed_vq=1,server=1 \
-   -- -i --nb-cores=1 --rxq=8 --txq=8 --txd=1024 --rxd=1024
+    -- -i --nb-cores=4 --rxq=8 --txq=8 --txd=1024 --rxd=1024
     testpmd>set fwd csum
     testpmd>start

@@ -345,9 +351,9 @@ all path multi-queues with server mode when vhost uses the asynchronous enqueue

 10. Quit and relaunch virtio with packed ring non-mergeable path as below::

-   # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 5-6 -n 4 --file-prefix=virtio-user0 --no-pci \
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 2-6 -n 4 --file-prefix=virtio-user0 --no-pci \
     --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=./vhost-net0,queues=8,mrg_rxbuf=0,in_order=0,packed_vq=1,server=1 \
-   -- -i --nb-cores=1 --rxq=8 --txq=8 --txd=1024 --rxd=1024
+    -- -i --nb-cores=4 --rxq=8 --txq=8 --txd=1024 --rxd=1024
     testpmd>set fwd csum
     testpmd>start

@@ -368,9 +374,9 @@ all path multi-queues with server mode when vhost uses the asynchronous enqueue
 15. Quit and relaunch virtio with packed ring inorder non-mergeable path as below::

-   # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 5-6 -n 4 --file-prefix=virtio-user0 --no-pci \
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 2-6 -n 4 --file-prefix=virtio-user0 --no-pci \
     --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=./vhost-net0,queues=8,mrg_rxbuf=0,in_order=1,packed_vq=1,server=1 \
-   -- -i --nb-cores=1 --rxq=8 --txq=8 --txd=1024 --rxd=1024
+    -- -i --nb-cores=4 --rxq=8 --txq=8 --txd=1024 --rxd=1024
     testpmd>set fwd csum
     testpmd>start

@@ -378,9 +384,9 @@ all path multi-queues with server mode when vhost uses the asynchronous enqueue

 17. Quit and relaunch virtio with packed ring vectorized path as below::

-   # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 5-6 -n 4 --file-prefix=virtio-user0 --no-pci --force-max-simd-bitwidth=512 \
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 2-6 -n 4 --file-prefix=virtio-user0 --no-pci --force-max-simd-bitwidth=512 \
     --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=./vhost-net0,queues=8,mrg_rxbuf=0,in_order=1,packed_vq=1,vectorized=1,server=1 \
-   -- -i --nb-cores=1 --rxq=8 --txq=8 --txd=1024 --rxd=1024
+    -- -i --nb-cores=4 --rxq=8 --txq=8 --txd=1024 --rxd=1024
     testpmd>set fwd csum
     testpmd>start

@@ -388,9 +394,9 @@ all path multi-queues with server mode when vhost uses the asynchronous enqueue

 19. Quit and relaunch virtio with packed ring vectorized path and ring size is not power of 2 as below::

-   # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 5-6 -n 4 --file-prefix=virtio-user0 --no-pci --force-max-simd-bitwidth=512 \
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 2-6 -n 4 --file-prefix=virtio-user0 --no-pci --force-max-simd-bitwidth=512 \
     --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=./vhost-net0,queues=8,mrg_rxbuf=0,in_order=1,packed_vq=1,vectorized=1,queue_size=1025,server=1 \
-   -- -i --nb-cores=1 --rxq=8 --txq=8 --txd=1025 --rxd=1025
+    -- -i --nb-cores=4 --rxq=8 --txq=8 --txd=1025 --rxd=1025
     testpmd>set fwd csum
     testpmd>start

@@ -398,39 +404,39 @@ all path multi-queues with server mode when vhost uses the asynchronous enqueue

 21. Quit and relaunch vhost with diff channel::

-   # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 10-18 -n 4 -a 0000:6a:01.0 -a 0000:6f:01.0 -a 0000:74:01.0 -a 0000:79:01.0 -a 0000:e7:01.0 -a 0000:ec:01.0 -a 0000:f1:01.0 -a 0000:f6:01.0 \
-   --vdev 'eth_vhost0,iface=vhost-net0,queues=8,client=1,dmas=[txq0;txq1;txq2;txq3;txq4;txq6;txq7]' \
-   --iova=va -- -i --nb-cores=5 --rxq=8 --txq=8 --txd=1024 --rxd=1024 \
-   --lcore-dma=[lcore11@0000:6a:01.0-q0,lcore11@0000:f6:01.0-q7,lcore12@0000:6f:01.0-q1,lcore12@0000:74:01.0-q2,lcore12@0000:79:01.0-q3,lcore13@0000:74:01.0-q2,lcore13@0000:79:01.0-q3,lcore13@0000:e7:01.0-q4,lcore14@0000:74:01.0-q2,lcore14@0000:79:01.0-q3,lcore14@0000:e7:01.0-q4,lcore14@0000:ec:01.0-q5,lcore15@0000:6a:01.0-q0,lcore15@0000:6f:01.0-q1,lcore15@0000:74:01.0-q2,lcore15@0000:79:01.0-q3,lcore15@0000:e7:01.0-q4,lcore15@0000:ec:01.0-q5,lcore15@0000:f1:01.0-q6,lcore15@0000:f6:01.0-q7]
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 10-18 -n 4 -a 0000:e7:01.0,max_queues=2 -a 0000:ec:01.0,max_queues=2 \
+    --vdev 'eth_vhost0,iface=vhost-net0,queues=8,client=1,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;rxq2;rxq3;rxq4;rxq5;rxq6;rxq7]' \
+    --iova=va -- -i --nb-cores=1 --rxq=8 --txq=8 --txd=1024 --rxd=1024 \
+    --lcore-dma=[lcore11@0000:e7:01.0-q0,lcore11@0000:ec:01.0-q1]

 22. Rerun steps 11-14.

 23. Quit and relaunch vhost w/ iova=pa::

-   # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 10-18 -n 4 -a 0000:6a:01.0 \
-   --vdev 'eth_vhost0,iface=vhost-net0,queues=8,client=1,dmas=[txq0;txq1;txq2;txq3;txq4;txq6;txq7]' \
-   --iova=pa -- -i --nb-cores=5 --rxq=8 --txq=8 --txd=1024 --rxd=1024 \
-   --lcore-dma=[lcore11@0000:6a:01.0-q0,lcore11@0000:6a:01.0-q7,lcore12@0000:6a:01.0-q1,lcore12@0000:6a:01.0-q2,lcore12@0000:6a:01.0-q3,lcore13@0000:6a:01.0-q2,lcore13@0000:6a:01.0-q3,lcore13@0000:6a:01.0-q4,lcore14@0000:6a:01.0-q2,lcore14@0000:6a:01.0-q3,lcore14@0000:6a:01.0-q4,lcore14@0000:6a:01.0-q5,lcore15@0000:6a:01.0-q0,lcore15@0000:6a:01.0-q1,lcore15@0000:6a:01.0-q2,lcore15@0000:6a:01.0-q3,lcore15@0000:6a:01.0-q4,lcore15@0000:6a:01.0-q5,lcore15@0000:6a:01.0-q6,lcore15@0000:6a:01.0-q7]
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 10-18 -n 4 -a 0000:e7:01.0,max_queues=4 \
+    --vdev 'eth_vhost0,iface=vhost-net0,queues=8,client=1,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;rxq2;rxq3;rxq4;rxq5;rxq6;rxq7]' \
+    --iova=pa -- -i --nb-cores=1 --rxq=8 --txq=8 --txd=1024 --rxd=1024 \
+    --lcore-dma=[lcore11@0000:e7:01.0-q1,lcore11@0000:e7:01.0-q3]

-24. Rerun steps 3-6.
+24. Rerun steps 11-14.

-Test Case 5: loopback split ring server mode large chain packets stress test with dsa kernel driver
+Test Case 5: Loopback split ring server mode large chain packets stress test with dsa kernel driver
 ---------------------------------------------------------------------------------------------------
 This is a stress test case about forwarding large chain packets in loopback vhost-user/virtio-user split ring with server mode
-when vhost uses the asynchronous enqueue operations with dsa kernel driver.
+when vhost uses the asynchronous enqueue and dequeue operations with dsa kernel driver.

 1. Bind 1 dsa device to idxd like common step 2::

     ls /dev/dsa #check wq configure, reset if exist
     ./usertools/dpdk-devbind.py -u 6a:01.0
     ./usertools/dpdk-devbind.py -b idxd 6a:01.0
-   # ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 4 0
+    # ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 4 0
     ls /dev/dsa #check wq configure success

 2. Launch vhost::

-   # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-3 --file-prefix=vhost \
-   --vdev 'net_vhost0,iface=vhost-net0,queues=1,dmas=[txq0],client=1' \
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-3 --file-prefix=vhost --no-pci \
+    --vdev 'net_vhost0,iface=vhost-net0,queues=1,dmas=[txq0;rxq0],client=1' \
     --iova=va -- -i --nb-cores=1 --txd=1024 --rxd=1024 --mbuf-size=45535 --lcore-dma=[lcore3@wq0.2]

 3. Launch virtio and start testpmd::

@@ -446,23 +452,23 @@ when vhost uses the asynchronous enqueue operations with dsa kernel driver.

     testpmd>start tx_first 32
     testpmd>show port stats all

-Test Case 6: loopback packed ring server mode large chain packets stress test with dsa kernel driver
+Test Case 6: Loopback packed ring server mode large chain packets stress test with dsa kernel driver
 -----------------------------------------------------------------------------------------------------
 This is a stress test case about forwarding large chain packets in loopback vhost-user/virtio-user packed ring with server mode
-when vhost uses the asynchronous enqueue operations with dsa kernel driver.
+when vhost uses the asynchronous operations with dsa kernel driver.

 1. Bind 1 dsa port to idxd like common step 2::

     ls /dev/dsa #check wq configure, reset if exist
     ./usertools/dpdk-devbind.py -u 6a:01.0
     ./usertools/dpdk-devbind.py -b idxd 6a:01.0
-   # ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 1 0
+    # ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 1 0
     ls /dev/dsa #check wq configure success

 2. Launch vhost::

-   # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 2-3 -n 4 \
-   --vdev 'eth_vhost0,iface=vhost-net0,queues=1,dmas=[txq0],client=1' \
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 2-3 -n 4 --no-pci \
+    --vdev 'eth_vhost0,iface=vhost-net0,queues=1,dmas=[txq0;rxq0],client=1' \
     --iova=va -- -i --nb-cores=1 --mbuf-size=45535 --lcore-dma=[lcore3@wq0.0]

 3. Launch virtio and start testpmd::

@@ -478,25 +484,24 @@ when vhost uses the asynchronous operations with dsa kernel driver.

     testpmd>start tx_first 32
     testpmd>show port stats all

-Test Case 7: loopback split ring all path server mode and multi-queues payload check with dsa kernel driver
+Test Case 7: Loopback split ring all path server mode and multi-queues payload check with dsa kernel driver
 -------------------------------------------------------------------------------------------------------------
 This case tests that the payload is valid after forwarding large chain packets in loopback vhost-user/virtio-user split ring
-all path multi-queues with server mode when vhost uses the asynchronous enqueue operations with dsa kernel driver.
+all path multi-queues with server mode when vhost uses the asynchronous enqueue and dequeue operations with dsa kernel driver.

-1. bind 3 dsa port to idxd like common step 2::
+1. Bind 2 dsa ports to idxd like common step 2::

     ls /dev/dsa #check wq configure, reset if exist
-    ./usertools/dpdk-devbind.py -u 6a:01.0 6f:01.0 74:01.0
-    ./usertools/dpdk-devbind.py -b idxd 6a:01.0 6f:01.0 74:01.0
-   # ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 0
-   # ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 2
-   # ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 4
+    ./usertools/dpdk-devbind.py -u 6a:01.0 6f:01.0
+    ./usertools/dpdk-devbind.py -b idxd 6a:01.0 6f:01.0
+    # ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 0
+    # ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 1
     ls /dev/dsa #check wq configure success

 2. Launch vhost::

-   # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 10-18 -n 4 \
-   --vdev 'eth_vhost0,iface=vhost-net0,queues=8,client=1,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6]' \
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 10-18 -n 4 --no-pci \
+    --vdev 'eth_vhost0,iface=vhost-net0,queues=8,client=1,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;rxq2;rxq3;rxq4;rxq5;rxq6;rxq7]' \
     --iova=va -- -i --nb-cores=5 --rxq=8 --txq=8 --txd=1024 --rxd=1024 \
     --lcore-dma=[lcore11@wq0.0,lcore12@wq0.0,lcore13@wq0.1,lcore13@wq0.2,lcore14@wq0.1,lcore14@wq0.2,lcore15@wq0.1,lcore15@wq0.2]

@@ -520,6 +525,7 @@ all path multi-queues with server mode when vhost uses the asynchronous enqueue

     testpmd> set txpkts 64,64,64,2000,2000,2000
     testpmd> set burst 1
     testpmd> start tx_first 1
+    testpmd> show port stats all
     testpmd> stop

 6. Quit pdump, check that all packets in the pcap file are 6192 Byte long and that the payload of all received packets is the same.

@@ -552,6 +558,7 @@ all path multi-queues with server mode when vhost uses the asynchronous enqueue

     testpmd> set txpkts 64,128,256,512
     testpmd> set burst 1
     testpmd> start tx_first 1
+    testpmd> show port stats all
     testpmd> stop

 13. Quit pdump, check that all packets in the pcap file are 960 Byte long and that the payload of all received packets is the same.
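The pdump attach steps (steps 4 and 12) are unchanged by this patch and therefore not shown in the hunks above. For reference, a capture invocation of the shape these steps rely on; the file prefix, device id and pcap path are assumptions and must match the virtio-user side::

    # attach dpdk-pdump as a secondary process to the virtio-user testpmd
    # (same --file-prefix) and dump rx queue 0 to a pcap file
    ./x86_64-native-linuxapp-gcc/app/dpdk-pdump -v --file-prefix=virtio-user0 -- \
        --pdump 'device_id=net_virtio_user0,queue=0,rx-dev=/tmp/pdump-rx.pcap,mbuf-size=8000'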
@@ -580,45 +587,39 @@ all path multi-queues with server mode when vhost uses the asynchronous enqueue

 19. Quit and relaunch vhost with diff channel::

-   # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 10-18 -n 4 -a 0000:6a:01.0 -a 0000:6f:01.0 -a 0000:74:01.0 \
-   --vdev 'eth_vhost0,iface=vhost-net0,queues=8,client=1,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6]' \
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 10-18 -n 4 --no-pci \
+    --vdev 'eth_vhost0,iface=vhost-net0,queues=8,client=1,dmas=[rxq0;rxq1;rxq2;rxq3;rxq4;rxq5;rxq6;rxq7]' \
     --iova=va -- -i --nb-cores=5 --rxq=8 --txq=8 --txd=1024 --rxd=1024 \
-   --lcore-dma=[lcore11@wq0.0,lcore12@wq0.0,lcore13@wq2.1,lcore13@wq4.2,lcore14@wq2.1,lcore14@wq4.2,lcore15@wq2.1,lcore15@wq4.2]
+    --lcore-dma=[lcore11@wq0.0,lcore12@wq0.0,lcore13@wq0.1,lcore13@wq1.0,lcore14@wq0.1,lcore14@wq1.0,lcore15@wq0.1,lcore15@wq1.0]

 20. Rerun steps 11-14.

-Test Case 8: loopback packed ring all path server mode and multi-queues payload check with dsa kernel driver
+Test Case 8: Loopback packed ring all path server mode and multi-queues payload check with dsa kernel driver
 -------------------------------------------------------------------------------------------------------------
 This case tests that the payload is valid after forwarding large chain packets in loopback vhost-user/virtio-user packed ring
-all path multi-queues with server mode when vhost uses the asynchronous enqueue operations with dsa kernel driver.
+all path multi-queues with server mode when vhost uses the asynchronous enqueue and dequeue operations with dsa kernel driver.

 1. Bind 2 dsa ports to idxd like common step 2::

     ls /dev/dsa #check wq configure, reset if exist
-    ./usertools/dpdk-devbind.py -u 6a:01.0 6f:01.0 74:01.0 79:01.0 e7:01.0 ec:01.0 f1:01.0 f6:01.0
-    ./usertools/dpdk-devbind.py -b idxd 6a:01.0 6f:01.0 74:01.0 79:01.0 e7:01.0 ec:01.0 f1:01.0 f6:01.0
-   # ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 0
-   # ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 2
-   # ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 4
-   # ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 6
-   # ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 8
-   # ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 10
-   # ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 12
-   # ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 14
+    ./usertools/dpdk-devbind.py -u 6a:01.0 6f:01.0
+    ./usertools/dpdk-devbind.py -b idxd 6a:01.0 6f:01.0
+    # ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 4 0
+    # ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 4 1
     ls /dev/dsa #check wq configure success

 2. Launch vhost::

-   # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 10-18 -n 4 \
-   --vdev 'eth_vhost0,iface=vhost-net0,queues=8,client=1,dmas=[txq0;txq1;txq2;txq3;txq4;txq6;txq7]' \
-   --iova=va -- -i --nb-cores=5 --rxq=8 --txq=8 --txd=1024 --rxd=1024 \
-   --lcore-dma=[lcore11@wq0.0,lcore11@wq0.7,lcore12@wq0.1,lcore12@wq0.2,lcore12@wq0.3,lcore13@wq0.2,lcore13@wq0.3,lcore13@wq0.4,lcore14@wq0.2,lcore14@wq0.3,lcore14@wq0.4,lcore14@wq0.5,lcore15@wq0.0,lcore15@wq0.1,lcore15@wq0.2,lcore15@wq0.3,lcore15@wq0.4,lcore15@wq0.5,lcore15@wq0.6,lcore15@wq0.7]
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 10-14 -n 4 --no-pci \
+    --vdev 'eth_vhost0,iface=vhost-net0,queues=8,client=1,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;rxq2;rxq3;rxq4;rxq5;rxq6;rxq7]' \
+    --iova=va -- -i --nb-cores=4 --rxq=8 --txq=8 --txd=1024 --rxd=1024 \
+    --lcore-dma=[lcore11@wq0.0,lcore12@wq0.1,lcore13@wq0.2,lcore14@wq0.3]

 3. Launch virtio-user with packed ring mergeable inorder path::

-   # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 5-6 -n 4 --file-prefix=virtio-user0 --no-pci \
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 2-6 -n 4 --file-prefix=virtio-user0 --no-pci \
     --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=./vhost-net0,queues=8,mrg_rxbuf=1,in_order=1,packed_vq=1,server=1 \
-   -- -i --nb-cores=1 --rxq=8 --txq=8 --txd=1024 --rxd=1024
+    -- -i --nb-cores=4 --rxq=8 --txq=8 --txd=1024 --rxd=1024
     testpmd>set fwd csum
     testpmd>start

@@ -634,6 +635,7 @@ all path multi-queues with server mode when vhost uses the asynchronous enqueue

     testpmd> set txpkts 64,64,64,2000,2000,2000
     testpmd> set burst 1
     testpmd> start tx_first 1
+    testpmd> show port stats all
     testpmd> stop

 6. Quit pdump, check that all packets in the pcap file are 6192 Byte long and that the payload of all received packets is the same.

@@ -642,9 +644,9 @@ all path multi-queues with server mode when vhost uses the asynchronous enqueue

 8. Quit and relaunch virtio with packed ring mergeable path as below::

-   # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 5-6 -n 4 --file-prefix=virtio-user0 --no-pci \
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 2-6 -n 4 --file-prefix=virtio-user0 --no-pci \
     --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=./vhost-net0,queues=8,mrg_rxbuf=1,in_order=0,packed_vq=1,server=1 \
-   -- -i --nb-cores=1 --rxq=8 --txq=8 --txd=1024 --rxd=1024
+    -- -i --nb-cores=4 --rxq=8 --txq=8 --txd=1024 --rxd=1024
     testpmd>set fwd csum
     testpmd>start

@@ -652,9 +654,9 @@ all path multi-queues with server mode when vhost uses the asynchronous enqueue

 10. Quit and relaunch virtio with packed ring non-mergeable path as below::

-   # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 5-6 -n 4 --file-prefix=virtio-user0 --no-pci \
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 2-6 -n 4 --file-prefix=virtio-user0 --no-pci \
     --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=./vhost-net0,queues=8,mrg_rxbuf=0,in_order=0,packed_vq=1,server=1 \
-   -- -i --nb-cores=1 --rxq=8 --txq=8 --txd=1024 --rxd=1024
+    -- -i --nb-cores=4 --rxq=8 --txq=8 --txd=1024 --rxd=1024
     testpmd>set fwd csum
     testpmd>start

@@ -666,6 +668,7 @@ all path multi-queues with server mode when vhost uses the asynchronous enqueue

     testpmd> set txpkts 64,128,256,512
     testpmd> set burst 1
     testpmd> start tx_first 1
+    testpmd> show port stats all
     testpmd> stop

 13. Quit pdump, check that all packets in the pcap file are 960 Byte long and that the payload of all received packets is the same.

@@ -674,9 +677,9 @@ all path multi-queues with server mode when vhost uses the asynchronous enqueue

 15. Quit and relaunch virtio with packed ring inorder non-mergeable path as below::

-   # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 5-6 -n 4 --file-prefix=virtio-user0 --no-pci \
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 2-6 -n 4 --file-prefix=virtio-user0 --no-pci \
     --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=./vhost-net0,queues=8,mrg_rxbuf=0,in_order=1,packed_vq=1,server=1 \
-   -- -i --nb-cores=1 --rxq=8 --txq=8 --txd=1024 --rxd=1024
+    -- -i --nb-cores=4 --rxq=8 --txq=8 --txd=1024 --rxd=1024
     testpmd>set fwd csum
     testpmd>start

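Steps 6 and 13 check the packet length and payload in the captured pcap file. One possible shell sketch of the length check, assuming the capture was written to /tmp/pdump-rx.pcap::

    # count captured frames whose on-wire length is not the expected 6192 bytes
    tcpdump -nn -e -r /tmp/pdump-rx.pcap 2>/dev/null | grep -cv 'length 6192:'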
@@ -684,9 +687,9 @@ all path multi-queues with server mode when vhost uses the asynchronous enqueue

 17. Quit and relaunch virtio with packed ring vectorized path as below::

-   # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 5-6 -n 4 --file-prefix=virtio-user0 --no-pci --force-max-simd-bitwidth=512 \
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 2-6 -n 4 --file-prefix=virtio-user0 --no-pci --force-max-simd-bitwidth=512 \
     --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=./vhost-net0,queues=8,mrg_rxbuf=0,in_order=1,packed_vq=1,vectorized=1,server=1 \
-   -- -i --nb-cores=1 --rxq=8 --txq=8 --txd=1024 --rxd=1024
+    -- -i --nb-cores=4 --rxq=8 --txq=8 --txd=1024 --rxd=1024
     testpmd>set fwd csum
     testpmd>start

@@ -694,9 +697,9 @@ all path multi-queues with server mode when vhost uses the asynchronous enqueue

 19. Quit and relaunch virtio with packed ring vectorized path and ring size is not power of 2 as below::

-   # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 5-6 -n 4 --file-prefix=virtio-user0 --no-pci --force-max-simd-bitwidth=512 \
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 2-6 -n 4 --file-prefix=virtio-user0 --no-pci --force-max-simd-bitwidth=512 \
     --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=./vhost-net0,queues=8,mrg_rxbuf=0,in_order=1,packed_vq=1,vectorized=1,queue_size=1025,server=1 \
-   -- -i --nb-cores=1 --rxq=8 --txq=8 --txd=1025 --rxd=1025
+    -- -i --nb-cores=2 --rxq=8 --txq=8 --txd=1025 --rxd=1025
     testpmd>set fwd csum
     testpmd>start

@@ -704,36 +707,34 @@ all path multi-queues with server mode when vhost uses the asynchronous enqueue

 21. Quit and relaunch vhost with diff channel::

-   # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 10-18 -n 4 \
-   --vdev 'eth_vhost0,iface=vhost-net0,queues=8,client=1,dmas=[txq0;txq1;txq2;txq3;txq4;txq6;txq7]' \
-   --iova=va -- -i --nb-cores=5 --rxq=8 --txq=8 --txd=1024 --rxd=1024 \
-   --lcore-dma=[lcore11@wq0.0,lcore11@wq14.7,lcore12@wq2.1,lcore12@wq4.2,lcore12@wq6.3,lcore13@wq4.2,lcore13@wq6.3,lcore13@wq8.4,lcore14@wq4.2,lcore14@wq6.3,lcore14@wq8.4,lcore14@wq10.5,lcore15@wq0.0,lcore15@wq2.1,lcore15@wq4.2,lcore15@wq6.3,lcore15@wq8.4,lcore15@wq10.5,lcore15@wq12.6,lcore15@wq14.7]
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 10-14 -n 4 --no-pci \
+    --vdev 'eth_vhost0,iface=vhost-net0,queues=8,client=1,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7;rxq0;rxq1;rxq2;rxq3;rxq4;rxq5;rxq6;rxq7]' \
+    --iova=va -- -i --nb-cores=4 --rxq=8 --txq=8 --txd=1024 --rxd=1024 \
+    --lcore-dma=[lcore11@wq0.0,lcore11@wq1.0,lcore12@wq0.1,lcore12@wq1.1,lcore13@wq0.2,lcore13@wq1.2,lcore14@wq0.3,lcore14@wq1.3]

 22. Rerun steps 3-6.

-Test Case 9: loopback split and packed ring server mode multi-queues and mergeable path payload check with dsa dpdk and kernel driver
+Test Case 9: Loopback split and packed ring server mode multi-queues and mergeable path payload check with dsa dpdk and kernel driver
 --------------------------------------------------------------------------------------------------------------------------------------
 This case tests that the payload is valid after forwarding large chain packets in loopback vhost-user/virtio-user split and packed ring
-multi-queues with server mode when vhost uses the asynchronous enqueue operations with dsa kernel driver.
+multi-queues with server mode when vhost uses the asynchronous enqueue and dequeue operations with dsa dpdk and kernel driver.

-1. bind 4 dsa device to idxd and 2 dsa device to vfio-pci like common step 1-2::
+1. Bind 2 dsa devices to idxd and 2 dsa devices to vfio-pci like common step 1-2::

     ls /dev/dsa #check wq configure, reset if exist
-    ./usertools/dpdk-devbind.py -u 6a:01.0 6f:01.0 74:01.0 79:01.0
-    ./usertools/dpdk-devbind.py -b idxd 6a:01.0 6f:01.0 74:01.0 79:01.0
-    ./usertools/dpdk-devbind.py -b vfio-pci e7:01.0 ec:01.0
-   # ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 1 0
-   # ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 1 2
-   # ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 1 4
-   # ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 1 6
+    ./usertools/dpdk-devbind.py -u 6a:01.0 6f:01.0 f1:01.0 f6:01.0
+    ./usertools/dpdk-devbind.py -b idxd 6a:01.0 6f:01.0
+    ./usertools/dpdk-devbind.py -b vfio-pci f1:01.0 f6:01.0
+    # ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 0
+    # ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 1
     ls /dev/dsa #check wq configure success

 2. Launch vhost::

-   # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 2-3 -n 4 --file-prefix=vhost -a 0000:e7:01.0,max_queues=4 -a 0000:ec:01.0,max_queues=4 \
-   --vdev 'eth_vhost0,iface=vhost-net0,queues=8,client=1,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6]' \
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 2-3 -n 4 --file-prefix=vhost -a 0000:f1:01.0,max_queues=4 -a 0000:f6:01.0,max_queues=4 \
+    --vdev 'eth_vhost0,iface=vhost-net0,queues=8,client=1,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7;rxq0;rxq1;rxq2;rxq3;rxq4;rxq5;rxq6;rxq7]' \
     --iova=va -- -i --nb-cores=1 --rxq=8 --txq=8 --txd=1024 --rxd=1024 \
-   --lcore-dma=[lcore3@wq0.0,lcore3@wq2.0,lcore3@wq4.0,lcore3@wq6.0,lcore3@0000:e7:01.0-q0,lcore3@0000:e7:01.0-q2,lcore3@0000:ec:01.0-q3]
+    --lcore-dma=[lcore3@wq0.0,lcore3@wq0.1,lcore3@wq1.0,lcore3@wq1.1,lcore3@0000:f1:01.0-q0,lcore3@0000:f1:01.0-q2,lcore3@0000:f6:01.0-q3]

 3. Launch virtio-user with split ring mergeable inorder path::

@@ -751,10 +752,12 @@ multi-queues with server mode when vhost uses the asynchronous enqueue operation

 5. Send large pkts from vhost, check loopback performance can get expected and each queue can receive packets::

-   testpmd>set fwd csum
-   testpmd>set txpkts 64,64,64,2000,2000,2000
-   testpmd>set burst 1
-   testpmd>start tx_first 1
+    testpmd>set fwd csum
+    testpmd>set txpkts 64,64,64,2000,2000,2000
+    testpmd>set burst 1
+    testpmd>start tx_first 1
+    testpmd>show port stats all
+    testpmd>stop

 6. Quit pdump and check in the pcap file that all packets are 6192 Byte long and that the payload of all packets is the same.

@@ -788,4 +791,4 @@ multi-queues with server mode when vhost uses the asynchronous enqueue operation

     testpmd>set fwd csum
     testpmd>start

-13. Stop vhost and rerun step 4-7.
\ No newline at end of file
+13. Stop vhost and rerun step 4-7.
-- 
2.25.1
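Several steps above begin with "ls /dev/dsa #check wq configure, reset if exist". A sketch of that housekeeping between test cases, built on the --reset option documented in the common steps; the device indices are assumed values::

    #!/bin/sh
    # reset any WQs left over on DSA devices 0 and 1 before re-binding
    if ls /dev/dsa >/dev/null 2>&1; then
        for idx in 0 1; do
            ./drivers/dma/idxd/dpdk_idxd_cfg.py --reset ${idx}
        done
    fi
    ls /dev/dsa 2>/dev/null || echo "no WQ configured"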