From mboxrd@z Thu Jan 1 00:00:00 1970
From: Wei Ling
To: dts@dpdk.org
Cc: Wei Ling
Subject: [dts][PATCH V2 1/2] test_plans/loopback_virtio_user_server_mode_dsa_test_plan: modify and add new testcases
Date: Wed, 30 Nov 2022 14:25:09 +0800
Message-Id: <20221130062509.1164474-1-weix.ling@intel.com>
X-Mailer: 
git-send-email 2.25.1
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
List-Id: test suite reviews and discussions

Modify and add new testcases to cover the virtio loopback topology with the DSA driver.

Signed-off-by: Wei Ling
---
 ..._virtio_user_server_mode_dsa_test_plan.rst | 1212 ++++++++++++-----
 1 file changed, 875 insertions(+), 337 deletions(-)

diff --git a/test_plans/loopback_virtio_user_server_mode_dsa_test_plan.rst b/test_plans/loopback_virtio_user_server_mode_dsa_test_plan.rst
index 55c71c99..ecd2c34f 100644
--- a/test_plans/loopback_virtio_user_server_mode_dsa_test_plan.rst
+++ b/test_plans/loopback_virtio_user_server_mode_dsa_test_plan.rst
@@ -1,38 +1,47 @@
 .. SPDX-License-Identifier: BSD-3-Clause
    Copyright(c) 2022 Intel Corporation
 
-=======================================================================
+=====================================================================
 Loopback vhost-user/virtio-user server mode with DSA driver test plan
-=======================================================================
+=====================================================================
 
 Description
 ===========
 
-Vhost asynchronous data path leverages DMA devices to offload memory copies from the CPU and it is implemented in an asynchronous way.
-In addition, vhost supports M:N mapping between vrings and DMA virtual channels. Specifically, one vring can use multiple different DMA
-channels and one DMA channel can be shared by multiple vrings at the same time. Vhost enqueue and dequeue operation with CBDMA channels is supported
-in both split and packed ring.
+Vhost asynchronous data path leverages DMA devices to offload memory copies from the CPU and it is implemented in an
+asynchronous way. DPDK Vhost with DSA acceleration supports M:N mapping between virtqueues and DSA WQs. Specifically,
+one DSA WQ can be used by multiple virtqueues and one virtqueue can offload copies to multiple DSA WQs at the same time.
+Vhost async enqueue and async dequeue operations are supported in both split and packed ring.
 
 This document provides the test plan for testing the following features when Vhost-user using asynchronous data path with
-DSA driver (kernel IDXD driver and DPDK vfio-pci driver) in loopback virtio-user topology.
-1. Virtio-user server mode is a feature to enable virtio-user as the server, vhost as the client, thus after vhost-user is killed then relaunched,
-virtio-user can reconnect back to vhost-user again; at another hand, virtio-user also can reconnect back to vhost-user after virtio-user is killed.
-This feature test need cover different rx/tx paths with virtio 1.0 and virtio 1.1, includes split ring mergeable, non-mergeable, inorder mergeable,
-inorder non-mergeable, vector_rx path and packed ring mergeable, non-mergeable, inorder non-mergeable, inorder mergeable, vectorized path.
+DSA driver (kernel IDXD driver and DPDK vfio-pci driver) in loopback vhost-user/virtio-user topology.
+1. Virtio-user server mode is a feature to enable virtio-user as the server, vhost as the client, thus after vhost-user
+is killed then relaunched, virtio-user can reconnect back to vhost-user again; on the other hand, virtio-user also can
+reconnect back to vhost-user after virtio-user is killed. This feature test covers different rx/tx paths with virtio 1.0
+and virtio 1.1, including split ring mergeable, non-mergeable, inorder mergeable, inorder non-mergeable, vector_rx path
+and packed ring mergeable, non-mergeable, inorder non-mergeable, inorder mergeable, vectorized path.
 2. Check payload valid after packets forwarding many times.
 3. Stress test with large chain packets.
-IOMMU impact: -If iommu off, idxd can work with iova=pa -If iommu on, kernel dsa driver only can work with iova=va by program IOMMU, can't use iova=pa(fwd not work due to pkts payload wrong). +.. note:: + + 1.When DMA devices are bound to vfio driver, VA mode is the default and recommended. For PA mode, page by page mapping may + exceed IOMMU's max capability, better to use 1G guest hugepage. + 2.DPDK local patch that about vhost pmd is needed when testing Vhost asynchronous data path with testpmd. In this patch, + we enable asynchronous data path for vhostpmd. Asynchronous data path is enabled per tx/rx queue, and users need to specify + the DMA device used by the tx/rx queue. Each tx/rx queue only supports to use one DMA device (This is limited by the + implementation of vhostpmd), but one DMA device can be shared among multiple tx/rx queues of different vhost PMD ports. -Note: -1. When DMA devices are bound to vfio driver, VA mode is the default and recommended. For PA mode, page by page mapping may -exceed IOMMU's max capability, better to use 1G guest hugepage. -2. DPDK local patch that about vhost pmd is needed when testing Vhost asynchronous data path with testpmd, and the suite has not yet been automated. +Two PMD parameters are added: +- dmas: specify the used DMA device for a tx/rx queue.(Default: no queues enable asynchronous data path) +- dma-ring-size: DMA ring size.(Default: 4096). + +Here is an example: +--vdev 'eth_vhost0,iface=./s0,dmas=[txq0@0000:00.01.0;rxq0@0000:00.01.1],dma-ring-size=2048' Prerequisites ============= + Topology -------- Test flow: Vhost-user <-> Virtio-user @@ -47,14 +56,10 @@ General set up CC=gcc meson --werror -Denable_kmods=True -Dlibdir=lib -Dexamples=all --default-library=static x86_64-native-linuxapp-gcc ninja -C x86_64-native-linuxapp-gcc -j 110 -2. Get the PCI device ID and DSA device ID of DUT, for example, 0000:4f:00.1 is PCI device ID, 0000:6a:01.0 - 0000:f6:01.0 are DSA device IDs:: +2. 
Get the PCI device of DUT, for example, 0000:6a:01.0 - 0000:f6:01.0 are DSA devices:: # ./usertools/dpdk-devbind.py -s - Network devices using kernel driver - =================================== - 0000:4f:00.1 'Ethernet Controller E810-C for QSFP 1592' drv=ice unused=vfio-pci - DMA devices using kernel driver =============================== 0000:6a:01.0 'Device 0b25' drv=idxd unused=vfio-pci @@ -73,136 +78,118 @@ Common steps ------------ 1. Bind DSA devices to DPDK vfio-pci driver:: - # ./usertools/dpdk-devbind.py -b vfio-pci + # ./usertools/dpdk-devbind.py -b vfio-pci - For example, bind 2 DMA devices to vfio-pci driver: + For example, bind 2 DSA devices to vfio-pci driver: # ./usertools/dpdk-devbind.py -b vfio-pci 0000:e7:01.0 0000:ec:01.0 .. note:: - One DPDK DSA device can create 8 WQ at most. Below is an example, where DPDK DSA device will create one and - eight WQ for DSA deivce 0000:e7:01.0 and 0000:ec:01.0. The value of “max_queues” is 1~8: + One DPDK DSA device can create 8 WQ at most. Below is an example, where DPDK DSA device will create one WQ for deivce + 0000:e7:01.0 and eight WQs for 0000:ec:01.0. The value range of “max_queues” is 1~8 and the default value is 8: # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 4 -n 4 -a 0000:e7:01.0,max_queues=1 -- -i # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 4 -n 4 -a 0000:ec:01.0,max_queues=8 -- -i 2. Bind DSA devices to kernel idxd driver, and configure Work Queue (WQ):: - # ./usertools/dpdk-devbind.py -b idxd - # ./drivers/dma/idxd/dpdk_idxd_cfg.py -q + # ./usertools/dpdk-devbind.py -b idxd + # ./drivers/dma/idxd/dpdk_idxd_cfg.py -q .. 
note:: + dsa_idx: Index of DSA devices, where 0<=dsa_idx<=7, corresponding to 0000:6a:01.0 - 0000:f6:01.0 + wq_num: Number of work queues configured per DSA instance, where 1<=wq_num<=8 + Better to reset WQ when need operate DSA devices that bound to idxd drvier: - # ./drivers/dma/idxd/dpdk_idxd_cfg.py --reset + # ./drivers/dma/idxd/dpdk_idxd_cfg.py --reset You can check it by 'ls /dev/dsa' - numDevices: number of devices, where 0<=numDevices<=7, corresponding to 0000:6a:01.0 - 0000:f6:01.0 - numWq: Number of workqueues per DSA endpoint, where 1<=numWq<=8 - - For example, bind 2 DMA devices to idxd driver and configure WQ: + For example, bind 2 DSA devices to idxd driver and configure WQ: # ./usertools/dpdk-devbind.py -b idxd 6a:01.0 6f:01.0 # ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 1 0 # ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 4 1 Check WQ by 'ls /dev/dsa' and can find "wq0.0 wq1.0 wq1.1 wq1.2 wq1.3" Test Case 1: Loopback split ring server mode large chain packets stress test with dsa dpdk driver ---------------------------------------------------------------------------------------------------- +------------------------------------------------------------------------------------------------- This is a stress test case about forwading large chain packets in loopback vhost-user/virtio-user split ring with server mode -when vhost uses the asynchronous enqueue and dequeue operations with dsa dpdk driver. Both iova as VA and PA mode test. +when vhost uses the asynchronous enqueue and dequeue operations with dsa dpdk driver. -1. Bind 1 dsa device to vfio-pci like common step 1:: +1. Bind 1 DSA device to vfio-pci like common step 1:: # ./usertools/dpdk-devbind.py -b vfio-pci e7:01.0 -2. Launch vhost:: +2. 
Launch vhost-user:: # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-3 --file-prefix=vhost -a 0000:e7:01.0,max_queues=1 \ - --vdev 'net_vhost0,iface=vhost-net0,queues=1,dmas=[txq0;rxq0],client=1' \ - --iova=va -- -i --nb-cores=1 --txd=1024 --rxd=1024 --mbuf-size=45535 --lcore-dma=[lcore3@0000:e7:01.0-q0] + --vdev 'net_vhost0,iface=vhost-net0,queues=1,dmas=[txq0@0000:e7:01.0-q0;rxq0@0000:e7:01.0-q0],client=1' \ + --iova=va -- -i --nb-cores=1 --mbuf-size=65535 -3. launch virtio and start testpmd:: +3. Launch virtio-user and start testpmd:: - # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 5-6 -n 4 --file-prefix=testpmd0 --no-pci \ + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 5-6 -n 4 --file-prefix=testpmd0 --no-pci \ --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=./vhost-net0,queues=1,mrg_rxbuf=1,in_order=0,vectorized=1,queue_size=2048,server=1 \ -- -i --rxq=1 --txq=1 --txd=2048 --rxd=2048 --nb-cores=1 - testpmd>start - -4. Send large pkts from vhost and check the stats:: - - testpmd>set txpkts 45535,45535,45535,45535,45535 - testpmd>start tx_first 32 - testpmd>show port stats all + testpmd> start -5. Stop and quit vhost testpmd and relaunch vhost with pa mode by below command:: +4. Send large packets from vhost and check packets can loop normally:: - # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-3 --file-prefix=vhost -a 0000:e7:01.0,max_queues=4 \ - --vdev 'net_vhost0,iface=vhost-net0,queues=1,dmas=[txq0;rxq0],client=1' \ - --iova=pa -- -i --nb-cores=1 --txd=1024 --rxd=1024 --mbuf-size=45535 --lcore-dma=[lcore3@0000:e7:01.0-q0] - -6. Rerun step 4. 
+ testpmd> set txpkts 65535,65535 + testpmd> start tx_first 32 + testpmd> show port stats all Test Case 2: Loopback packed ring server mode large chain packets stress test with dsa dpdk driver ----------------------------------------------------------------------------------------------------- +-------------------------------------------------------------------------------------------------- This is a stress test case about forwading large chain packets in loopback vhost-user/virtio-user packed ring with server mode -when vhost uses the asynchronous operations with dsa dpdk driver. Both iova as VA and PA mode test. +when vhost uses the asynchronous enqueue and dequeue operations with dsa dpdk driver. -1. Bind 1 dsa port to vfio-pci as common step 1:: +1. Bind 1 DSA device to vfio-pci as common step 1:: - # ./usertools/dpdk-devbind.py -b vfio-pci f1:01.0 + # ./usertools/dpdk-devbind.py -b vfio-pci e7:01.0 -2. Launch vhost:: +2. Launch vhost-user:: # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 2-3 -n 4 -a 0000:f1:01.0,max_queues=1 \ - --vdev 'eth_vhost0,iface=vhost-net0,queues=1,dmas=[txq0;rxq0],client=1' \ - --iova=va -- -i --nb-cores=1 --mbuf-size=45535 --lcore-dma=[lcore3@0000:f1:01.0-q0] + --vdev 'eth_vhost0,iface=vhost-net0,queues=1,dmas=[txq0@0000:e7:01.0-q0;rxq0@0000:e7:01.0-q0],client=1' \ + --iova=va -- -i --nb-cores=1 --mbuf-size=65535 -3. launch virtio and start testpmd:: +3. Launch virtio-user and start testpmd:: - # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 5-6 -n 4 --file-prefix=testpmd0 --no-pci \ + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 5-6 -n 4 --file-prefix=testpmd0 --no-pci \ --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=./vhost-net0,queues=1,mrg_rxbuf=1,in_order=0,vectorized=1,packed_vq=1,queue_size=2048,server=1 \ -- -i --rxq=1 --txq=1 --txd=2048 --rxd=2048 --nb-cores=1 - testpmd>start - -4. 
Send large pkts from vhost and check the stats:: + testpmd> start - testpmd>set txpkts 45535,45535,45535,45535,45535 - testpmd>start tx_first 32 - testpmd>show port stats all +4. Send large packets from vhost and check packets can loop normally:: -5. Stop and quit vhost testpmd and relaunch vhost with pa mode by below command:: - - # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 2-3 -n 4 -a 0000:f1:01.0,max_queues=1 \ - --vdev 'eth_vhost0,iface=vhost-net0,queues=1,dmas=[txq0;rxq0],client=1' \ - --iova=va -- -i --nb-cores=1 --mbuf-size=45535 --lcore-dma=[lcore3@0000:f1:01.0-q0] - -6. Rerun step 4. + testpmd> set txpkts 65535,65535 + testpmd> start tx_first 32 + testpmd> show port stats all -Test Case 3: Loopback split ring all path server mode and multi-queues payload check with dsa dpdk driver ------------------------------------------------------------------------------------------------------------ +Test Case 3: Loopback split ring inorder mergeable path multi-queues payload check with server mode and dsa dpdk driver +----------------------------------------------------------------------------------------------------------------------- This case tests the payload is valid after forwading large chain packets in loopback vhost-user/virtio-user split ring -all path multi-queues with server mode when vhost uses the asynchronous enqueue and dequeue operations with dsa dpdk driver. -Both iova as VA and PA mode test. +inorder mergeable path multi-queues with server mode when vhost uses the asynchronous enqueue and dequeue operations with dsa dpdk driver. -1. bind 3 dsa port to vfio-pci like common step 1:: +1. Bind 1 DSA device to vfio-pci like common step 1:: ls /dev/dsa #check wq configure, reset if exist - ./usertools/dpdk-devbind.py -u e7:01.0 ec:01.0 f1:01.0 - ./usertools/dpdk-devbind.py -b vfio-pci e7:01.0 ec:01.0 f1:01.0 + ./usertools/dpdk-devbind.py -u e7:01.0 + ./usertools/dpdk-devbind.py -b vfio-pci e7:01.0 -2. Launch vhost:: +2. 
Launch vhost-user:: - # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 10-18 -n 4 -a 0000:e7:01.0,max_queues=4 \ - --vdev 'eth_vhost0,iface=vhost-net0,queues=8,client=1,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;rxq2;rxq3;rxq4;rxq5;rxq6;rxq7]' \ - --iova=va -- -i --nb-cores=4 --rxq=8 --txq=8 --txd=1024 --rxd=1024 \ - --lcore-dma=[lcore11@0000:e7:01.0-q0,lcore12@0000:e7:01.0-q0,lcore13@0000:e7:01.0-q1,lcore13@0000:e7:01.0-q2,lcore14@0000:e7:01.0-q1,lcore14@0000:e7:01.0-q2] + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 10-18 -n 4 -a 0000:e7:01.0,max_queues=8 \ + --vdev 'eth_vhost0,iface=vhost-net0,queues=8,client=1,dmas=[txq0@0000:e7:01.0-q0;txq1@0000:e7:01.0-q0;txq2@0000:e7:01.0-q0;txq3@0000:e7:01.0-q0;txq4@0000:e7:01.0-q1;txq5@0000:e7:01.0-q1;rxq2@0000:e7:01.0-q2;rxq3@0000:e7:01.0-q2;rxq4@0000:e7:01.0-q3;rxq5@0000:e7:01.0-q3;rxq6@0000:e7:01.0-q3;rxq7@0000:e7:01.0-q3]' \ + --iova=va -- -i --nb-cores=4 --rxq=8 --txq=8 --txd=1024 --rxd=1024 -3. Launch virtio-user with split ring mergeable inorder path:: +3. Launch virtio-user with split ring inorder mergeable path:: # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 2-6 -n 4 --file-prefix=virtio-user0 --no-pci \ --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=./vhost-net0,queues=8,mrg_rxbuf=1,in_order=1,server=1 \ -- -i --nb-cores=4 --rxq=8 --txq=8 --txd=1024 --rxd=1024 - testpmd>set fwd csum - testpmd>start + testpmd> set fwd csum + testpmd> start 4. Attach pdump secondary process to primary process by same file-prefix:: @@ -210,9 +197,9 @@ Both iova as VA and PA mode test. --pdump 'device_id=net_virtio_user0,queue=0,rx-dev=./pdump-virtio-rx-0.pcap,mbuf-size=8000' \ --pdump 'device_id=net_virtio_user0,queue=3,rx-dev=./pdump-virtio-rx-3.pcap,mbuf-size=8000' -5. Send large pkts from vhost, check loopback performance can get expected and each queue can receive packets:: +5. 
Send large packets from vhost, check loopback performance can get expected and each queue can receive packets:: - testpmd> set fwd csum + testpmd> set fwd mac testpmd> set txpkts 64,64,64,2000,2000,2000 testpmd> set burst 1 testpmd> start tx_first 1 @@ -221,104 +208,198 @@ Both iova as VA and PA mode test. 6. Quit pdump, check all the packets length are 6192 Byte in the pcap file and the payload in receive packets are same. -7. Quit and relaunch vhost and rerun step 4-6. +Test Case 4: Loopback split ring mergeable path multi-queues payload check with server mode and dsa dpdk driver +--------------------------------------------------------------------------------------------------------------- +This case tests the payload is valid after forwading large chain packets in loopback vhost-user/virtio-user split ring +mergeable path multi-queues with server mode when vhost uses the asynchronous enqueue and dequeue operations with dsa dpdk driver. + +1. Bind 1 DSA device to vfio-pci like common step 1:: + + ls /dev/dsa #check wq configure, reset if exist + ./usertools/dpdk-devbind.py -u e7:01.0 + ./usertools/dpdk-devbind.py -b vfio-pci e7:01.0 + +2. Launch vhost-user:: -8. Quit and relaunch virtio with split ring mergeable path as below:: + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 10-18 -n 4 -a 0000:e7:01.0,max_queues=8 \ + --vdev 'eth_vhost0,iface=vhost-net0,queues=8,client=1,dmas=[txq0@0000:e7:01.0-q0;txq1@0000:e7:01.0-q0;txq2@0000:e7:01.0-q0;txq3@0000:e7:01.0-q0;txq4@0000:e7:01.0-q1;txq5@0000:e7:01.0-q1;rxq2@0000:e7:01.0-q2;rxq3@0000:e7:01.0-q2;rxq4@0000:e7:01.0-q3;rxq5@0000:e7:01.0-q3;rxq6@0000:e7:01.0-q3;rxq7@0000:e7:01.0-q3]' \ + --iova=va -- -i --nb-cores=4 --rxq=8 --txq=8 --txd=1024 --rxd=1024 + +3. 
Launch virtio-user with split ring mergeable path:: # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 2-6 -n 4 --file-prefix=virtio-user0 --no-pci \ --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=./vhost-net0,queues=8,mrg_rxbuf=1,in_order=0,server=1 \ -- -i --nb-cores=4 --rxq=8 --txq=8 --txd=1024 --rxd=1024 - testpmd>set fwd csum - testpmd>start + testpmd> set fwd csum + testpmd> start + +4. Attach pdump secondary process to primary process by same file-prefix:: + + # ./x86_64-native-linuxapp-gcc/app/dpdk-pdump -v --file-prefix=virtio-user0 -- \ + --pdump 'device_id=net_virtio_user0,queue=0,rx-dev=./pdump-virtio-rx-0.pcap,mbuf-size=8000' \ + --pdump 'device_id=net_virtio_user0,queue=3,rx-dev=./pdump-virtio-rx-3.pcap,mbuf-size=8000' + +5. Send large packets from vhost, check loopback performance can get expected and each queue can receive packets:: + + testpmd> set fwd mac + testpmd> set txpkts 64,64,64,2000,2000,2000 + testpmd> set burst 1 + testpmd> start tx_first 1 + testpmd> show port stats all + testpmd> stop + +6. Quit pdump, check all the packets length are 6192 Byte in the pcap file and the payload in receive packets are same. + +Test Case 5: Loopback split ring non-mergeable path multi-queues payload check with server mode and dsa dpdk driver +------------------------------------------------------------------------------------------------------------------- +This case tests the payload is valid after forwading large chain packets in loopback vhost-user/virtio-user split ring +non-mergeable path multi-queues with server mode when vhost uses the asynchronous enqueue and dequeue operations with dsa dpdk driver. + +1. Bind 1 DSA device to vfio-pci like common step 1:: + + ls /dev/dsa #check wq configure, reset if exist + ./usertools/dpdk-devbind.py -u e7:01.0 + ./usertools/dpdk-devbind.py -b vfio-pci -9. Rerun steps 4-7. +2. Launch vhost-user:: -10. 
Quit and relaunch virtio with split ring non-mergeable path as below:: + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 10-18 -n 4 -a 0000:e7:01.0,max_queues=8 \ + --vdev 'eth_vhost0,iface=vhost-net0,queues=8,client=1,dmas=[txq0@0000:e7:01.0-q0;txq1@0000:e7:01.0-q0;txq2@0000:e7:01.0-q0;txq3@0000:e7:01.0-q0;txq4@0000:e7:01.0-q1;txq5@0000:e7:01.0-q1;rxq2@0000:e7:01.0-q2;rxq3@0000:e7:01.0-q2;rxq4@0000:e7:01.0-q3;rxq5@0000:e7:01.0-q3;rxq6@0000:e7:01.0-q3;rxq7@0000:e7:01.0-q3]' \ + --iova=va -- -i --nb-cores=4 --rxq=8 --txq=8 --txd=1024 --rxd=1024 + +3. Launch virtio-user with split ring non-mergeable path:: # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 2-6 -n 4 --file-prefix=virtio-user0 --no-pci \ --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=./vhost-net0,queues=8,mrg_rxbuf=0,in_order=0,server=1 \ -- -i --enable-hw-vlan-strip --nb-cores=4 --rxq=8 --txq=8 --txd=1024 --rxd=1024 - testpmd>set fwd csum - testpmd>start + testpmd> set fwd csum + testpmd> start + +4. Attach pdump secondary process to primary process by same file-prefix:: -11. Rerun step 4. + # ./x86_64-native-linuxapp-gcc/app/dpdk-pdump -v --file-prefix=virtio-user0 -- \ + --pdump 'device_id=net_virtio_user0,queue=0,rx-dev=./pdump-virtio-rx-0.pcap,mbuf-size=8000' \ + --pdump 'device_id=net_virtio_user0,queue=3,rx-dev=./pdump-virtio-rx-3.pcap,mbuf-size=8000' -12. Send pkts from vhost, check loopback performance can get expected and each queue can receive packets:: +5. Send large packets from vhost, check loopback performance can get expected and each queue can receive packets:: - testpmd> set fwd csum + testpmd> set fwd mac testpmd> set txpkts 64,128,256,512 testpmd> set burst 1 testpmd> start tx_first 1 testpmd> show port stats all testpmd> stop -13. Quit pdump, check all the packets length are 960 Byte in the pcap file and the payload in receive packets are same. +6. Quit pdump, check all the packets length are 960 Byte in the pcap file and the payload in receive packets are same. -14. 
Quit and relaunch vhost and rerun step 11-13. +Test Case 6: Loopback split ring inorder non-mergeable path multi-queues payload check with server mode and dsa dpdk driver +--------------------------------------------------------------------------------------------------------------------------- +This case tests the payload is valid after forwading large chain packets in loopback vhost-user/virtio-user split ring +inorder non-mergeable path multi-queues with server mode when vhost uses the asynchronous enqueue and dequeue operations with dsa dpdk driver. + +1. Bind 1 DSA device to vfio-pci like common step 1:: + + ls /dev/dsa #check wq configure, reset if exist + ./usertools/dpdk-devbind.py -u e7:01.0 + ./usertools/dpdk-devbind.py -b vfio-pci e7:01.0 -15. Quit and relaunch virtio with split ring inorder non-mergeable path as below:: +2. Launch vhost-user:: + + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 10-18 -n 4 -a 0000:e7:01.0,max_queues=8 \ + --vdev 'eth_vhost0,iface=vhost-net0,queues=8,client=1,dmas=[txq0@0000:e7:01.0-q0;txq1@0000:e7:01.0-q0;txq2@0000:e7:01.0-q0;txq3@0000:e7:01.0-q0;txq4@0000:e7:01.0-q1;txq5@0000:e7:01.0-q1;rxq2@0000:e7:01.0-q2;rxq3@0000:e7:01.0-q2;rxq4@0000:e7:01.0-q3;rxq5@0000:e7:01.0-q3;rxq6@0000:e7:01.0-q3;rxq7@0000:e7:01.0-q3]' \ + --iova=va -- -i --nb-cores=4 --rxq=8 --txq=8 --txd=1024 --rxd=1024 + +3. Launch virtio-user with split ring inorder non-mergeable path:: # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 2-6 -n 4 --file-prefix=virtio-user0 --no-pci \ --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=./vhost-net0,queues=8,mrg_rxbuf=0,in_order=1,server=1 \ -- -i --nb-cores=4 --rxq=8 --txq=8 --txd=1024 --rxd=1024 - testpmd>set fwd csum - testpmd>start + testpmd> set fwd csum + testpmd> start + +4. 
Attach pdump secondary process to primary process by same file-prefix:: + + # ./x86_64-native-linuxapp-gcc/app/dpdk-pdump -v --file-prefix=virtio-user0 -- \ + --pdump 'device_id=net_virtio_user0,queue=0,rx-dev=./pdump-virtio-rx-0.pcap,mbuf-size=8000' \ + --pdump 'device_id=net_virtio_user0,queue=3,rx-dev=./pdump-virtio-rx-3.pcap,mbuf-size=8000' + +5. Send large packets from vhost, check loopback performance can get expected and each queue can receive packets:: + + testpmd> set fwd mac + testpmd> set txpkts 64,128,256,512 + testpmd> set burst 1 + testpmd> start tx_first 1 + testpmd> show port stats all + testpmd> stop + +6. Quit pdump, check all the packets length are 960 Byte in the pcap file and the payload in receive packets are same. + +Test Case 7: Loopback split ring vectorized path multi-queues payload check with server mode and dsa dpdk driver +---------------------------------------------------------------------------------------------------------------- +This case tests the payload is valid after forwading large chain packets in loopback vhost-user/virtio-user split ring +vectorized path multi-queues with server mode when vhost uses the asynchronous enqueue and dequeue operations with dsa dpdk driver. + +1. Bind 1 DSA device to vfio-pci like common step 1:: -16. Rerun step 11-14. + ls /dev/dsa #check wq configure, reset if exist + ./usertools/dpdk-devbind.py -u e7:01.0 + ./usertools/dpdk-devbind.py -b vfio-pci + +2. Launch vhost-user:: -17. 
Quit and relaunch virtio with split ring vectorized path as below:: + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 10-18 -n 4 -a 0000:e7:01.0,max_queues=8 \ + --vdev 'eth_vhost0,iface=vhost-net0,queues=8,client=1,dmas=[txq0@0000:e7:01.0-q0;txq1@0000:e7:01.0-q0;txq2@0000:e7:01.0-q0;txq3@0000:e7:01.0-q0;txq4@0000:e7:01.0-q1;txq5@0000:e7:01.0-q1;rxq2@0000:e7:01.0-q2;rxq3@0000:e7:01.0-q2;rxq4@0000:e7:01.0-q3;rxq5@0000:e7:01.0-q3;rxq6@0000:e7:01.0-q3;rxq7@0000:e7:01.0-q3]' \ + --iova=va -- -i --nb-cores=4 --rxq=8 --txq=8 --txd=1024 --rxd=1024 + +3. Launch virtio-user with split ring vectorized path:: # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 2-6 -n 4 --file-prefix=virtio-user0 --no-pci \ --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=./vhost-net0,queues=8,mrg_rxbuf=0,in_order=0,vectorized=1,server=1 \ -- -i --nb-cores=4 --rxq=8 --txq=8 --txd=1024 --rxd=1024 - testpmd>set fwd csum - testpmd>start - -18. Rerun step 11-14. - -19. Quit and relaunch vhost with diff channel:: + testpmd> set fwd csum + testpmd> start - # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 10-18 -n 4 -a 0000:e7:01.0 -a 0000:ec:01.0 -a 0000:f1:01.0 \ - --vdev 'eth_vhost0,iface=vhost-net0,queues=8,client=1,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;rxq2;rxq3;rxq4;rxq5;rxq6;rxq7]' \ - --iova=va -- -i --nb-cores=5 --rxq=8 --txq=8 --txd=1024 --rxd=1024 \ - --lcore-dma=[lcore11@0000:e7:01.0-q0,lcore12@0000:e7:01.0-q0,lcore13@0000:ec:01.0-q1,lcore13@0000:f1:01.0-q2,lcore14@0000:ec:01.0-q1,lcore14@0000:f1:01.0-q2,lcore15@0000:ec:01.0-q1,lcore15@0000:f1:01.0-q2] +4. Attach pdump secondary process to primary process by same file-prefix:: -20. Rerun steps 11-14. + # ./x86_64-native-linuxapp-gcc/app/dpdk-pdump -v --file-prefix=virtio-user0 -- \ + --pdump 'device_id=net_virtio_user0,queue=0,rx-dev=./pdump-virtio-rx-0.pcap,mbuf-size=8000' \ + --pdump 'device_id=net_virtio_user0,queue=3,rx-dev=./pdump-virtio-rx-3.pcap,mbuf-size=8000' -21. Quit and relaunch vhost w/ iova=pa:: +5. 
Send large packets from vhost, check loopback performance can get expected and each queue can receive packets:: - # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 10-18 -n 4 -a 0000:e7:01.0 \ - --vdev 'eth_vhost0,iface=vhost-net0,queues=8,client=1,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;rxq2;rxq3;rxq4;rxq5;rxq6;rxq7]' \ - --iova=pa -- -i --nb-cores=5 --rxq=8 --txq=8 --txd=1024 --rxd=1024 \ - --lcore-dma=[lcore11@0000:e7:01.0-q0,lcore12@0000:e7:01.0-q0,lcore13@0000:e7:01.0-q1,lcore13@0000:e7:01.0-q2,lcore14@0000:e7:01.0-q1,lcore14@0000:e7:01.0-q2,lcore15@0000:e7:01.0-q1,lcore15@0000:e7:01.0-q2] + testpmd> set fwd mac + testpmd> set txpkts 64,128,256,512 + testpmd> set burst 1 + testpmd> start tx_first 1 + testpmd> show port stats all + testpmd> stop -22. Rerun steps 11-14. +6. Quit pdump, check all the packets length are 960 Byte in the pcap file and the payload in receive packets are same. -Test Case 4: Loopback packed ring all path server mode and multi-queues payload check with dsa dpdk driver ------------------------------------------------------------------------------------------------------------- +Test Case 8: Loopback packed ring inorder mergeable path multi-queues payload check with server mode and dsa dpdk driver +------------------------------------------------------------------------------------------------------------------------ This case tests the payload is valid after forwading large chain packets in loopback vhost-user/virtio-user packed ring -all path multi-queues with server mode when vhost uses the asynchronous enqueue and dequeue operations with dsa dpdk driver. Both iova as VA and PA mode test. +inorder mergeable path multi-queues with server mode when vhost uses the asynchronous enqueue and dequeue operations with dsa dpdk driver. -1. bind 2 dsa port to vfio-pci like common step 1:: +1. 
Bind 1 DSA device to vfio-pci like common step 1:: ls /dev/dsa #check wq configure, reset if exist - ./usertools/dpdk-devbind.py -u e7:01.0 ec:01.0 - ./usertools/dpdk-devbind.py -b vfio-pci e7:01.0 ec:01.0 + ./usertools/dpdk-devbind.py -u e7:01.0 + ./usertools/dpdk-devbind.py -b vfio-pci e7:01.0 -2. Launch vhost:: +2. Launch vhost-user:: # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 10-18 -n 4 -a 0000:e7:01.0,max_queues=4 \ - --vdev 'eth_vhost0,iface=vhost-net0,queues=8,client=1,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;rxq2;rxq3;rxq4;rxq5;rxq6;rxq7]' \ - --iova=va -- -i --nb-cores=4 --rxq=8 --txq=8 --txd=1024 --rxd=1024 \ - --lcore-dma=[lcore11@0000:e7:01.0-q0,lcore12@0000:e7:01.0-q1,lcore13@0000:e7:01.0-q2,lcore14@0000:e7:01.0-q3] + --vdev 'eth_vhost0,iface=vhost-net0,queues=8,client=1,dmas=[txq0@0000:e7:01.0-q0;txq1@0000:e7:01.0-q0;txq2@0000:e7:01.0-q0;txq3@0000:e7:01.0-q0;txq4@0000:e7:01.0-q1;txq5@0000:e7:01.0-q1;rxq2@0000:e7:01.0-q2;rxq3@0000:e7:01.0-q2;rxq4@0000:e7:01.0-q3;rxq5@0000:e7:01.0-q3;rxq6@0000:e7:01.0-q3;rxq7@0000:e7:01.0-q3]' \ + --iova=va -- -i --nb-cores=4 --rxq=8 --txq=8 --txd=1024 --rxd=1024 -3. Launch virtio-user with packed ring mergeable inorder path:: +3. Launch virtio-user with packed ring inorder mergeable path:: # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 2-6 -n 4 --file-prefix=virtio-user0 --no-pci \ --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=./vhost-net0,queues=8,mrg_rxbuf=1,in_order=1,packed_vq=1,server=1 \ -- -i --nb-cores=4 --rxq=8 --txq=8 --txd=1024 --rxd=1024 - testpmd>set fwd csum - testpmd>start + testpmd> set fwd csum + testpmd> start 4. Attach pdump secondary process to primary process by same file-prefix:: @@ -326,9 +407,9 @@ all path multi-queues with server mode when vhost uses the asynchronous enqueue --pdump 'device_id=net_virtio_user0,queue=0,rx-dev=./pdump-virtio-rx-0.pcap,mbuf-size=8000' \ --pdump 'device_id=net_virtio_user0,queue=3,rx-dev=./pdump-virtio-rx-3.pcap,mbuf-size=8000' -5. 
Send large pkts from vhost, check loopback performance can get expected and each queue can receive packets::
+5. Send large packets from vhost, check loopback performance can get expected and each queue can receive packets::

-    testpmd> set fwd csum
+    testpmd> set fwd mac
     testpmd> set txpkts 64,64,64,2000,2000,2000
     testpmd> set burst 1
     testpmd> start tx_first 1
@@ -337,127 +418,254 @@ all path multi-queues with server mode when vhost uses the asynchronous enqueue
 6. Quit pdump, check all the packets length are 6192 Byte in the pcap file and the payload in receive packets are same.

-7. Quit and relaunch vhost and rerun step 4-6.
+Test Case 9: Loopback packed ring mergeable path multi-queues payload check with server mode and dsa dpdk driver
+----------------------------------------------------------------------------------------------------------------
+This case tests the payload is valid after forwarding large chain packets in loopback vhost-user/virtio-user packed ring
+mergeable path multi-queues with server mode when vhost uses the asynchronous enqueue and dequeue operations with dsa dpdk driver.
+
+1. Bind 1 DSA device to vfio-pci like common step 1::
+
+    ls /dev/dsa #check wq configure, reset if exist
+    ./usertools/dpdk-devbind.py -u e7:01.0
+    ./usertools/dpdk-devbind.py -b vfio-pci e7:01.0
+
+2. Launch vhost-user::
+
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 10-18 -n 4 -a 0000:e7:01.0,max_queues=4 \
+    --vdev 'eth_vhost0,iface=vhost-net0,queues=8,client=1,dmas=[txq0@0000:e7:01.0-q0;txq1@0000:e7:01.0-q0;txq2@0000:e7:01.0-q0;txq3@0000:e7:01.0-q0;txq4@0000:e7:01.0-q1;txq5@0000:e7:01.0-q1;rxq2@0000:e7:01.0-q2;rxq3@0000:e7:01.0-q2;rxq4@0000:e7:01.0-q3;rxq5@0000:e7:01.0-q3;rxq6@0000:e7:01.0-q3;rxq7@0000:e7:01.0-q3]' \
+    --iova=va -- -i --nb-cores=4 --rxq=8 --txq=8 --txd=1024 --rxd=1024

-8. Quit and relaunch virtio with packed ring mergeable path as below::
+3. 
Launch virtio-user with packed ring mergeable path::

     # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 2-6 -n 4 --file-prefix=virtio-user0 --no-pci \
     --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=./vhost-net0,queues=8,mrg_rxbuf=1,in_order=0,packed_vq=1,server=1 \
     -- -i --nb-cores=4 --rxq=8 --txq=8 --txd=1024 --rxd=1024
-    testpmd>set fwd csum
-    testpmd>start
+    testpmd> set fwd mac
+    testpmd> start
+
+4. Attach pdump secondary process to primary process by same file-prefix::
+
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-pdump -v --file-prefix=virtio-user0 -- \
+    --pdump 'device_id=net_virtio_user0,queue=0,rx-dev=./pdump-virtio-rx-0.pcap,mbuf-size=8000' \
+    --pdump 'device_id=net_virtio_user0,queue=3,rx-dev=./pdump-virtio-rx-3.pcap,mbuf-size=8000'
+
+5. Send large packets from vhost, check loopback performance can get expected and each queue can receive packets::

-9. Rerun steps 4-7.
+    testpmd> set fwd csum
+    testpmd> set txpkts 64,64,64,2000,2000,2000
+    testpmd> set burst 1
+    testpmd> start tx_first 1
+    testpmd> show port stats all
+    testpmd> stop

-10. Quit and relaunch virtio with packed ring non-mergeable path as below::
+6. Quit pdump, check all the packets length are 6192 Byte in the pcap file and the payload in receive packets are same.
+
+Test Case 10: Loopback packed ring non-mergeable path multi-queues payload check with server mode and dsa dpdk driver
+---------------------------------------------------------------------------------------------------------------------
+This case tests the payload is valid after forwarding large chain packets in loopback vhost-user/virtio-user packed ring
+non-mergeable path multi-queues with server mode when vhost uses the asynchronous enqueue and dequeue operations with dsa dpdk driver.
+
+1. Bind 1 DSA device to vfio-pci like common step 1::
+
+    ls /dev/dsa #check wq configure, reset if exist
+    ./usertools/dpdk-devbind.py -u e7:01.0
+    ./usertools/dpdk-devbind.py -b vfio-pci e7:01.0
+
+2. 
Launch vhost-user:: + + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 10-18 -n 4 -a 0000:e7:01.0,max_queues=4 \ + --vdev 'eth_vhost0,iface=vhost-net0,queues=8,client=1,dmas=[txq0@0000:e7:01.0-q0;txq1@0000:e7:01.0-q0;txq2@0000:e7:01.0-q0;txq3@0000:e7:01.0-q0;txq4@0000:e7:01.0-q1;txq5@0000:e7:01.0-q1;rxq2@0000:e7:01.0-q2;rxq3@0000:e7:01.0-q2;rxq4@0000:e7:01.0-q3;rxq5@0000:e7:01.0-q3;rxq6@0000:e7:01.0-q3;rxq7@0000:e7:01.0-q3]' \ + --iova=va -- -i --nb-cores=4 --rxq=8 --txq=8 --txd=1024 --rxd=1024 + +3. Launch virtio-user with packed ring non-mergeable path:: # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 2-6 -n 4 --file-prefix=virtio-user0 --no-pci \ --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=./vhost-net0,queues=8,mrg_rxbuf=0,in_order=0,packed_vq=1,server=1 \ -- -i --nb-cores=4 --rxq=8 --txq=8 --txd=1024 --rxd=1024 - testpmd>set fwd csum - testpmd>start + testpmd> set fwd csum + testpmd> start + +4. Attach pdump secondary process to primary process by same file-prefix:: -11. Rerun step 4. + # ./x86_64-native-linuxapp-gcc/app/dpdk-pdump -v --file-prefix=virtio-user0 -- \ + --pdump 'device_id=net_virtio_user0,queue=0,rx-dev=./pdump-virtio-rx-0.pcap,mbuf-size=8000' \ + --pdump 'device_id=net_virtio_user0,queue=3,rx-dev=./pdump-virtio-rx-3.pcap,mbuf-size=8000' -12. Send pkts from vhost, check loopback performance can get expected and each queue can receive packets:: +5. Send large packets from vhost, check loopback performance can get expected and each queue can receive packets:: - testpmd> set fwd csum + testpmd> set fwd mac testpmd> set txpkts 64,128,256,512 testpmd> set burst 1 testpmd> start tx_first 1 testpmd> show port stats all testpmd> stop -13. Quit pdump, check all the packets length are 960 Byte in the pcap file and the payload in receive packets are same. +6. Quit pdump, check all the packets length are 960 Byte in the pcap file and the payload in receive packets are same. 
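The 960-byte figure in step 6 is simple arithmetic: ``set txpkts 64,128,256,512`` makes testpmd chain the four segments into one packet, so every captured frame should total their sum. A minimal sketch of that post-processing check (the helper names are illustrative, not part of the suite; the captured lengths would be extracted from the pdump pcap files by whatever tool the tester prefers):

```python
# Sketch: expected chained-packet length from a `set txpkts ...` segment list,
# plus a check that every captured packet in the pcap matches it.

def expected_chain_len(txpkts):
    """Total wire length of one chained packet built from `set txpkts ...`."""
    return sum(txpkts)

def check_lengths(captured_lengths, txpkts):
    """True if at least one packet was captured and all match the chained length."""
    want = expected_chain_len(txpkts)
    return bool(captured_lengths) and all(l == want for l in captured_lengths)
```

For the mergeable-path cases, ``set txpkts 64,64,64,2000,2000,2000`` yields the 6192-byte expectation by the same arithmetic.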
+
+Test Case 11: Loopback packed ring inorder non-mergeable path multi-queues payload check with server mode and dsa dpdk driver
+-----------------------------------------------------------------------------------------------------------------------------
+This case tests the payload is valid after forwarding large chain packets in loopback vhost-user/virtio-user packed ring
+inorder non-mergeable path multi-queues with server mode when vhost uses the asynchronous enqueue and dequeue operations with dsa dpdk driver.
+
+1. Bind 1 DSA device to vfio-pci like common step 1::
+
+    ls /dev/dsa #check wq configure, reset if exist
+    ./usertools/dpdk-devbind.py -u e7:01.0
+    ./usertools/dpdk-devbind.py -b vfio-pci e7:01.0

-14. Quit and relaunch vhost and rerun step 11-13.
+2. Launch vhost-user::

-15. Quit and relaunch virtio with split ring inorder non-mergeable path as below::
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 10-18 -n 4 -a 0000:e7:01.0,max_queues=4 \
+    --vdev 'eth_vhost0,iface=vhost-net0,queues=8,client=1,dmas=[txq0@0000:e7:01.0-q0;txq1@0000:e7:01.0-q0;txq2@0000:e7:01.0-q0;txq3@0000:e7:01.0-q0;txq4@0000:e7:01.0-q1;txq5@0000:e7:01.0-q1;rxq2@0000:e7:01.0-q2;rxq3@0000:e7:01.0-q2;rxq4@0000:e7:01.0-q3;rxq5@0000:e7:01.0-q3;rxq6@0000:e7:01.0-q3;rxq7@0000:e7:01.0-q3]' \
+    --iova=va -- -i --nb-cores=4 --rxq=8 --txq=8 --txd=1024 --rxd=1024
+
+3. Launch virtio-user with packed ring inorder non-mergeable path::

     # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 2-6 -n 4 --file-prefix=virtio-user0 --no-pci \
     --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=./vhost-net0,queues=8,mrg_rxbuf=0,in_order=1,packed_vq=1,server=1 \
     -- -i --nb-cores=4 --rxq=8 --txq=8 --txd=1024 --rxd=1024
-    testpmd>set fwd csum
-    testpmd>start
+    testpmd> set fwd csum
+    testpmd> start
+
+4. 
Attach pdump secondary process to primary process by same file-prefix::
+
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-pdump -v --file-prefix=virtio-user0 -- \
+    --pdump 'device_id=net_virtio_user0,queue=0,rx-dev=./pdump-virtio-rx-0.pcap,mbuf-size=8000' \
+    --pdump 'device_id=net_virtio_user0,queue=3,rx-dev=./pdump-virtio-rx-3.pcap,mbuf-size=8000'
+
+5. Send large packets from vhost, check loopback performance can get expected and each queue can receive packets::
+
+    testpmd> set fwd mac
+    testpmd> set txpkts 64,128,256,512
+    testpmd> set burst 1
+    testpmd> start tx_first 1
+    testpmd> show port stats all
+    testpmd> stop
+
+6. Quit pdump, check all the packets length are 960 Byte in the pcap file and the payload in receive packets are same.
+
+Test Case 12: Loopback packed ring vectorized path multi-queues payload check with server mode and dsa dpdk driver
+------------------------------------------------------------------------------------------------------------------
+This case tests the payload is valid after forwarding large chain packets in loopback vhost-user/virtio-user packed ring
+vectorized path multi-queues with server mode when vhost uses the asynchronous enqueue and dequeue operations with dsa dpdk driver.
+
+1. Bind 1 DSA device to vfio-pci like common step 1::

-16. Rerun step 11-14.
+    ls /dev/dsa #check wq configure, reset if exist
+    ./usertools/dpdk-devbind.py -u e7:01.0
+    ./usertools/dpdk-devbind.py -b vfio-pci e7:01.0
+
+2. Launch vhost-user::
+
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 10-18 -n 4 -a 0000:e7:01.0,max_queues=4 \
+    --vdev 'eth_vhost0,iface=vhost-net0,queues=8,client=1,dmas=[txq0@0000:e7:01.0-q0;txq1@0000:e7:01.0-q0;txq2@0000:e7:01.0-q0;txq3@0000:e7:01.0-q0;txq4@0000:e7:01.0-q1;txq5@0000:e7:01.0-q1;rxq2@0000:e7:01.0-q2;rxq3@0000:e7:01.0-q2;rxq4@0000:e7:01.0-q3;rxq5@0000:e7:01.0-q3;rxq6@0000:e7:01.0-q3;rxq7@0000:e7:01.0-q3]' \
+    --iova=va -- -i --nb-cores=4 --rxq=8 --txq=8 --txd=1024 --rxd=1024

-17. 
Quit and relaunch virtio with packed ring vectorized path as below:: +3. Launch virtio-user with packed ring vectorized path:: # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 2-6 -n 4 --file-prefix=virtio-user0 --no-pci --force-max-simd-bitwidth=512 \ --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=./vhost-net0,queues=8,mrg_rxbuf=0,in_order=1,packed_vq=1,vectorized=1,server=1 \ -- -i --nb-cores=4 --rxq=8 --txq=8 --txd=1024 --rxd=1024 - testpmd>set fwd csum - testpmd>start + testpmd> set fwd csum + testpmd> start + +4. Attach pdump secondary process to primary process by same file-prefix:: -18. Rerun step 11-14. + # ./x86_64-native-linuxapp-gcc/app/dpdk-pdump -v --file-prefix=virtio-user0 -- \ + --pdump 'device_id=net_virtio_user0,queue=0,rx-dev=./pdump-virtio-rx-0.pcap,mbuf-size=8000' \ + --pdump 'device_id=net_virtio_user0,queue=3,rx-dev=./pdump-virtio-rx-3.pcap,mbuf-size=8000' -19. Quit and relaunch virtio with packed ring vectorized path and ring size is not power of 2 as below:: +5. Send large packets from vhost, check loopback performance can get expected and each queue can receive packets:: - # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 2-6 -n 4 --file-prefix=virtio-user0 --no-pci --force-max-simd-bitwidth=512 \ - --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=./vhost-net0,queues=8,mrg_rxbuf=0,in_order=1,packed_vq=1,vectorized=1,queue_size=1025,server=1 \ - -- -i --nb-cores=4 --rxq=8 --txq=8 --txd=1025 --rxd=1025 - testpmd>set fwd csum - testpmd>start + testpmd> set fwd mac + testpmd> set txpkts 64,128,256,512 + testpmd> set burst 1 + testpmd> start tx_first 1 + testpmd> show port stats all + testpmd> stop -20. Rerun step 11-14. +6. Quit pdump, check all the packets length are 960 Byte in the pcap file and the payload in receive packets are same. -21. 
Quit and relaunch vhost with diff channel::
+Test Case 13: Loopback packed ring vectorized path and ring size is not power of 2 multi-queues payload check with server mode and dsa dpdk driver
+--------------------------------------------------------------------------------------------------------------------------------------------------
+This case tests the payload is valid after forwarding large chain packets in loopback vhost-user/virtio-user packed ring
+vectorized path multi-queues with server mode and ring size is not power of 2 when vhost uses the asynchronous operations with dsa dpdk driver.

-    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 10-18 -n 4 -a 0000:e7:01.0,max_queues=2 -a 0000:ec:01.0,max_queues=2 \
-    --vdev 'eth_vhost0,iface=vhost-net0,queues=8,client=1,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;rxq2;rxq3;rxq4;rxq5;rxq6;rxq7]' \
-    --iova=va -- -i --nb-cores=1 --rxq=8 --txq=8 --txd=1024 --rxd=1024 \
-    --lcore-dma=[lcore11@0000:e7:01.0-q0,lcore11@0000:ec:01.0-q1]
+1. Bind 1 DSA device to vfio-pci like common step 1::

-22. Rerun steps 11-14.
+    ls /dev/dsa #check wq configure, reset if exist
+    ./usertools/dpdk-devbind.py -u e7:01.0
+    ./usertools/dpdk-devbind.py -b vfio-pci e7:01.0

-23. Quit and relaunch vhost w/ iova=pa::
+2. 
Launch vhost-user:: # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 10-18 -n 4 -a 0000:e7:01.0,max_queues=4 \ - --vdev 'eth_vhost0,iface=vhost-net0,queues=8,client=1,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;rxq2;rxq3;rxq4;rxq5;rxq6;rxq7]' \ - --iova=pa -- -i --nb-cores=1 --rxq=8 --txq=8 --txd=1024 --rxd=1024 \ - --lcore-dma=[lcore11@0000:e7:01.0-q1,lcore11@0000:e7:01.0-q3] + --vdev 'eth_vhost0,iface=vhost-net0,queues=8,client=1,dmas=[txq0@0000:e7:01.0-q0;txq1@0000:e7:01.0-q0;txq2@0000:e7:01.0-q0;txq3@0000:e7:01.0-q0;txq4@0000:e7:01.0-q1;txq5@0000:e7:01.0-q1;rxq2@0000:e7:01.0-q2;rxq3@0000:e7:01.0-q2;rxq4@0000:e7:01.0-q3;rxq5@0000:e7:01.0-q3;rxq6@0000:e7:01.0-q3;rxq7@0000:e7:01.0-q3]' \ + --iova=va -- -i --nb-cores=4 --rxq=8 --txq=8 --txd=1024 --rxd=1024 -24. Rerun steps 11-14. +3. Launch virtio-user with packed ring vectorized path and ring size is not power of 2:: -Test Case 5: Loopback split ring server mode large chain packets stress test with dsa kernel driver ---------------------------------------------------------------------------------------------------- + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 2-6 -n 4 --file-prefix=virtio-user0 --no-pci --force-max-simd-bitwidth=512 \ + --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=./vhost-net0,queues=8,mrg_rxbuf=0,in_order=1,packed_vq=1,vectorized=1,queue_size=1025,server=1 \ + -- -i --nb-cores=4 --rxq=8 --txq=8 --txd=1025 --rxd=1025 + testpmd> set fwd csum + testpmd> start + +4. Attach pdump secondary process to primary process by same file-prefix:: + + # ./x86_64-native-linuxapp-gcc/app/dpdk-pdump -v --file-prefix=virtio-user0 -- \ + --pdump 'device_id=net_virtio_user0,queue=0,rx-dev=./pdump-virtio-rx-0.pcap,mbuf-size=8000' \ + --pdump 'device_id=net_virtio_user0,queue=3,rx-dev=./pdump-virtio-rx-3.pcap,mbuf-size=8000' + +5. 
Send large packets from vhost, check loopback performance can get expected and each queue can receive packets:: + + testpmd> set fwd mac + testpmd> set txpkts 64,128,256,512 + testpmd> set burst 1 + testpmd> start tx_first 1 + testpmd> show port stats all + testpmd> stop + +6. Quit pdump, check all the packets length are 960 Byte in the pcap file and the payload in receive packets are same. + +Test Case 14: Loopback split ring server mode large chain packets stress test with dsa kernel driver +---------------------------------------------------------------------------------------------------- This is a stress test case about forwading large chain packets in loopback vhost-user/virtio-user split ring with server mode when vhost uses the asynchronous enqueue and dequeue operations with dsa kernel driver. -1. Bind 1 dsa device to idxd like common step 2:: +1. Bind 1 DSA device to idxd like common step 2:: ls /dev/dsa #check wq configure, reset if exist ./usertools/dpdk-devbind.py -u 6a:01.0 ./usertools/dpdk-devbind.py -b idxd 6a:01.0 - # ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 4 0 + # ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 1 0 ls /dev/dsa #check wq configure success -2. Launch vhost:: +2. Launch vhost-user:: # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-3 --file-prefix=vhost --no-pci \ - --vdev 'net_vhost0,iface=vhost-net0,queues=1,dmas=[txq0;rxq0],client=1' \ - --iova=va -- -i --nb-cores=1 --txd=1024 --rxd=1024 --mbuf-size=45535 --lcore-dma=[lcore3@wq0.2] + --vdev 'net_vhost0,iface=vhost-net0,queues=1,dmas=[txq0@wq0.0;rxq0@wq0.0],client=1' \ + --iova=va -- -i --nb-cores=1 --txd=1024 --rxd=1024 --mbuf-size=65535 3. 
launch virtio and start testpmd:: # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 5-6 -n 4 --file-prefix=testpmd0 --no-pci \ --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=./vhost-net0,queues=1,mrg_rxbuf=1,in_order=0,vectorized=1,queue_size=2048,server=1 \ -- -i --rxq=1 --txq=1 --txd=2048 --rxd=2048 --nb-cores=1 - testpmd>start + testpmd> start -4. Send large pkts from vhost:: +4. Send large packets from vhost and check the stats, packets can loop normally:: - testpmd>set txpkts 45535,45535,45535,45535,45535 - testpmd>start tx_first 32 - testpmd>show port stats all + testpmd> set txpkts 65535,65535 + testpmd> start tx_first 32 + testpmd> show port stats all -Test Case 6: Loopback packed ring server mode large chain packets stress test with dsa kernel driver +Test Case 15: Loopback packed ring server mode large chain packets stress test with dsa kernel driver ----------------------------------------------------------------------------------------------------- This is a stress test case about forwading large chain packets in loopback vhost-user/virtio-user packed ring with server mode when vhost uses the asynchronous operations with dsa kernel driver. -1. Bind 1 dsa port to idxd like common step 2:: +1. Bind 1 DSA device to idxd like common step 2:: ls /dev/dsa #check wq configure, reset if exist ./usertools/dpdk-devbind.py -u 6a:01.0 @@ -465,31 +673,31 @@ when vhost uses the asynchronous operations with dsa kernel driver. # ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 1 0 ls /dev/dsa #check wq configure success -2. Launch vhost:: +2. Launch vhost-user:: # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 2-3 -n 4 --no-pci \ - --vdev 'eth_vhost0,iface=vhost-net0,queues=1,dmas=[txq0;rxq0],client=1' \ - --iova=va -- -i --nb-cores=1 --mbuf-size=45535 --lcore-dma=[lcore3@wq0.0] + --vdev 'eth_vhost0,iface=vhost-net0,queues=1,dmas=[txq0@wq0.0;rxq0@wq0.0],client=1' \ + --iova=va -- -i --nb-cores=1 --mbuf-size=65535 3. 
launch virtio and start testpmd:: # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 5-6 -n 4 --file-prefix=testpmd0 --no-pci \ --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=./vhost-net0,queues=1,mrg_rxbuf=1,in_order=0,vectorized=1,packed_vq=1,queue_size=2048,server=1 \ -- -i --rxq=1 --txq=1 --txd=2048 --rxd=2048 --nb-cores=1 - testpmd>start + testpmd> start -4. Send large pkts from vhost and check the stats:: +4. Send large packets from vhost and check the stats, packets can loop normally:: - testpmd>set txpkts 45535,45535,45535,45535,45535 - testpmd>start tx_first 32 - testpmd>show port stats all + testpmd> set txpkts 65535,65535 + testpmd> start tx_first 32 + testpmd> show port stats all -Test Case 7: Loopback split ring all path server mode and multi-queues payload check with dsa kernel driver -------------------------------------------------------------------------------------------------------------- +Test Case 16: Loopback split ring inorder mergeable path multi-queues payload check with server mode and dsa kernel driver +-------------------------------------------------------------------------------------------------------------------------- This case tests the payload is valid after forwading large chain packets in loopback vhost-user/virtio-user split ring -all path multi-queues with server mode when vhost uses the asynchronous enqueue and dequeue operations with dsa kernel driver. +inorder mergeable path multi-queues with server mode when vhost uses the asynchronous operations with dsa kernel driver. -1. bind 2 dsa port to idxd like common step 2:: +1. Bind 2 DSA device to idxd like common step 2:: ls /dev/dsa #check wq configure, reset if exist ./usertools/dpdk-devbind.py -u 6a:01.0 6f:01.0 @@ -498,20 +706,19 @@ all path multi-queues with server mode when vhost uses the asynchronous enqueue # ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 1 ls /dev/dsa #check wq configure success -2. Launch vhost:: +2. 
Launch vhost-user:: # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 10-18 -n 4 --no-pci \ - --vdev 'eth_vhost0,iface=vhost-net0,queues=8,client=1,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;rxq2;rxq3;rxq4;rxq5;rxq6;rxq7]' \ - --iova=va -- -i --nb-cores=5 --rxq=8 --txq=8 --txd=1024 --rxd=1024 \ - --lcore-dma=[lcore11@wq0.0,lcore12@wq0.0,lcore13@wq0.1,lcore13@wq0.2,lcore14@wq0.1,lcore14@wq0.2,lcore15@wq0.1,lcore15@wq0.2] + --vdev 'eth_vhost0,iface=vhost-net0,queues=8,client=1,dmas=[txq0@wq0.0;txq1@wq0.0;txq2@wq0.0;txq3@wq0.0;txq4@wq0.1;txq5@wq0.1;rxq2@wq1.0;rxq3@wq1.0;rxq4@wq1.1;rxq5@wq1.1;rxq6@wq1.1;rxq7@wq1.1]' \ + --iova=va -- -i --nb-cores=5 --rxq=8 --txq=8 --txd=1024 --rxd=1024 -3. Launch virtio-user with split ring mergeable inorder path:: +3. Launch virtio-user with split ring inorder mergeable path:: # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 5-6 -n 4 --file-prefix=virtio-user0 --no-pci \ --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=./vhost-net0,queues=8,mrg_rxbuf=1,in_order=1,server=1 \ -- -i --nb-cores=1 --rxq=8 --txq=8 --txd=1024 --rxd=1024 - testpmd>set fwd csum - testpmd>start + testpmd> set fwd csum + testpmd> start 4. Attach pdump secondary process to primary process by same file-prefix:: @@ -519,9 +726,9 @@ all path multi-queues with server mode when vhost uses the asynchronous enqueue --pdump 'device_id=net_virtio_user0,queue=0,rx-dev=./pdump-virtio-rx-0.pcap,mbuf-size=8000' \ --pdump 'device_id=net_virtio_user0,queue=3,rx-dev=./pdump-virtio-rx-3.pcap,mbuf-size=8000' -5. Send large pkts from vhost, check loopback performance can get expected and each queue can receive packets:: +5. Send large packets from vhost, check loopback performance can get expected and each queue can receive packets:: - testpmd> set fwd csum + testpmd> set fwd mac testpmd> set txpkts 64,64,64,2000,2000,2000 testpmd> set burst 1 testpmd> start tx_first 1 @@ -530,98 +737,253 @@ all path multi-queues with server mode when vhost uses the asynchronous enqueue 6. 
Quit pdump, check all the packets length are 6192 Byte in the pcap file and the payload in receive packets are same.

-7. Quit and relaunch vhost and rerun step 4-6.
+7. Quit and relaunch vhost w/ diff channel::

-8. Quit and relaunch virtio with split ring mergeable path as below::
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 10-18 -n 4 --no-pci \
+    --vdev 'eth_vhost0,iface=vhost-net0,queues=8,client=1,dmas=[txq0@wq0.0;txq1@wq0.1;txq2@wq0.2;txq3@wq0.3;txq4@wq0.4;txq5@wq0.5;rxq2@wq1.2;rxq3@wq1.3;rxq4@wq1.4;rxq5@wq1.5;rxq6@wq1.6;rxq7@wq1.7]' \
+    --iova=va -- -i --nb-cores=5 --rxq=8 --txq=8 --txd=1024 --rxd=1024
+
+8. Rerun step 4-6.
+
+Test Case 17: Loopback split ring mergeable path multi-queues payload check with server mode and dsa kernel driver
+------------------------------------------------------------------------------------------------------------------
+This case tests the payload is valid after forwarding large chain packets in loopback vhost-user/virtio-user split ring
+mergeable path multi-queues with server mode when vhost uses the asynchronous operations with dsa kernel driver.
+
+1. Bind 2 DSA devices to idxd like common step 2::
+
+    ls /dev/dsa #check wq configure, reset if exist
+    ./usertools/dpdk-devbind.py -u 6a:01.0 6f:01.0
+    ./usertools/dpdk-devbind.py -b idxd 6a:01.0 6f:01.0
+    # ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 0
+    # ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 1
+    ls /dev/dsa #check wq configure success
+
+2. Launch vhost-user::
+
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 10-18 -n 4 --no-pci \
+    --vdev 'eth_vhost0,iface=vhost-net0,queues=8,client=1,dmas=[txq0@wq0.0;txq1@wq0.0;txq2@wq0.0;txq3@wq0.0;txq4@wq0.1;txq5@wq0.1;rxq2@wq1.0;rxq3@wq1.0;rxq4@wq1.1;rxq5@wq1.1;rxq6@wq1.1;rxq7@wq1.1]' \
+    --iova=va -- -i --nb-cores=5 --rxq=8 --txq=8 --txd=1024 --rxd=1024
+
+3. 
Launch virtio-user with split ring mergeable path:: # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 5-6 -n 4 --file-prefix=virtio-user0 --no-pci \ --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=./vhost-net0,queues=8,mrg_rxbuf=1,in_order=0,server=1 \ -- -i --nb-cores=1 --rxq=8 --txq=8 --txd=1024 --rxd=1024 - testpmd>set fwd csum - testpmd>start + testpmd> set fwd csum + testpmd> start + +4. Attach pdump secondary process to primary process by same file-prefix:: + + # ./x86_64-native-linuxapp-gcc/app/dpdk-pdump -v --file-prefix=virtio-user0 -- \ + --pdump 'device_id=net_virtio_user0,queue=0,rx-dev=./pdump-virtio-rx-0.pcap,mbuf-size=8000' \ + --pdump 'device_id=net_virtio_user0,queue=3,rx-dev=./pdump-virtio-rx-3.pcap,mbuf-size=8000' + +5. Send large packets from vhost, check loopback performance can get expected and each queue can receive packets:: + + testpmd> set fwd mac + testpmd> set txpkts 64,64,64,2000,2000,2000 + testpmd> set burst 1 + testpmd> start tx_first 1 + testpmd> show port stats all + testpmd> stop + +6. Quit pdump, check all the packets length are 6192 Byte in the pcap file and the payload in receive packets are same. + +7. Quit and relaunch vhost w/ diff channel:: + + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 10-18 -n 4 --no-pci \ + --vdev 'eth_vhost0,iface=vhost-net0,queues=8,client=1,dmas=[txq0@wq0.0;txq1@wq0.1;txq2@wq0.2;txq3@wq0.3;txq4@wq0.4;txq5@wq0.5;rxq2@wq1.2;rxq3@wq1.3;rxq4@wq1.4;rxq5@wq1.5;rxq6@wq1.6;rxq7@wq1.7]' \ + --iova=va -- -i --nb-cores=5 --rxq=8 --txq=8 --txd=1024 --rxd=1024 + +8. Rerun step 4-6. 
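The ``dmas=[...]`` device argument in the vhost command lines above maps each virtqueue to a DSA work queue, and the mapping is M:N: several virtqueues may share one WQ (step 2) or each may get its own (the "diff channel" relaunch in step 7). A sketch of how such a list decomposes (``parse_dmas`` is an illustrative helper for reading the argument, not a DPDK API):

```python
# Sketch: split a vhost dmas= spec into a virtqueue -> work-queue dict and
# find the WQs that are shared by more than one virtqueue (the M:N mapping).

def parse_dmas(spec):
    """'txq0@wq0.0;txq1@wq0.0;...' -> {'txq0': 'wq0.0', 'txq1': 'wq0.0', ...}"""
    return dict(item.split("@", 1) for item in spec.split(";") if item)

mapping = parse_dmas("txq0@wq0.0;txq1@wq0.0;txq2@wq0.0;txq3@wq0.0;rxq2@wq1.0;rxq3@wq1.0")
# txq0-txq3 share wq0.0 and rxq2/rxq3 share wq1.0: one WQ serves many queues.
shared = {wq for wq in mapping.values() if list(mapping.values()).count(wq) > 1}
```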
+
+Test Case 18: Loopback split ring non-mergeable path multi-queues payload check with server mode and dsa kernel driver
+----------------------------------------------------------------------------------------------------------------------
+This case tests the payload is valid after forwarding large chain packets in loopback vhost-user/virtio-user split ring
+non-mergeable path multi-queues with server mode when vhost uses the asynchronous operations with dsa kernel driver.
+
+1. Bind 2 DSA devices to idxd like common step 2::

-9. Rerun steps 4-7.
+    ls /dev/dsa #check wq configure, reset if exist
+    ./usertools/dpdk-devbind.py -u 6a:01.0 6f:01.0
+    ./usertools/dpdk-devbind.py -b idxd 6a:01.0 6f:01.0
+    # ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 0
+    # ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 1
+    ls /dev/dsa #check wq configure success
+
+2. Launch vhost-user::
+
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 10-18 -n 4 --no-pci \
+    --vdev 'eth_vhost0,iface=vhost-net0,queues=8,client=1,dmas=[txq0@wq0.0;txq1@wq0.0;txq2@wq0.0;txq3@wq0.0;txq4@wq0.1;txq5@wq0.1;rxq2@wq1.0;rxq3@wq1.0;rxq4@wq1.1;rxq5@wq1.1;rxq6@wq1.1;rxq7@wq1.1]' \
+    --iova=va -- -i --nb-cores=5 --rxq=8 --txq=8 --txd=1024 --rxd=1024

-10. Quit and relaunch virtio with split ring non-mergeable path as below::
+3. Launch virtio-user with split ring non-mergeable path::

     # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 5-6 -n 4 --file-prefix=virtio-user0 --no-pci \
     --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=./vhost-net0,queues=8,mrg_rxbuf=0,in_order=0,server=1 \
     -- -i --enable-hw-vlan-strip --nb-cores=1 --rxq=8 --txq=8 --txd=1024 --rxd=1024
-    testpmd>set fwd csum
-    testpmd>start
+    testpmd> set fwd csum
+    testpmd> start
+
+4. Attach pdump secondary process to primary process by same file-prefix::

-11. Rerun step 4. 
+
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-pdump -v --file-prefix=virtio-user0 -- \
+    --pdump 'device_id=net_virtio_user0,queue=0,rx-dev=./pdump-virtio-rx-0.pcap,mbuf-size=8000' \
+    --pdump 'device_id=net_virtio_user0,queue=3,rx-dev=./pdump-virtio-rx-3.pcap,mbuf-size=8000'

-12. Send pkts from vhost, check loopback performance can get expected and each queue can receive packets::
+5. Send large packets from vhost, check loopback performance can get expected and each queue can receive packets::

-    testpmd> set fwd csum
+    testpmd> set fwd mac
     testpmd> set txpkts 64,128,256,512
     testpmd> set burst 1
     testpmd> start tx_first 1
     testpmd> show port stats all
     testpmd> stop

-13. Quit pdump, check all the packets length are 960 Byte in the pcap file and the payload in receive packets are same.
+6. Quit pdump, check all the packets length are 960 Byte in the pcap file and the payload in receive packets are same.

-14. Quit and relaunch vhost and rerun step 11-13.
+7. Quit and relaunch vhost w/ diff channel::
+
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 10-18 -n 4 --no-pci \
+    --vdev 'eth_vhost0,iface=vhost-net0,queues=8,client=1,dmas=[txq0@wq0.0;txq1@wq0.1;txq2@wq0.2;txq3@wq0.3;txq4@wq0.4;txq5@wq0.5;rxq2@wq1.2;rxq3@wq1.3;rxq4@wq1.4;rxq5@wq1.5;rxq6@wq1.6;rxq7@wq1.7]' \
+    --iova=va -- -i --nb-cores=5 --rxq=8 --txq=8 --txd=1024 --rxd=1024
+
+8. Rerun step 4-6.
+
+Test Case 19: Loopback split ring inorder non-mergeable path multi-queues payload check with server mode and dsa kernel driver
+------------------------------------------------------------------------------------------------------------------------------
+This case tests the payload is valid after forwarding large chain packets in loopback vhost-user/virtio-user split ring
+inorder non-mergeable path multi-queues with server mode when vhost uses the asynchronous operations with dsa kernel driver.
+
+1. 
Bind 2 DSA device to idxd like common step 2:: + + ls /dev/dsa #check wq configure, reset if exist + ./usertools/dpdk-devbind.py -u 6a:01.0 6f:01.0 + ./usertools/dpdk-devbind.py -b idxd 6a:01.0 6f:01.0 + # ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 0 + # ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 1 + ls /dev/dsa #check wq configure success + +2. Launch vhost-user:: + + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 10-18 -n 4 --no-pci \ + --vdev 'eth_vhost0,iface=vhost-net0,queues=8,client=1,dmas=[txq0@wq0.0;txq1@wq0.0;txq2@wq0.0;txq3@wq0.0;txq4@wq0.1;txq5@wq0.1;rxq2@wq1.0;rxq3@wq1.0;rxq4@wq1.1;rxq5@wq1.1;rxq6@wq1.1;rxq7@wq1.1]' \ + --iova=va -- -i --nb-cores=5 --rxq=8 --txq=8 --txd=1024 --rxd=1024 -15. Quit and relaunch virtio with split ring inorder non-mergeable path as below:: +3. Launch virtio-user with split ring inorder non-mergeable path:: # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 5-6 -n 4 --file-prefix=virtio-user0 --no-pci \ --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=./vhost-net0,queues=8,mrg_rxbuf=0,in_order=1,server=1 \ -- -i --nb-cores=1 --rxq=8 --txq=8 --txd=1024 --rxd=1024 - testpmd>set fwd csum - testpmd>start + testpmd> set fwd csum + testpmd> start + +4. Attach pdump secondary process to primary process by same file-prefix:: + + # ./x86_64-native-linuxapp-gcc/app/dpdk-pdump -v --file-prefix=virtio-user0 -- \ + --pdump 'device_id=net_virtio_user0,queue=0,rx-dev=./pdump-virtio-rx-0.pcap,mbuf-size=8000' \ + --pdump 'device_id=net_virtio_user0,queue=3,rx-dev=./pdump-virtio-rx-3.pcap,mbuf-size=8000' + +5. Send large packets from vhost, check loopback performance can get expected and each queue can receive packets:: + + testpmd> set fwd mac + testpmd> set txpkts 64,128,256,512 + testpmd> set burst 1 + testpmd> start tx_first 1 + testpmd> show port stats all + testpmd> stop + +6. Quit pdump, check all the packets length are 960 Byte in the pcap file and the payload in receive packets are same. + +7. 
Quit and relaunch vhost w/ diff channel::

-16. Rerun step 11-14.
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 10-18 -n 4 --no-pci \
+    --vdev 'eth_vhost0,iface=vhost-net0,queues=8,client=1,dmas=[txq0@wq0.0;txq1@wq0.1;txq2@wq0.2;txq3@wq0.3;txq4@wq0.4;txq5@wq0.5;rxq2@wq1.2;rxq3@wq1.3;rxq4@wq1.4;rxq5@wq1.5;rxq6@wq1.6;rxq7@wq1.7]' \
+    --iova=va -- -i --nb-cores=5 --rxq=8 --txq=8 --txd=1024 --rxd=1024
+
+8. Rerun step 4-6.
+
+Test Case 20: Loopback split ring vectorized path multi-queues payload check with server mode and dsa kernel driver
+-------------------------------------------------------------------------------------------------------------------
+This case tests the payload is valid after forwarding large chain packets in loopback vhost-user/virtio-user split ring
+vectorized path multi-queues with server mode when vhost uses the asynchronous operations with dsa kernel driver.
+
+1. Bind 2 DSA devices to idxd like common step 2::
+
+    ls /dev/dsa #check wq configure, reset if exist
+    ./usertools/dpdk-devbind.py -u 6a:01.0 6f:01.0
+    ./usertools/dpdk-devbind.py -b idxd 6a:01.0 6f:01.0
+    # ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 0
+    # ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 1
+    ls /dev/dsa #check wq configure success

-17. Quit and relaunch virtio with split ring vectorized path as below::
+2. Launch vhost-user::
+
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 10-18 -n 4 --no-pci \
+    --vdev 'eth_vhost0,iface=vhost-net0,queues=8,client=1,dmas=[txq0@wq0.0;txq1@wq0.0;txq2@wq0.0;txq3@wq0.0;txq4@wq0.1;txq5@wq0.1;rxq2@wq1.0;rxq3@wq1.0;rxq4@wq1.1;rxq5@wq1.1;rxq6@wq1.1;rxq7@wq1.1]' \
+    --iova=va -- -i --nb-cores=5 --rxq=8 --txq=8 --txd=1024 --rxd=1024
+
+3. 
Launch virtio-user with split ring vectorized path:: # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 5-6 -n 4 --file-prefix=virtio-user0 --no-pci \ --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=./vhost-net0,queues=8,mrg_rxbuf=0,in_order=0,vectorized=1,server=1 \ -- -i --nb-cores=1 --rxq=8 --txq=8 --txd=1024 --rxd=1024 - testpmd>set fwd csum - testpmd>start + testpmd> set fwd csum + testpmd> start -18. Rerun step 11-14. +4. Attach the pdump secondary process to the primary process with the same file-prefix:: + + # ./x86_64-native-linuxapp-gcc/app/dpdk-pdump -v --file-prefix=virtio-user0 -- \ + --pdump 'device_id=net_virtio_user0,queue=0,rx-dev=./pdump-virtio-rx-0.pcap,mbuf-size=8000' \ + --pdump 'device_id=net_virtio_user0,queue=3,rx-dev=./pdump-virtio-rx-3.pcap,mbuf-size=8000' + +5. Send large packets from vhost, check that the loopback performance meets expectations and that each queue can receive packets:: + + testpmd> set fwd mac + testpmd> set txpkts 64,128,256,512 + testpmd> set burst 1 + testpmd> start tx_first 1 + testpmd> show port stats all + testpmd> stop -19. Quit and relaunch vhost with diff channel:: +6. Quit pdump, check that all packets in the pcap file are 960 bytes long and that the payloads of the received packets are the same. + +7. Quit and relaunch vhost with diff channel:: # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 10-18 -n 4 --no-pci \ - --vdev 'eth_vhost0,iface=vhost-net0,queues=8,client=1,dmas=[rxq0;rxq1;rxq2;rxq3;rxq4;rxq5;rxq6;rxq7]' \ - --iova=va -- -i --nb-cores=5 --rxq=8 --txq=8 --txd=1024 --rxd=1024 \ - --lcore-dma=[lcore11@wq0.0,lcore12@wq0.0,lcore13@wq0.1,lcore13@wq1.0,lcore14@wq0.1,lcore14@wq1.0,lcore15@wq0.1,lcore15@wq1.0] + --vdev 'eth_vhost0,iface=vhost-net0,queues=8,client=1,dmas=[txq0@wq0.0;txq1@wq0.1;txq2@wq0.2;txq3@wq0.3;txq4@wq0.4;txq5@wq0.5;rxq2@wq1.2;rxq3@wq1.3;rxq4@wq1.4;rxq5@wq1.5;rxq6@wq1.6;rxq7@wq1.7]' \ + --iova=va -- -i --nb-cores=5 --rxq=8 --txq=8 --txd=1024 --rxd=1024 -20. Rerun steps 11-14. +8. Rerun steps 4-6. 
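The 960-byte figure checked in step 6 is simply the sum of the segment sizes given to ``set txpkts`` in step 5. A minimal shell sketch (the variable names are illustrative, not part of the test plan or of any DPDK tool) shows how the expected chained-packet length can be derived before inspecting the pcap::

```shell
# Derive the expected captured-packet length from testpmd's "set txpkts"
# segment list. With txpkts 64,128,256,512 each chained packet should be
# 64+128+256+512 = 960 bytes long in the pdump capture.
txpkts="64,128,256,512"
expected=0
for seg in $(echo "$txpkts" | tr ',' ' '); do
    expected=$((expected + seg))
done
echo "expected packet length: ${expected} bytes"
```

The same arithmetic gives the 6192-byte expectation for the mergeable-path cases that use ``set txpkts 64,64,64,2000,2000,2000``.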
-Test Case 8: Loopback packed ring all path server mode and multi-queues payload check with dsa kernel driver -------------------------------------------------------------------------------------------------------------- +Test Case 21: Loopback packed ring inorder mergeable path multi-queues payload check with server mode and dsa kernel driver +--------------------------------------------------------------------------------------------------------------------------- This case tests that the payload is valid after forwarding large chain packets in loopback vhost-user/virtio-user packed ring -all path multi-queues with server mode when vhost uses the asynchronous enqueue and dequeue operations with dsa kernel driver. +inorder mergeable path multi-queues with server mode when vhost uses the asynchronous operations with dsa kernel driver. -1. bind 8 dsa port to idxd like common step 2:: +1. Bind 2 DSA devices to idxd like common step 2:: ls /dev/dsa #check wq configure, reset if exist ./usertools/dpdk-devbind.py -u 6a:01.0 6f:01.0 ./usertools/dpdk-devbind.py -b idxd 6a:01.0 6f:01.0 - # ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 4 0 - # ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 4 1 + # ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 0 + # ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 1 ls /dev/dsa #check wq configure success -2. Launch vhost:: +2. Launch vhost-user:: # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 10-14 -n 4 --no-pci \ - --vdev 'eth_vhost0,iface=vhost-net0,queues=8,client=1,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;rxq2;rxq3;rxq4;rxq5;rxq6;rxq7]' \ - --iova=va -- -i --nb-cores=4 --rxq=8 --txq=8 --txd=1024 --rxd=1024 \ - --lcore-dma=[lcore11@wq0.0,lcore12@wq0.1,lcore13@wq0.2,lcore14@wq0.3] + --vdev 'eth_vhost0,iface=vhost-net0,queues=8,client=1,dmas=[txq0@wq0.0;txq1@wq0.0;txq2@wq0.0;txq3@wq0.0;txq4@wq0.1;txq5@wq0.1;rxq2@wq1.0;rxq3@wq1.0;rxq4@wq1.1;rxq5@wq1.1;rxq6@wq1.1;rxq7@wq1.1]' \ + --iova=va -- -i --nb-cores=4 --rxq=8 --txq=8 --txd=1024 --rxd=1024 -3. 
Launch virtio-user with packed ring mergeable inorder path:: +3. Launch virtio-user with packed ring inorder mergeable path:: # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 2-6 -n 4 --file-prefix=virtio-user0 --no-pci \ --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=./vhost-net0,queues=8,mrg_rxbuf=1,in_order=1,packed_vq=1,server=1 \ -- -i --nb-cores=4 --rxq=8 --txq=8 --txd=1024 --rxd=1024 - testpmd>set fwd csum - testpmd>start + testpmd> set fwd csum + testpmd> start 4. Attach pdump secondary process to primary process by same file-prefix:: @@ -629,9 +991,9 @@ all path multi-queues with server mode when vhost uses the asynchronous enqueue --pdump 'device_id=net_virtio_user0,queue=0,rx-dev=./pdump-virtio-rx-0.pcap,mbuf-size=8000' \ --pdump 'device_id=net_virtio_user0,queue=3,rx-dev=./pdump-virtio-rx-3.pcap,mbuf-size=8000' -5. Send large pkts from vhost, check loopback performance can get expected and each queue can receive packets:: +5. Send large packets from vhost, check that the loopback performance meets expectations and that each queue can receive packets:: - testpmd> set fwd csum + testpmd> set fwd mac testpmd> set txpkts 64,64,64,2000,2000,2000 testpmd> set burst 1 testpmd> start tx_first 1 @@ -640,109 +1002,305 @@ all path multi-queues with server mode when vhost uses the asynchronous enqueue 6. Quit pdump, check all the packets length are 6192 Byte in the pcap file and the payload in receive packets are same. -7. Quit and relaunch vhost and rerun step 4-6. +7. Quit and relaunch vhost with diff channel:: + + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 10-14 -n 4 --no-pci \ + --vdev 'eth_vhost0,iface=vhost-net0,queues=8,client=1,dmas=[txq0@wq0.0;txq1@wq0.1;txq2@wq0.2;txq3@wq0.3;txq4@wq0.4;txq5@wq0.5;rxq2@wq1.2;rxq3@wq1.3;rxq4@wq1.4;rxq5@wq1.5;rxq6@wq1.6;rxq7@wq1.7]' \ + --iova=va -- -i --nb-cores=4 --rxq=8 --txq=8 --txd=1024 --rxd=1024 + +8. Rerun steps 4-6. 
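The ``dmas`` argument used when relaunching vhost packs the whole virtqueue-to-work-queue mapping into one string. Purely as an illustration (this is not a DPDK tool, just string handling), it can be expanded to one binding per line to eyeball that txq0-txq5 and rxq2-rxq7 each received a DSA channel before launching testpmd::

```shell
# Expand the vhost "dmas" bindings into one line per virtqueue so the
# queue-to-WQ mapping can be reviewed at a glance. Illustrative only.
dmas="txq0@wq0.0;txq1@wq0.1;txq2@wq0.2;txq3@wq0.3;txq4@wq0.4;txq5@wq0.5;rxq2@wq1.2;rxq3@wq1.3;rxq4@wq1.4;rxq5@wq1.5;rxq6@wq1.6;rxq7@wq1.7"
echo "$dmas" | tr ';' '\n' | awk -F'@' '{printf "%-6s -> %s\n", $1, $2}'
```

Note that in this layout the tx queues draw from device 0's work queues and the rx queues from device 1's, matching the two devices bound to idxd in step 1.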
+ +Test Case 22: Loopback packed ring mergeable path multi-queues payload check with server mode and dsa kernel driver +------------------------------------------------------------------------------------------------------------------- +This case tests that the payload is valid after forwarding large chain packets in loopback vhost-user/virtio-user packed ring +mergeable path multi-queues with server mode when vhost uses the asynchronous operations with dsa kernel driver. + +1. Bind 2 DSA devices to idxd like common step 2:: + + ls /dev/dsa #check wq configure, reset if exist + ./usertools/dpdk-devbind.py -u 6a:01.0 6f:01.0 + ./usertools/dpdk-devbind.py -b idxd 6a:01.0 6f:01.0 + # ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 0 + # ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 1 + ls /dev/dsa #check wq configure success -8. Quit and relaunch virtio with packed ring mergeable path as below:: +2. Launch vhost-user:: + + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 10-14 -n 4 --no-pci \ + --vdev 'eth_vhost0,iface=vhost-net0,queues=8,client=1,dmas=[txq0@wq0.0;txq1@wq0.0;txq2@wq0.0;txq3@wq0.0;txq4@wq0.1;txq5@wq0.1;rxq2@wq1.0;rxq3@wq1.0;rxq4@wq1.1;rxq5@wq1.1;rxq6@wq1.1;rxq7@wq1.1]' \ + --iova=va -- -i --nb-cores=4 --rxq=8 --txq=8 --txd=1024 --rxd=1024 + +3. Launch virtio-user with packed ring mergeable path:: # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 2-6 -n 4 --file-prefix=virtio-user0 --no-pci \ --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=./vhost-net0,queues=8,mrg_rxbuf=1,in_order=0,packed_vq=1,server=1 \ -- -i --nb-cores=4 --rxq=8 --txq=8 --txd=1024 --rxd=1024 - testpmd>set fwd csum - testpmd>start + testpmd> set fwd csum + testpmd> start + +4. 
Attach the pdump secondary process to the primary process with the same file-prefix:: + + # ./x86_64-native-linuxapp-gcc/app/dpdk-pdump -v --file-prefix=virtio-user0 -- \ + --pdump 'device_id=net_virtio_user0,queue=0,rx-dev=./pdump-virtio-rx-0.pcap,mbuf-size=8000' \ + --pdump 'device_id=net_virtio_user0,queue=3,rx-dev=./pdump-virtio-rx-3.pcap,mbuf-size=8000' + +5. Send large packets from vhost, check that the loopback performance meets expectations and that each queue can receive packets:: + + testpmd> set fwd mac + testpmd> set txpkts 64,64,64,2000,2000,2000 + testpmd> set burst 1 + testpmd> start tx_first 1 + testpmd> show port stats all + testpmd> stop + +6. Quit pdump, check that all packets in the pcap file are 6192 bytes long and that the payloads of the received packets are the same. + +7. Quit and relaunch vhost with diff channel:: + + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 10-14 -n 4 --no-pci \ + --vdev 'eth_vhost0,iface=vhost-net0,queues=8,client=1,dmas=[txq0@wq0.0;txq1@wq0.1;txq2@wq0.2;txq3@wq0.3;txq4@wq0.4;txq5@wq0.5;rxq2@wq1.2;rxq3@wq1.3;rxq4@wq1.4;rxq5@wq1.5;rxq6@wq1.6;rxq7@wq1.7]' \ + --iova=va -- -i --nb-cores=4 --rxq=8 --txq=8 --txd=1024 --rxd=1024 + +8. Rerun steps 4-6. + +Test Case 23: Loopback packed ring non-mergeable path multi-queues payload check with server mode and dsa kernel driver +----------------------------------------------------------------------------------------------------------------------- +This case tests that the payload is valid after forwarding large chain packets in loopback vhost-user/virtio-user packed ring +non-mergeable path multi-queues with server mode when vhost uses the asynchronous operations with dsa kernel driver. + +1. Bind 2 DSA devices to idxd like common step 2:: -9. Rerun steps 4-7. 
+ ls /dev/dsa #check wq configure, reset if exist + ./usertools/dpdk-devbind.py -u 6a:01.0 6f:01.0 + ./usertools/dpdk-devbind.py -b idxd 6a:01.0 6f:01.0 + # ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 0 + # ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 1 + ls /dev/dsa #check wq configure success -10. Quit and relaunch virtio with packed ring non-mergeable path as below:: +2. Launch vhost-user:: + + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 10-14 -n 4 --no-pci \ + --vdev 'eth_vhost0,iface=vhost-net0,queues=8,client=1,dmas=[txq0@wq0.0;txq1@wq0.0;txq2@wq0.0;txq3@wq0.0;txq4@wq0.1;txq5@wq0.1;rxq2@wq1.0;rxq3@wq1.0;rxq4@wq1.1;rxq5@wq1.1;rxq6@wq1.1;rxq7@wq1.1]' \ + --iova=va -- -i --nb-cores=4 --rxq=8 --txq=8 --txd=1024 --rxd=1024 + +3. Launch virtio-user with packed ring non-mergeable path:: # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 2-6 -n 4 --file-prefix=virtio-user0 --no-pci \ --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=./vhost-net0,queues=8,mrg_rxbuf=0,in_order=0,packed_vq=1,server=1 \ -- -i --nb-cores=4 --rxq=8 --txq=8 --txd=1024 --rxd=1024 - testpmd>set fwd csum - testpmd>start + testpmd> set fwd csum + testpmd> start -11. Rerun step 4. +4. Attach the pdump secondary process to the primary process with the same file-prefix:: + + # ./x86_64-native-linuxapp-gcc/app/dpdk-pdump -v --file-prefix=virtio-user0 -- \ + --pdump 'device_id=net_virtio_user0,queue=0,rx-dev=./pdump-virtio-rx-0.pcap,mbuf-size=8000' \ + --pdump 'device_id=net_virtio_user0,queue=3,rx-dev=./pdump-virtio-rx-3.pcap,mbuf-size=8000' -12. Send pkts from vhost, check loopback performance can get expected and each queue can receive packets:: +5. Send large packets from vhost, check that the loopback performance meets expectations and that each queue can receive packets:: - testpmd> set fwd csum + testpmd> set fwd mac testpmd> set txpkts 64,128,256,512 testpmd> set burst 1 testpmd> start tx_first 1 testpmd> show port stats all testpmd> stop -13. 
Quit pdump, check all the packets length are 960 Byte in the pcap file and the payload in receive packets are same. +6. Quit pdump, check that all packets in the pcap file are 960 bytes long and that the payloads of the received packets are the same. + +7. Quit and relaunch vhost with diff channel:: + + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 10-14 -n 4 --no-pci \ + --vdev 'eth_vhost0,iface=vhost-net0,queues=8,client=1,dmas=[txq0@wq0.0;txq1@wq0.1;txq2@wq0.2;txq3@wq0.3;txq4@wq0.4;txq5@wq0.5;rxq2@wq1.2;rxq3@wq1.3;rxq4@wq1.4;rxq5@wq1.5;rxq6@wq1.6;rxq7@wq1.7]' \ + --iova=va -- -i --nb-cores=4 --rxq=8 --txq=8 --txd=1024 --rxd=1024 + +8. Rerun steps 4-6. + +Test Case 24: Loopback packed ring inorder non-mergeable path multi-queues payload check with server mode and dsa kernel driver +------------------------------------------------------------------------------------------------------------------------------- +This case tests that the payload is valid after forwarding large chain packets in loopback vhost-user/virtio-user packed ring +inorder non-mergeable path multi-queues with server mode when vhost uses the asynchronous operations with dsa kernel driver. + +1. Bind 2 DSA devices to idxd like common step 2:: + + ls /dev/dsa #check wq configure, reset if exist + ./usertools/dpdk-devbind.py -u 6a:01.0 6f:01.0 + ./usertools/dpdk-devbind.py -b idxd 6a:01.0 6f:01.0 + # ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 0 + # ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 1 + ls /dev/dsa #check wq configure success + +2. Launch vhost-user:: -14. Quit and relaunch vhost and rerun step 11-13. + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 10-14 -n 4 --no-pci \ + --vdev 'eth_vhost0,iface=vhost-net0,queues=8,client=1,dmas=[txq0@wq0.0;txq1@wq0.0;txq2@wq0.0;txq3@wq0.0;txq4@wq0.1;txq5@wq0.1;rxq2@wq1.0;rxq3@wq1.0;rxq4@wq1.1;rxq5@wq1.1;rxq6@wq1.1;rxq7@wq1.1]' \ + --iova=va -- -i --nb-cores=4 --rxq=8 --txq=8 --txd=1024 --rxd=1024 -15. 
Quit and relaunch virtio with packed ring inorder non-mergeable path as below:: +3. Launch virtio-user with packed ring inorder non-mergeable path:: # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 2-6 -n 4 --file-prefix=virtio-user0 --no-pci \ --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=./vhost-net0,queues=8,mrg_rxbuf=0,in_order=1,packed_vq=1,server=1 \ -- -i --nb-cores=4 --rxq=8 --txq=8 --txd=1024 --rxd=1024 - testpmd>set fwd csum - testpmd>start + testpmd> set fwd csum + testpmd> start + +4. Attach the pdump secondary process to the primary process with the same file-prefix:: + + # ./x86_64-native-linuxapp-gcc/app/dpdk-pdump -v --file-prefix=virtio-user0 -- \ + --pdump 'device_id=net_virtio_user0,queue=0,rx-dev=./pdump-virtio-rx-0.pcap,mbuf-size=8000' \ + --pdump 'device_id=net_virtio_user0,queue=3,rx-dev=./pdump-virtio-rx-3.pcap,mbuf-size=8000' + +5. Send large packets from vhost, check that the loopback performance meets expectations and that each queue can receive packets:: + + testpmd> set fwd mac + testpmd> set txpkts 64,128,256,512 + testpmd> set burst 1 + testpmd> start tx_first 1 + testpmd> show port stats all + testpmd> stop + +6. Quit pdump, check that all packets in the pcap file are 960 bytes long and that the payloads of the received packets are the same. + +7. Quit and relaunch vhost with diff channel:: + + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 10-14 -n 4 --no-pci \ + --vdev 'eth_vhost0,iface=vhost-net0,queues=8,client=1,dmas=[txq0@wq0.0;txq1@wq0.1;txq2@wq0.2;txq3@wq0.3;txq4@wq0.4;txq5@wq0.5;rxq2@wq1.2;rxq3@wq1.3;rxq4@wq1.4;rxq5@wq1.5;rxq6@wq1.6;rxq7@wq1.7]' \ + --iova=va -- -i --nb-cores=4 --rxq=8 --txq=8 --txd=1024 --rxd=1024 + +8. Rerun steps 4-6. 
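The ``ls /dev/dsa`` check in step 1 can be made explicit. A hedged sketch, assuming the conventional idxd character-device naming and the two-device, eight-queue configuration used by the ``dpdk_idxd_cfg.py`` invocations above (paths may differ on other kernels)::

```shell
# After "dpdk_idxd_cfg.py -q 8 0" and "-q 8 1", work queues wq0.0-wq0.7 and
# wq1.0-wq1.7 are expected under /dev/dsa. Report each expected node and
# count the ones that are absent. Device numbering is assumed, not queried.
missing=0
for dev in 0 1; do
    for q in 0 1 2 3 4 5 6 7; do
        wq="/dev/dsa/wq${dev}.${q}"
        if [ -e "$wq" ]; then
            echo "present: $wq"
        else
            echo "missing: $wq"
            missing=$((missing + 1))
        fi
    done
done
echo "missing work queues: $missing"
```

A non-zero count before launching vhost usually means the reset-and-reconfigure part of step 1 still needs to be done.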
+ +Test Case 25: Loopback packed ring vectorized path multi-queues payload check with server mode and dsa kernel driver +-------------------------------------------------------------------------------------------------------------------- +This case tests that the payload is valid after forwarding large chain packets in loopback vhost-user/virtio-user packed ring +vectorized path multi-queues with server mode when vhost uses the asynchronous operations with dsa kernel driver. + +1. Bind 2 DSA devices to idxd like common step 2:: -16. Rerun step 11-14. + ls /dev/dsa #check wq configure, reset if exist + ./usertools/dpdk-devbind.py -u 6a:01.0 6f:01.0 + ./usertools/dpdk-devbind.py -b idxd 6a:01.0 6f:01.0 + # ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 0 + # ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 1 + ls /dev/dsa #check wq configure success + +2. Launch vhost-user:: + + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 10-14 -n 4 --no-pci \ + --vdev 'eth_vhost0,iface=vhost-net0,queues=8,client=1,dmas=[txq0@wq0.0;txq1@wq0.0;txq2@wq0.0;txq3@wq0.0;txq4@wq0.1;txq5@wq0.1;rxq2@wq1.0;rxq3@wq1.0;rxq4@wq1.1;rxq5@wq1.1;rxq6@wq1.1;rxq7@wq1.1]' \ + --iova=va -- -i --nb-cores=4 --rxq=8 --txq=8 --txd=1024 --rxd=1024 -17. Quit and relaunch virtio with packed ring vectorized path as below:: +3. Launch virtio-user with packed ring vectorized path:: # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 2-6 -n 4 --file-prefix=virtio-user0 --no-pci --force-max-simd-bitwidth=512 \ --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=./vhost-net0,queues=8,mrg_rxbuf=0,in_order=1,packed_vq=1,vectorized=1,server=1 \ -- -i --nb-cores=4 --rxq=8 --txq=8 --txd=1024 --rxd=1024 - testpmd>set fwd csum - testpmd>start + testpmd> set fwd csum + testpmd> start + +4. Attach the pdump secondary process to the primary process with the same file-prefix:: -18. Rerun step 11-14. 
+ # ./x86_64-native-linuxapp-gcc/app/dpdk-pdump -v --file-prefix=virtio-user0 -- \ + --pdump 'device_id=net_virtio_user0,queue=0,rx-dev=./pdump-virtio-rx-0.pcap,mbuf-size=8000' \ + --pdump 'device_id=net_virtio_user0,queue=3,rx-dev=./pdump-virtio-rx-3.pcap,mbuf-size=8000' -19. Quit and relaunch virtio with packed ring vectorized path and ring size is not power of 2 as below:: +5. Send large packets from vhost, check that the loopback performance meets expectations and that each queue can receive packets:: + + testpmd> set fwd mac + testpmd> set txpkts 64,128,256,512 + testpmd> set burst 1 + testpmd> start tx_first 1 + testpmd> show port stats all + testpmd> stop + +6. Quit pdump, check that all packets in the pcap file are 960 bytes long and that the payloads of the received packets are the same. + +7. Quit and relaunch vhost with diff channel:: + + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 10-14 -n 4 --no-pci \ + --vdev 'eth_vhost0,iface=vhost-net0,queues=8,client=1,dmas=[txq0@wq0.0;txq1@wq0.1;txq2@wq0.2;txq3@wq0.3;txq4@wq0.4;txq5@wq0.5;rxq2@wq1.2;rxq3@wq1.3;rxq4@wq1.4;rxq5@wq1.5;rxq6@wq1.6;rxq7@wq1.7]' \ + --iova=va -- -i --nb-cores=4 --rxq=8 --txq=8 --txd=1024 --rxd=1024 + +8. Rerun steps 4-6. + +Test Case 26: Loopback packed ring vectorized path and ring size is not power of 2 multi-queues payload check with server mode and dsa kernel driver +---------------------------------------------------------------------------------------------------------------------------------------------------- +This case tests that the payload is valid after forwarding large chain packets in loopback vhost-user/virtio-user packed ring +vectorized path multi-queues with server mode and ring size is not power of 2 when vhost uses the asynchronous operations with dsa kernel driver. + +1. 
Bind 2 DSA devices to idxd like common step 2:: + + ls /dev/dsa #check wq configure, reset if exist + ./usertools/dpdk-devbind.py -u 6a:01.0 6f:01.0 + ./usertools/dpdk-devbind.py -b idxd 6a:01.0 6f:01.0 + # ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 0 + # ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 1 + ls /dev/dsa #check wq configure success + +2. Launch vhost-user:: + + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 10-14 -n 4 --no-pci \ + --vdev 'eth_vhost0,iface=vhost-net0,queues=8,client=1,dmas=[txq0@wq0.0;txq1@wq0.0;txq2@wq0.0;txq3@wq0.0;txq4@wq0.1;txq5@wq0.1;rxq2@wq1.0;rxq3@wq1.0;rxq4@wq1.1;rxq5@wq1.1;rxq6@wq1.1;rxq7@wq1.1]' \ + --iova=va -- -i --nb-cores=4 --rxq=8 --txq=8 --txd=1024 --rxd=1024 + +3. Launch virtio-user with packed ring vectorized path and ring size is not power of 2:: # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 2-6 -n 4 --file-prefix=virtio-user0 --no-pci --force-max-simd-bitwidth=512 \ --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=./vhost-net0,queues=8,mrg_rxbuf=0,in_order=1,packed_vq=1,vectorized=1,queue_size=1025,server=1 \ - -- -i --nb-cores=2 --rxq=8 --txq=8 --txd=1025 --rxd=1025 - testpmd>set fwd csum - testpmd>start + -- -i --nb-cores=4 --rxq=8 --txq=8 --txd=1025 --rxd=1025 + testpmd> set fwd csum + testpmd> start + +4. Attach the pdump secondary process to the primary process with the same file-prefix:: -20. Rerun step 11-14. + # ./x86_64-native-linuxapp-gcc/app/dpdk-pdump -v --file-prefix=virtio-user0 -- \ + --pdump 'device_id=net_virtio_user0,queue=0,rx-dev=./pdump-virtio-rx-0.pcap,mbuf-size=8000' \ + --pdump 'device_id=net_virtio_user0,queue=3,rx-dev=./pdump-virtio-rx-3.pcap,mbuf-size=8000' -21. Quit and relaunch vhost with diff channel:: +5. Send large packets from vhost, check that the loopback performance meets expectations and that each queue can receive packets:: + + testpmd> set fwd mac + testpmd> set txpkts 64,128,256,512 + testpmd> set burst 1 + testpmd> start tx_first 1 + testpmd> show port stats all + testpmd> stop + +6. 
Quit pdump, check that all packets in the pcap file are 960 bytes long and that the payloads of the received packets are the same. + +7. Quit and relaunch vhost with diff channel:: # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 10-14 -n 4 --no-pci \ - --vdev 'eth_vhost0,iface=vhost-net0,queues=8,client=1,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7;rxq0;rxq1;rxq2;rxq3;rxq4;rxq5;rxq6;rxq7]' \ - --iova=va -- -i --nb-cores=4 --rxq=8 --txq=8 --txd=1024 --rxd=1024 \ - --lcore-dma=[lcore11@wq0.0,lcore11@wq1.0,lcore12@wq0.1,lcore12@wq1.1,lcore13@wq0.2,lcore13@wq1.2,lcore14@wq0.3,lcore14@wq1.3] + --vdev 'eth_vhost0,iface=vhost-net0,queues=8,client=1,dmas=[txq0@wq0.0;txq1@wq0.1;txq2@wq0.2;txq3@wq0.3;txq4@wq0.4;txq5@wq0.5;rxq2@wq1.2;rxq3@wq1.3;rxq4@wq1.4;rxq5@wq1.5;rxq6@wq1.6;rxq7@wq1.7]' \ + --iova=va -- -i --nb-cores=4 --rxq=8 --txq=8 --txd=1024 --rxd=1024 -22. Rerun steps 3-6. +8. Rerun steps 4-6. -Test Case 9: Loopback split and packed ring server mode multi-queues and mergeable path payload check with dsa dpdk and kernel driver --------------------------------------------------------------------------------------------------------------------------------------- -This case tests the payload is valid after forwading large chain packets in loopback vhost-user/virtio-user split and packed ring -multi-queues with server mode when vhost uses the asynchronous enqueue and dequeue operations with dsa kernel driver. +Test Case 27: PV split and packed ring server mode test txonly mode with dsa dpdk and kernel driver +--------------------------------------------------------------------------------------------------- -1. bind 2 dsa device to idxd and 2 dsa device to vfio-pci like common step 1-2:: +1. 
Bind 2 DSA devices to idxd and 2 DSA devices to vfio-pci like common step 1-2:: ls /dev/dsa #check wq configure, reset if exist ./usertools/dpdk-devbind.py -u 6a:01.0 6f:01.0 f1:01.0 f6:01.0 - ./usertools/dpdk-devbind.py -b idxd 6a:01.0 6f:01.0 f1:01.0 f6:01.0 + ./usertools/dpdk-devbind.py -b idxd 6a:01.0 6f:01.0 ./usertools/dpdk-devbind.py -b vfio-pci f1:01.0 f6:01.0 # ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 0 # ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 1 ls /dev/dsa #check wq configure success -2. Launch vhost:: +2. Launch vhost-user:: - # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 2-3 -n 4 --file-prefix=vhost -a 0000:f1:01.0,max_queues=4 -a 0000:f6:01.0,max_queues=4 \ - --vdev 'eth_vhost0,iface=vhost-net0,queues=8,client=1,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7;rxq0;rxq1;rxq2;rxq3;rxq4;rxq5;rxq6;rxq7]' \ - --iova=va -- -i --nb-cores=1 --rxq=8 --txq=8 --txd=1024 --rxd=1024 \ - --lcore-dma=[lcore3@wq0.0,lcore3@wq0.1,lcore3@wq1.0,lcore3@wq1.1,lcore3@0000:f1:01.0-q0,lcore3@0000:f1:01.0-q2,lcore3@0000:f6:01.0-q3] + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 2-6 -n 4 --file-prefix=vhost -a 0000:f1:01.0,max_queues=2 \ + --vdev 'eth_vhost0,iface=vhost-net0,queues=8,client=1,dmas=[txq0@wq0.0;txq1@wq0.0;txq2@wq0.0;txq3@wq0.0;txq4@0000:f1:01.0-q0;txq5@0000:f1:01.0-q0;rxq2@0000:f1:01.0-q1;rxq3@0000:f1:01.0-q1;rxq4@wq1.1;rxq5@wq1.1;rxq6@wq1.1;rxq7@wq1.1]' \ + --iova=va -- -i --nb-cores=4 --rxq=8 --txq=8 --txd=1024 --rxd=1024 3. 
Launch virtio-user with split ring mergeable inorder path:: - # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 4-5 -n 4 --file-prefix=virtio-user --no-pci \ + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 10-14 -n 4 --file-prefix=virtio-user --no-pci \ --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=./vhost-net0,queues=8,mrg_rxbuf=1,in_order=1,server=1 \ - -- -i --nb-cores=1 --rxq=8 --txq=8 --txd=1024 --rxd=1024 - testpmd>set fwd csum - testpmd>start + -- -i --nb-cores=4 --rxq=8 --txq=8 --txd=1024 --rxd=1024 + testpmd> set fwd rxonly + testpmd> start 4. Attach pdump secondary process to primary process by same file-prefix:: @@ -750,45 +1308,25 @@ multi-queues with server mode when vhost uses the asynchronous enqueue and deque --pdump 'device_id=net_virtio_user0,queue=0,rx-dev=/tmp/pdump-virtio-rx-0.pcap,mbuf-size=8000' --pdump 'device_id=net_virtio_user0,queue=3,rx-dev=./pdump-virtio-rx-3.pcap,mbuf-size=8000' -5. Send large pkts from vhost, check loopback performance can get expected and each queue can receive packets:: +5. Send large packets from vhost:: - testpmd>set fwd csum - testpmd>set txpkts 64,64,64,2000,2000,2000 - testpmd>set burst 1 - testpmd>start tx_first 1 - testpmd>show port stats all - testpmd>stop + testpmd> set fwd txonly + testpmd> async_vhost tx poll completed on + testpmd> set txpkts 64,64,64,2000,2000,2000 + testpmd> set burst 1 + testpmd> start tx_first 1 + testpmd> stop -6. Quit pdump and chcek all the packets length is 6192 and the payload of all packets are same in the pcap file. +6. Quit pdump, check that all packets in the pcap file are 6192 bytes long. 7. Quit and relaunch vhost and rerun step 4-6. -8. 
Quit and relaunch virtio with split ring mergeable path as below:: - - # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 4-5 -n 4 --file-prefix=virtio-user --no-pci \ - --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=./vhost-net0,queues=8,mrg_rxbuf=1,in_order=0,server=1 \ - -- -i --nb-cores=1 --rxq=8 --txq=8 --txd=1024 --rxd=1024 - testpmd>set fwd csum - testpmd>start - -9. Stop vhost and rerun step 4-7. - -10. Quit and relaunch virtio with packed ring mergeable inorder path as below:: +8. Quit and relaunch virtio with packed ring mergeable inorder path as below:: # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 4-5 -n 4 --file-prefix=virtio-user --no-pci \ --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=./vhost-net0,queues=8,mrg_rxbuf=1,in_order=1,packed_vq=1,server=1 \ -- -i --nb-cores=1 --rxq=8 --txq=8 --txd=1024 --rxd=1024 - testpmd>set fwd csum - testpmd>start - -11. Stop vhost and rerun step 4-7. - -12. Quit and relaunch virtio with packed ring mergeable path as below:: - - # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 4-5 -n 4 --file-prefix=virtio-user --no-pci \ - --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=./vhost-net0,queues=8,mrg_rxbuf=1,in_order=0,packed_vq=1,server=1 \ - -- -i --nb-cores=1 --rxq=8 --txq=8 --txd=1024 --rxd=1024 - testpmd>set fwd csum - testpmd>start + testpmd> set fwd csum + testpmd> start -13. Stop vhost and rerun step 4-7. +9. Stop vhost and rerun step 4-7. -- 2.25.1