From mboxrd@z Thu Jan  1 00:00:00 1970
Return-Path: 
Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124])
	by inbox.dpdk.org (Postfix) with ESMTP id 76A7AA034F;
	Thu,  1 Apr 2021 09:28:25 +0200 (CEST)
Received: from [217.70.189.124] (localhost [127.0.0.1])
	by mails.dpdk.org (Postfix) with ESMTP id 44A1740142;
	Thu,  1 Apr 2021 09:28:25 +0200 (CEST)
Received: from mga06.intel.com (mga06.intel.com [134.134.136.31])
	by mails.dpdk.org (Postfix) with ESMTP id A09A74013F
	for ; Thu,  1 Apr 2021 09:28:22 +0200 (CEST)
IronPort-SDR: ibfuPNQzM9mSylk1AamqcfY8tYCrn80Do+dJlAAZFMiZjr1oiH5A1apn1c3g8fg/nw+QiVkh+C
 pqbCMMSmtr0Q==
X-IronPort-AV: E=McAfee;i="6000,8403,9940"; a="253510209"
X-IronPort-AV: E=Sophos;i="5.81,296,1610438400"; d="scan'208";a="253510209"
Received: from orsmga003.jf.intel.com ([10.7.209.27])
	by orsmga104.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384;
	01 Apr 2021 00:28:19 -0700
IronPort-SDR: XCBWNSh2yytvlQnTTWSCNR9LKU0WBBHW3hvPaYpk/ycHPnElZnPUL4S+Gug32Z1EUT2khS7HlA
 Zk7oMUIx/gaA==
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="5.81,296,1610438400"; d="scan'208";a="377594660"
Received: from dpdk-yinan-ntb1.sh.intel.com ([10.67.119.39])
	by orsmga003.jf.intel.com with ESMTP; 01 Apr 2021 00:28:17 -0700
From: Yinan Wang 
To: dts@dpdk.org
Cc: Yinan Wang 
Date: Thu,  1 Apr 2021 12:10:41 -0400
Message-Id: <20210401161041.676973-1-yinan.wang@intel.com>
X-Mailer: git-send-email 2.25.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Subject: [dts] [PATCH] test_plans/vhost_cbdma_test_plan.rst
X-BeenThere: dts@dpdk.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: test suite reviews and discussions
List-Unsubscribe: ,
List-Archive: 
List-Post: 
List-Help: 
List-Subscribe: ,
Errors-To: dts-bounces@dpdk.org
Sender: "dts" 

Add two packed ring cbdma test cases.

Signed-off-by: Yinan Wang 
---
 test_plans/vhost_cbdma_test_plan.rst | 138 ++++++++++++++++++++++++++-
 1 file changed, 135 insertions(+), 3 deletions(-)

diff --git a/test_plans/vhost_cbdma_test_plan.rst b/test_plans/vhost_cbdma_test_plan.rst
index a1ceda74..e5d8f41b 100644
--- a/test_plans/vhost_cbdma_test_plan.rst
+++ b/test_plans/vhost_cbdma_test_plan.rst
@@ -126,10 +126,10 @@ TG --> NIC --> Vhost --> Virtio--> Vhost --> NIC --> TG
 >set fwd mac
 >start
 
-Test Case2: Dynamic queue number test for DMA-accelerated vhost Tx operations
-=============================================================================
+Test Case2: Split ring dynamic queue number test for DMA-accelerated vhost Tx operations
+========================================================================================
 
-1. Bind four cbdma port and one nic port to igb_uio, then launch vhost by below command::
+1. Bind four cbdma channels and one nic port to igb_uio, then launch vhost by below command::
 
  ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 28-29 \
  --file-prefix=vhost --vdev 'net_vhost0,iface=/tmp/s0,queues=2,client=1,dmas=[txq0@80:04.5;txq1@80:04.6],dmathr=1024' \
 -- -i --nb-cores=1 --txd=1024 --rxd=1024 --txq=2 --rxq=2
@@ -222,4 +222,136 @@ Test Case3: CBDMA threshold value check
 dma parameters: vid1,qid0,dma*,threshold:4096
 dma parameters: vid1,qid2,dma*,threshold:4096
 
+Test Case4: PVP packed ring all path with DMA-accelerated vhost enqueue
+=======================================================================
+Packet pipeline:
+================
+TG --> NIC --> Vhost --> Virtio --> Vhost --> NIC --> TG
+
+1. Bind one cbdma channel and one nic port to igb_uio (see the binding sketch below), then launch vhost by below command::
+
+ ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-3 --file-prefix=vhost --vdev 'net_vhost0,iface=/tmp/s0,queues=1,dmas=[txq0@80:04.0],dmathr=1024' \
+ -- -i --nb-cores=1 --txd=1024 --rxd=1024
+ >set fwd mac
+ >start
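+
+For reference, a minimal binding sketch for this step: 80:04.0 is the cbdma channel named in the dmas list above, while 18:00.0 stands in for the nic port and should be replaced with the DUT's own BDF::
+
+ # Sketch: bind the cbdma channel and the nic port to igb_uio.
+ # 18:00.0 is a placeholder nic address, not part of the original plan.
+ ./usertools/dpdk-devbind.py -b igb_uio 80:04.0 18:00.0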
+
+2. Launch virtio-user with inorder mergeable path::
+
+ ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
+ --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=1,in_order=1,queues=1,packed_vq=1 \
+ -- -i --tx-offloads=0x0 --enable-hw-vlan-strip --nb-cores=1 --txd=1024 --rxd=1024
+ >set fwd mac
+ >start
+
+3. Send imix packets [64,1518] from the packet generator, check that the throughput reaches the expected value, then restart the vhost port and check the throughput again::
+
+ testpmd>show port stats all
+ testpmd>stop
+ testpmd>start
+ testpmd>show port stats all
+
+4. Relaunch virtio-user with mergeable path, then repeat step 3::
+
+ ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
+ --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=1,in_order=0,queues=1,packed_vq=1 \
+ -- -i --tx-offloads=0x0 --enable-hw-vlan-strip --nb-cores=1 --txd=1024 --rxd=1024
+ >set fwd mac
+ >start
+
+5. Relaunch virtio-user with inorder non-mergeable path, then repeat step 3::
+
+ ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
+ --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=0,in_order=1,queues=1,packed_vq=1 \
+ -- -i --tx-offloads=0x0 --enable-hw-vlan-strip --nb-cores=1 --txd=1024 --rxd=1024
+ >set fwd mac
+ >start
+
+6. Relaunch virtio-user with non-mergeable path, then repeat step 3::
+
+ ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
+ --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=0,in_order=0,queues=1,packed_vq=1 \
+ -- -i --tx-offloads=0x0 --enable-hw-vlan-strip --nb-cores=1 --txd=1024 --rxd=1024
+ >set fwd mac
+ >start
+
+7. Relaunch virtio-user with vectorized path, then repeat step 3::
+
+ ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 \
+ --no-pci --file-prefix=virtio \
+ --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=0,in_order=1,vectorized=1,queues=1,packed_vq=1 \
+ -- -i --nb-cores=1 --txd=1024 --rxd=1024
+ >set fwd mac
+ >start
+
+8. Relaunch virtio-user with vector_rx path, then repeat step 3::
+
+ ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 \
+ --no-pci --file-prefix=virtio \
+ --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=0,in_order=1,vectorized=1,queues=1,packed_vq=1 \
+ -- -i --enable-hw-vlan-strip --nb-cores=1 --txd=1024 --rxd=1024
+ >set fwd mac
+ >start
+
+Test Case5: Packed ring dynamic queue number test for DMA-accelerated vhost Tx operations
+=========================================================================================
+
+1. Bind four cbdma channels and one nic port to igb_uio (see the status-check sketch below), then launch vhost by below command::
+
+ ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 28-29 \
+ --file-prefix=vhost --vdev 'net_vhost0,iface=/tmp/s0,queues=2,client=1,dmas=[txq0@80:04.5;txq1@80:04.6],dmathr=1024' \
+ -- -i --nb-cores=1 --txd=1024 --rxd=1024 --txq=2 --rxq=2
+ >set fwd mac
+ >start
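+
+For reference, a quick status check before launching vhost; the BDFs below are the four cbdma channels named in the dmas lists of this case and should be adjusted to the DUT::
+
+ # Sketch: confirm the four cbdma channels are bound to igb_uio.
+ ./usertools/dpdk-devbind.py --status | grep -E "80:04\.[56]|00:04\.[56]"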
+
+2. Launch virtio-user by below command::
+
+ ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 30-31 --no-pci --file-prefix=virtio \
+ --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=1,in_order=1,queues=2,server=1,packed_vq=1 \
+ -- -i --tx-offloads=0x0 --enable-hw-vlan-strip --nb-cores=1 --txd=1024 --rxd=1024 --txq=2 --rxq=2
+ >set fwd mac
+ >start
+
+3. Send imix packets [64,1518] from the packet generator with random IP addresses, check that performance reaches the target.
+
+4. Stop the vhost port, check from the vhost log that both RX and TX directions have packets on both queues.
+
+5. On the virtio-user side, dynamically change the queue number from 2 to 1, then check that single-queue RX/TX works normally::
+
+ testpmd>port stop all
+ testpmd>port config all rxq 1
+ testpmd>port config all txq 1
+ testpmd>port start all
+ testpmd>start
+ testpmd>show port stats all
+
+6. Relaunch virtio-user with vectorized path and 2 queues::
+
+ ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 30-31 --no-pci --file-prefix=virtio \
+ --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=0,in_order=1,vectorized=1,queues=2,server=1,packed_vq=1 \
+ -- -i --nb-cores=1 --txd=1024 --rxd=1024 --txq=2 --rxq=2
+ >set fwd mac
+ >start
+
+7. Send imix packets [64,1518] from the packet generator with random IP addresses, check that performance reaches the target.
+
+8. Stop the vhost port, check from the vhost log that both RX and TX directions have packets on queue0.
+
+9. On the vhost side, dynamically change the queue number from 2 to 1, then check that single-queue RX/TX works normally::
+
+ testpmd>port stop all
+ testpmd>port config all rxq 1
+ testpmd>port config all txq 1
+ testpmd>port start all
+ testpmd>start
+ testpmd>show port stats all
+
+10. Relaunch vhost with the other two cbdma channels and 2 queues, then check that performance reaches the target::
+
+ ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 28-29 \
+ --file-prefix=vhost --vdev 'net_vhost0,iface=/tmp/s0,queues=2,client=1,dmas=[txq0@00:04.5;txq1@00:04.6],dmathr=512' \
+ -- -i --nb-cores=1 --txd=1024 --rxd=1024 --txq=2 --rxq=2
+ >set fwd mac
+ >start
+
+11. Stop the vhost port, check from the vhost log that both RX and TX directions have packets on both queues.
-- 
2.25.1