From: Wei Ling
To: dts@dpdk.org
Cc: Wei Ling
Subject: [dts][PATCH V1 2/3] test_plans/vhost_async_robust_cbdma_test_plan: add new testplan
Date: Thu, 16 Feb 2023 14:17:31 +0800
Message-Id: <20230216061731.1023156-1-weix.ling@intel.com>

Add new testplan for testing Vhost asynchronous data path with CBDMA
driver in the PVP topology environment with testpmd.

Signed-off-by: Wei Ling

---
 .../vhost_async_robust_cbdma_test_plan.rst | 278 ++++++++++++++++++
 1 file changed, 278 insertions(+)
 create mode 100644 test_plans/vhost_async_robust_cbdma_test_plan.rst

diff --git a/test_plans/vhost_async_robust_cbdma_test_plan.rst b/test_plans/vhost_async_robust_cbdma_test_plan.rst
new file mode 100644
index 00000000..e712e6d7
--- /dev/null
+++ b/test_plans/vhost_async_robust_cbdma_test_plan.rst
@@ -0,0 +1,278 @@
.. SPDX-License-Identifier: BSD-3-Clause
   Copyright(c) 2023 Intel Corporation

==================================================
vhost async data-path robust with cbdma test plan
==================================================

Description
===========

This document provides the test plan for testing the vhost asynchronous
data path with the CBDMA driver in the PVP topology environment with testpmd.

CBDMA is a kind of DMA engine. The vhost asynchronous data path leverages DMA devices
to offload memory copies from the CPU, and it is implemented in an asynchronous way.
As a result, large packet copies can be accelerated by the DMA engine, and vhost can
free CPU cycles for higher-level functions.

The asynchronous data path is enabled per tx/rx queue, and users need
to specify the DMA device used by each tx/rx queue. Each tx/rx queue
only supports using one DMA device, but one DMA device can be shared
among multiple tx/rx queues of different vhostpmd ports.

Two PMD parameters are added:

- dmas: specify the DMA device used by a tx/rx queue
  (default: no queue enables the asynchronous data path).
- dma-ring-size: DMA ring size (default: 4096).

Here is an example::

    --vdev 'eth_vhost0,iface=./s0,dmas=[txq0@0000:00:01.0;rxq0@0000:00:01.1],dma-ring-size=4096'
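As a further illustration of the sharing rule above (this is only a sketch; the queue
count, PCI address and ring size are placeholders and are not required by the test
cases below), a single DMA device may serve every tx/rx queue of one port::

    --vdev 'eth_vhost0,iface=./s0,queues=2,dmas=[txq0@0000:00:01.0;txq1@0000:00:01.0;rxq0@0000:00:01.0;rxq1@0000:00:01.0],dma-ring-size=2048'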
Test case
=========

Common steps
------------
1. Bind 1 NIC port and CBDMA devices to vfio-pci::

    # ./usertools/dpdk-devbind.py -b vfio-pci <nic_pci>
    # ./usertools/dpdk-devbind.py -b vfio-pci <dma_pci>

   For example, bind 1 NIC port and 2 CBDMA devices::

    ./usertools/dpdk-devbind.py -b vfio-pci 0000:00:18.0
    ./usertools/dpdk-devbind.py -b vfio-pci 0000:00:04.0 0000:00:04.1

2. Send imix packets [64,1518] to the NIC by the traffic generator (a reference Scapy sketch is shown after these common steps)::

    The TCP imix packets include packet sizes [64, 128, 256, 512, 1024, 1518], and the packet format is as follows.
    +-------------+-------------+-------------+-------------+
    | MAC         | MAC         | IPV4        | IPV4        |
    | Src address | Dst address | Src address | Dst address |
    +-------------+-------------+-------------+-------------+
    | Random MAC  | Virtio mac  | Random IP   | Random IP   |
    +-------------+-------------+-------------+-------------+
    All the packets in this test plan use the Virtio mac: 00:11:22:33:44:10.
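Any traffic generator can produce the imix stream described in common step 2. As a
reference only, a minimal Scapy sketch is shown below; the egress interface name is an
assumption, and only the destination MAC 00:11:22:33:44:10 and the size list come from
this plan::

    # Sketch: build and send the imix stream from common step 2 with Scapy.
    from scapy.all import Ether, IP, TCP, Raw, RandMAC, RandIP, sendp

    VIRTIO_MAC = "00:11:22:33:44:10"           # fixed destination MAC used in this plan
    FRAME_SIZES = [64, 128, 256, 512, 1024, 1518]
    HDR_LEN = 14 + 20 + 20                     # Ether + IPv4 + TCP headers

    pkts = [
        Ether(src=RandMAC(), dst=VIRTIO_MAC)
        / IP(src=RandIP(), dst=RandIP())
        / TCP()
        / Raw(b"x" * (size - HDR_LEN))         # pad to roughly the target frame size
        for size in FRAME_SIZES
    ]
    sendp(pkts, iface="tester_port0", loop=1)  # "tester_port0" is a placeholder; loops until interrupted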
Test Case 1: PVP virtio-user quit test
--------------------------------------
This case is designed to test whether virtio-user can quit normally regardless of whether the back-end stops sending packets.

1. Bind 1 NIC port and 1 CBDMA device to vfio-pci as common step 1.

2. Launch vhost by the below command::

    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 --file-prefix=vhost -a 0000:18:00.1 -a 0000:00:04.0 \
    --vdev 'net_vhost0,iface=./vhost_net0,queues=1,dmas=[txq0@0000:00:04.0;rxq0@0000:00:04.0]' \
    --iova=va -- -i --nb-cores=1 --txq=1 --rxq=1 --txd=1024 --rxd=1024
    testpmd> set fwd mac
    testpmd> start

3. Launch virtio-user with inorder mergeable path::

    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-6 --no-pci --file-prefix=virtio \
    --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=./vhost_net0,mrg_rxbuf=1,in_order=1,queues=1 \
    -- -i --nb-cores=1 --txq=1 --rxq=1 --txd=1024 --rxd=1024
    testpmd> set fwd csum
    testpmd> start

4. Send TCP imix packets [64,1518] from the packet generator as common step 2.

5. Quit virtio-user and relaunch virtio-user as step 3 while sending packets from the packet generator.

6. Stop the vhost port, then quit virtio-user and relaunch virtio-user as step 3 while sending packets from the packet generator.

7. Stop sending packets from the packet generator, then quit virtio-user and vhost.

Test Case 2: PVP vhost-user quit test
-------------------------------------
This case is designed to test whether vhost-user can quit normally regardless of whether the back-end stops sending packets.

1. Bind 1 NIC port and 1 CBDMA device to vfio-pci as common step 1.

2. Launch vhost by the below command::

    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 --file-prefix=vhost -a 0000:18:00.1 -a 0000:00:04.0 \
    --vdev 'net_vhost0,iface=./vhost_net0,queues=1,client=1,dmas=[txq0@0000:00:04.0;rxq0@0000:00:04.0]' \
    --iova=va -- -i --nb-cores=1 --txq=1 --rxq=1 --txd=1024 --rxd=1024
    testpmd> set fwd mac
    testpmd> start

3. Launch virtio-user with inorder mergeable path::

    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-6 --no-pci --file-prefix=virtio \
    --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=./vhost_net0,mrg_rxbuf=1,in_order=1,queues=1,server=1 \
    -- -i --nb-cores=1 --txq=1 --rxq=1 --txd=1024 --rxd=1024
    testpmd> set fwd csum
    testpmd> start

4. Send TCP imix packets [64,1518] from the packet generator as common step 2.

5. Quit vhost-user and relaunch vhost-user as step 2 while sending packets from the packet generator.

6. Stop sending packets from the packet generator, then quit vhost-user and virtio-user.

Test Case 3: PVP vhost async test with redundant device parameters
-------------------------------------------------------------------
This case is designed to test whether vhostpmd can work normally when binding and using redundant device parameters.

1. Bind 1 NIC port and 4 CBDMA devices to vfio-pci as common step 1.

2. Launch vhost by the below command::

    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 --file-prefix=vhost -a 0000:18:00.1 -a 0000:00:04.0 -a 0000:00:04.1 -a 0000:00:04.2 -a 0000:00:04.3 \
    --vdev 'net_vhost0,iface=./vhost_net0,queues=1,client=1,dmas=[txq0@0000:00:04.1;rxq0@0000:00:04.1]' \
    --iova=va -- -i --nb-cores=1 --txq=1 --rxq=1 --txd=1024 --rxd=1024
    testpmd> set fwd mac
    testpmd> start

3. Launch virtio-user with inorder mergeable path::

    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-6 --no-pci --file-prefix=virtio \
    --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=./vhost_net0,mrg_rxbuf=1,in_order=1,queues=1,server=1 \
    -- -i --nb-cores=1 --txq=1 --rxq=1 --txd=1024 --rxd=1024
    testpmd> set fwd csum
    testpmd> start

4. Send imix packets [64,1518] from the packet generator as common step 2, then check the throughput.

Test Case 4: Loopback vhost async test with each queue using 2 DMA devices
----------------------------------------------------------------------------
Since each tx/rx queue only supports using one DMA device, this case is designed to test whether vhostpmd can work normally when each queue is configured with 2 DMA devices.

1. Bind 3 CBDMA devices to vfio-pci as common step 1.

2. Launch vhost by the below command::

    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 --file-prefix=vhost -a 0000:00:04.0 -a 0000:00:04.1 -a 0000:00:04.2 \
    --vdev 'net_vhost0,iface=./vhost_net0,queues=2,client=1,dmas=[txq0@0000:00:04.0;txq0@0000:00:04.1;rxq0@0000:00:04.1;rxq0@0000:00:04.2]' \
    --iova=va -- -i --nb-cores=1 --txq=2 --rxq=2 --txd=1024 --rxd=1024
    testpmd> set fwd mac

3. Launch virtio-user with inorder mergeable path::

    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-6 --no-pci --file-prefix=virtio \
    --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=./vhost_net0,mrg_rxbuf=1,in_order=1,queues=2,server=1 \
    -- -i --nb-cores=1 --txq=2 --rxq=2 --txd=1024 --rxd=1024
    testpmd> set fwd csum
    testpmd> start

4. Send packets from vhost-user testpmd and check the throughput::

    testpmd> set txpkts 1024
    testpmd> start tx_first 32
    testpmd> show port stats all

Test Case 5: Loopback vhost async test with dmas parameters out of order
--------------------------------------------------------------------------
This case is designed to test whether vhostpmd can work normally when the dmas parameters are out of order.

1. Bind 2 CBDMA devices to vfio-pci as common step 1.

2. Launch vhost by the below command::

    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 --file-prefix=vhost -a 0000:00:04.0 -a 0000:00:04.1 \
    --vdev 'net_vhost0,iface=./vhost_net0,queues=4,client=1,dmas=[rxq3@0000:00:04.1;txq0@0000:00:04.0;rxq1@0000:00:04.0;txq2@0000:00:04.1]' \
    --iova=va -- -i --nb-cores=1 --txq=4 --rxq=4 --txd=1024 --rxd=1024
    testpmd> set fwd mac

3. Launch virtio-user with inorder mergeable path::

    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-6 --no-pci --file-prefix=virtio \
    --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=./vhost_net0,mrg_rxbuf=1,in_order=1,queues=4,server=1 \
    -- -i --nb-cores=1 --txq=4 --rxq=4 --txd=1024 --rxd=1024
    testpmd> set fwd csum
    testpmd> start

4. Send packets from vhost-user testpmd and check the throughput (an optional extra check is sketched after this test case)::

    testpmd> set txpkts 1024
    testpmd> start tx_first 32
    testpmd> show port stats all
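For the two loopback cases above, an optional extra check (not part of the original
steps) is to stop forwarding on the vhost side after the throughput check; testpmd then
prints the accumulated forwarding statistics for each port, which helps confirm that
traffic was actually forwarded on the configured queues::

    testpmd> stop
    testpmd> show port stats all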
Test Case 6: VM2VM split and packed ring mergeable path with cbdma enable and server mode
-------------------------------------------------------------------------------------------
This case tests that split and packed ring with CBDMA can work normally when the front-end changes from virtio-net to virtio-pmd.

1. Bind 16 CBDMA channels to vfio-pci as common step 1.

2. Launch testpmd with 2 vhost ports by the below commands::

    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-5 -n 4 --file-prefix=vhost -a 0000:00:04.0 -a 0000:00:04.1 -a 0000:00:04.2 -a 0000:00:04.3 -a 0000:00:04.4 -a 0000:00:04.5 -a 0000:00:04.6 -a 0000:00:04.7 \
    -a 0000:80:04.0 -a 0000:80:04.1 -a 0000:80:04.2 -a 0000:80:04.3 -a 0000:80:04.4 -a 0000:80:04.5 -a 0000:80:04.6 -a 0000:80:04.7 \
    --vdev 'net_vhost0,iface=vhost-net0,client=1,queues=8,dmas=[txq0@0000:00:04.0;txq1@0000:00:04.1;txq2@0000:00:04.2;txq3@0000:00:04.3;txq4@0000:00:04.4;txq5@0000:00:04.1;rxq2@0000:00:04.2;rxq3@0000:00:04.3;rxq4@0000:00:04.4;rxq5@0000:00:04.5;rxq6@0000:00:04.6;rxq7@0000:00:04.7]' \
    --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=8,dmas=[txq0@0000:80:04.0;txq1@0000:80:04.1;txq2@0000:80:04.2;txq3@0000:80:04.3;txq4@0000:80:04.4;txq5@0000:80:04.1;rxq2@0000:80:04.2;rxq3@0000:80:04.3;rxq4@0000:80:04.4;rxq5@0000:80:04.5;rxq6@0000:80:04.6;rxq7@0000:80:04.7]' \
    -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=8 --txq=8
    testpmd> start

3. Launch VM1 and VM2::

    taskset -c 6-16 qemu-system-x86_64 -name vm1 -enable-kvm -cpu host -smp 9 -m 4096 \
    -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \
    -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu20-04.img \
    -chardev socket,path=/tmp/vm1_qga0.sock,server,nowait,id=vm1_qga0 -device virtio-serial \
    -device virtserialport,chardev=vm1_qga0,name=org.qemu.guest_agent.2 -daemonize \
    -monitor unix:/tmp/vm1_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
    -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6002-:22 \
    -chardev socket,id=char0,path=./vhost-net0,server \
    -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues=8 \
    -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=false,mrg_rxbuf=off,mq=on,vectors=40,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on -vnc :10

    taskset -c 17-27 qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 9 -m 4096 \
    -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \
    -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu20-04-2.img \
    -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \
    -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
    -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
    -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6003-:22 \
    -chardev socket,id=char0,path=./vhost-net1,server \
    -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues=8 \
    -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,disable-modern=false,mrg_rxbuf=off,mq=on,vectors=40,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on,packed=on -vnc :12

4. On VM1, set the virtio device IP address and add a static ARP entry::

    ethtool -L ens5 combined 8
    ifconfig ens5 1.1.1.2
    arp -s 1.1.1.8 52:54:00:00:00:02

5. On VM2, set the virtio device IP address and add a static ARP entry::

    ethtool -L ens5 combined 8
    ifconfig ens5 1.1.1.8
    arp -s 1.1.1.2 52:54:00:00:00:01
6. Scp a 1MB file from VM1 to VM2::

    Under VM1, run: `scp <file> root@1.1.1.8:/`, where <file> is the file name.

7. Check the iperf performance between the two VMs by the below commands::

    Under VM1, run: `iperf -s -i 1`
    Under VM2, run: `iperf -c 1.1.1.2 -i 1 -t 60`

8. On VM1 and VM2, bind the virtio device to the vfio-pci driver::

    modprobe vfio
    modprobe vfio-pci
    echo 1 > /sys/module/vfio/parameters/enable_unsafe_noiommu_mode
    ./usertools/dpdk-devbind.py --force --bind=vfio-pci 0000:00:05.0

9. Launch testpmd in VM1::

    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x3 -n 4 -- -i --tx-offloads=0x00 --enable-hw-vlan-strip --txq=8 --rxq=8 --txd=1024 --rxd=1024
    testpmd> set fwd mac
    testpmd> start

10. Launch testpmd in VM2 and send imix packets, then check that the imix packets can be looped between the two VMs for 1 minute::

    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x3 -n 4 -- -i --tx-offloads=0x00 --enable-hw-vlan-strip --txq=8 --rxq=8 --txd=1024 --rxd=1024
    testpmd> set fwd mac
    testpmd> set txpkts 64,256,512
    testpmd> start tx_first 32
    testpmd> show port stats all

11. Rerun steps 4-10.
--
2.25.1