From: Yinan Wang <yinan.wang@intel.com>
To: dts@dpdk.org
Cc: Yinan Wang <yinan.wang@intel.com>
Subject: [dts] [PATCH v1] test_plans/vhost_cbdma_test_plan.rst
Date: Thu, 1 Apr 2021 13:12:45 -0400
Message-ID: <20210401171245.686174-1-yinan.wang@intel.com>

Add cases for cbdma packed ring test.

Signed-off-by: Yinan Wang <yinan.wang@intel.com>
---
 test_plans/vhost_cbdma_test_plan.rst | 140 ++++++++++++++++++++++++++-
 1 file changed, 136 insertions(+), 4 deletions(-)

diff --git a/test_plans/vhost_cbdma_test_plan.rst b/test_plans/vhost_cbdma_test_plan.rst
index a1ceda74..c827adaa 100644
--- a/test_plans/vhost_cbdma_test_plan.rst
+++ b/test_plans/vhost_cbdma_test_plan.rst
@@ -1,4 +1,4 @@
-.. Copyright (c) <2020>, Intel Corporation
+.. Copyright (c) <2021>, Intel Corporation
    All rights reserved.
 
    Redistribution and use in source and binary forms, with or without
@@ -126,10 +126,10 @@ TG --> NIC --> Vhost --> Virtio--> Vhost --> NIC --> TG
     >set fwd mac
     >start
 
-Test Case2: Dynamic queue number test for DMA-accelerated vhost Tx operations
-=============================================================================
+Test Case2: Split ring dynamic queue number test for DMA-accelerated vhost Tx operations
+========================================================================================
 
-1. Bind four cbdma port and one nic port to igb_uio, then launch vhost by below command::
+1. Bind four cbdma channels and one nic port to igb_uio, then launch vhost by below command::
 
     ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 28-29 \
     --file-prefix=vhost --vdev 'net_vhost0,iface=/tmp/s0,queues=2,client=1,dmas=[txq0@80:04.5;txq1@80:04.6],dmathr=1024' \
@@ -222,4 +222,136 @@ Test Case3: CBDMA threshold value check
 
     dma parameters: vid1,qid0,dma*,threshold:4096
     dma parameters: vid1,qid2,dma*,threshold:4096
 
+Test Case 4: PVP packed ring all path with DMA-accelerated vhost enqueue
+========================================================================
+
+Packet pipeline:
+================
+TG --> NIC --> Vhost --> Virtio--> Vhost --> NIC --> TG
+
+1. Bind one cbdma channel and one nic port to igb_uio, then launch vhost by below command::
+
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-3 --file-prefix=vhost --vdev 'net_vhost0,iface=/tmp/s0,queues=1,dmas=[txq0@80:04.0],dmathr=1024' \
+    -- -i --nb-cores=1 --txd=1024 --rxd=1024
+    >set fwd mac
+    >start
+
+2. Launch virtio-user with inorder mergeable path::
+
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
+    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=1,in_order=1,queues=1,packed_vq=1 \
+    -- -i --tx-offloads=0x0 --enable-hw-vlan-strip --nb-cores=1 --txd=1024 --rxd=1024
+    >set fwd mac
+    >start
+
+3. Send imix packets [64,1518] from packet generator, check that the throughput reaches the expected value, restart the vhost port, then check the throughput again::
+
+    testpmd>show port stats all
+    testpmd>stop
+    testpmd>start
+    testpmd>show port stats all
+
+4. Relaunch virtio-user with mergeable path, then repeat step 3::
+
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
+    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=1,in_order=0,queues=1,packed_vq=1 \
+    -- -i --tx-offloads=0x0 --enable-hw-vlan-strip --nb-cores=1 --txd=1024 --rxd=1024
+    >set fwd mac
+    >start
+
+5. Relaunch virtio-user with inorder non-mergeable path, then repeat step 3::
+
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
+    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=0,in_order=1,queues=1,packed_vq=1 \
+    -- -i --tx-offloads=0x0 --enable-hw-vlan-strip --nb-cores=1 --txd=1024 --rxd=1024
+    >set fwd mac
+    >start
+
+6. Relaunch virtio-user with non-mergeable path, then repeat step 3::
+
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
+    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=0,in_order=0,queues=1,packed_vq=1 \
+    -- -i --tx-offloads=0x0 --enable-hw-vlan-strip --nb-cores=1 --txd=1024 --rxd=1024
+    >set fwd mac
+    >start
+
+7. Relaunch virtio-user with vectorized path, then repeat step 3::
+
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 \
+    --no-pci --file-prefix=virtio \
+    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=0,in_order=1,vectorized=1,queues=1,packed_vq=1 \
+    -- -i --nb-cores=1 --txd=1024 --rxd=1024
+    >set fwd mac
+    >start
+
+8. Relaunch virtio-user with vector_rx path, then repeat step 3::
+
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 \
+    --no-pci --file-prefix=virtio \
+    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=0,in_order=1,queues=1,packed_vq=1 \
+    -- -i --enable-hw-vlan-strip --nb-cores=1 --txd=1024 --rxd=1024
+    >set fwd mac
+    >start
+
+Test Case5: Packed ring dynamic queue number test for DMA-accelerated vhost Tx operations
+=========================================================================================
+
+1. Bind four cbdma channels and one nic port to igb_uio, then launch vhost by below command::
+
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 28-29 \
+    --file-prefix=vhost --vdev 'net_vhost0,iface=/tmp/s0,queues=2,client=1,dmas=[txq0@80:04.5;txq1@80:04.6],dmathr=1024' \
+    -- -i --nb-cores=1 --txd=1024 --rxd=1024 --txq=2 --rxq=2
+    >set fwd mac
+    >start
+
+2. Launch virtio-user by below command::
+
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 30-31 --no-pci --file-prefix=virtio \
+    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=1,in_order=1,queues=2,server=1,packed_vq=1 \
+    -- -i --tx-offloads=0x0 --enable-hw-vlan-strip --nb-cores=1 --txd=1024 --rxd=1024 --txq=2 --rxq=2
+    >set fwd mac
+    >start
+
+3. Send imix packets [64,1518] from packet generator with random ip, check that performance reaches the target.
+
+4. Stop vhost port, check from the vhost log that packets exist in both queues in the RX and TX directions.
+
+5. On virtio-user side, dynamically change the rx queue number from 2 queues to 1 queue, then check that the remaining queue can RX/TX normally::
+
+    testpmd>port stop all
+    testpmd>port config all rxq 1
+    testpmd>port config all txq 1
+    testpmd>port start all
+    testpmd>start
+    testpmd>show port stats all
+
+6. Relaunch virtio-user with vectorized path and 2 queues::
+
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 30-31 --no-pci --file-prefix=virtio \
+    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=0,in_order=1,vectorized=1,queues=2,server=1,packed_vq=1 \
+    -- -i --nb-cores=1 --txd=1024 --rxd=1024 --txq=2 --rxq=2
+    >set fwd mac
+    >start
+
+7. Send imix packets [64,1518] from packet generator with random ip, check that performance reaches the target.
+
+8. Stop vhost port, check from the vhost log that packets exist in queue0 in the RX and TX directions.
+
+9. On vhost side, dynamically change the rx queue number from 2 queues to 1 queue, then check that the remaining queue can RX/TX normally::
+
+    testpmd>port stop all
+    testpmd>port config all rxq 1
+    testpmd>port config all txq 1
+    testpmd>port start all
+    testpmd>start
+    testpmd>show port stats all
+
+10. Relaunch vhost with another two cbdma channels and 2 queues, check that performance reaches the target::
+
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 28-29 \
+    --file-prefix=vhost --vdev 'net_vhost0,iface=/tmp/s0,queues=2,client=1,dmas=[txq0@00:04.5;txq1@00:04.6],dmathr=512' \
+    -- -i --nb-cores=1 --txd=1024 --rxd=1024 --txq=2 --rxq=2
+    >set fwd mac
+    >start
+
+11. Stop vhost port, check from the vhost log that packets exist in both queues in the RX and TX directions.
--
2.25.1
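The `dmas=[txqN@BDF;...]` vdev argument used throughout this plan maps each vhost Tx queue to a CBDMA channel by its PCI address, and `dmathr` sets the packet-size threshold at which copies are offloaded to the DMA engine instead of the CPU. As a minimal illustrative sketch of those semantics (hypothetical helper names, not DTS or DPDK code; whether the threshold comparison is `>=` or `>` is an assumption):

```python
def parse_dmas(devarg: str) -> dict:
    """Parse a vhost 'dmas' devarg such as '[txq0@80:04.5;txq1@80:04.6]'
    into a {tx_queue_name: cbdma_channel_bdf} mapping."""
    mapping = {}
    for pair in devarg.strip("[]").split(";"):
        queue, _, channel = pair.partition("@")
        mapping[queue] = channel
    return mapping


def use_dma(pkt_len: int, dmathr: int) -> bool:
    """Packets at or above the dmathr threshold are copied by the CBDMA
    engine; smaller packets fall back to CPU copy (assumed semantics)."""
    return pkt_len >= dmathr


print(parse_dmas("[txq0@80:04.5;txq1@80:04.6]"))
print(use_dma(1518, 1024), use_dma(64, 1024))
```

This is why the imix range [64,1518] exercises both copy paths in one run: with `dmathr=1024`, the 1518-byte packets go through DMA while the 64-byte packets are copied by the CPU.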