From: Wei Ling <weix.ling@intel.com>
To: dts@dpdk.org
Cc: Wei Ling <weix.ling@intel.com>
Subject: [dts][PATCH V1 1/2] test_plans/virtio_event_idx_interrupt_cbdma: modify re-run times from 100 to 10
Date: Tue, 28 Mar 2023 15:40:57 +0800
Message-ID: <20230328074058.3796087-2-weix.ling@intel.com>
In-Reply-To: <20230328074058.3796087-1-weix.ling@intel.com>
Modify the re-run count from 100 to 10 to reduce run time.
Signed-off-by: Wei Ling <weix.ling@intel.com>
---
...io_event_idx_interrupt_cbdma_test_plan.rst | 122 +++++++++---------
1 file changed, 61 insertions(+), 61 deletions(-)
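For context, the loop being shortened is the driver-reload check in test cases 1 and 3
(steps 4-6). A minimal sketch of that loop on the VM, now run 10 times instead of 100,
assuming ens3 is the virtio-net interface and 00:03.0 its PCI address (both are the
placeholder names used in the test plan, not fixed values)::

    # Hypothetical illustration only; substitute the real interface name
    # and PCI address reported inside the VM.
    for i in $(seq 1 10); do
        ifconfig ens3 down
        ./usertools/dpdk-devbind.py -u 00:03.0             # unbind virtio-net
        ./usertools/dpdk-devbind.py -b virtio-pci 00:03.0  # rebind the driver
        ifconfig ens3 1.1.1.2                              # re-assign the IP
        timeout 10 tcpdump -i ens3 -c 5                    # confirm packets still arrive
    done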
diff --git a/test_plans/virtio_event_idx_interrupt_cbdma_test_plan.rst b/test_plans/virtio_event_idx_interrupt_cbdma_test_plan.rst
index 55008f07..8cf34c0b 100644
--- a/test_plans/virtio_event_idx_interrupt_cbdma_test_plan.rst
+++ b/test_plans/virtio_event_idx_interrupt_cbdma_test_plan.rst
@@ -32,24 +32,24 @@ General set up
--------------
1. Compile DPDK::
- # CC=gcc meson --werror -Denable_kmods=True -Dlibdir=lib -Dexamples=all --default-library=static <dpdk build dir>
- # ninja -C <dpdk build dir> -j 110
- For example:
- CC=gcc meson --werror -Denable_kmods=True -Dlibdir=lib -Dexamples=all --default-library=static x86_64-native-linuxapp-gcc
- ninja -C x86_64-native-linuxapp-gcc -j 110
+ # CC=gcc meson --werror -Denable_kmods=True -Dlibdir=lib -Dexamples=all --default-library=static <dpdk build dir>
+ # ninja -C <dpdk build dir> -j 110
+ For example:
+ CC=gcc meson --werror -Denable_kmods=True -Dlibdir=lib -Dexamples=all --default-library=static x86_64-native-linuxapp-gcc
+ ninja -C x86_64-native-linuxapp-gcc -j 110
2. Get the PCI device ID and DMA device IDs of the DUT; for example, 0000:18:00.0 is a PCI device ID, and 0000:00:04.0 and 0000:00:04.1 are DMA device IDs::
- <dpdk dir># ./usertools/dpdk-devbind.py -s
+ <dpdk dir># ./usertools/dpdk-devbind.py -s
- Network devices using kernel driver
- ===================================
- 0000:18:00.0 'Device 159b' if=ens785f0 drv=ice unused=vfio-pci
+ Network devices using kernel driver
+ ===================================
+ 0000:18:00.0 'Device 159b' if=ens785f0 drv=ice unused=vfio-pci
- DMA devices using kernel driver
- ===============================
- 0000:00:04.0 'Sky Lake-E CBDMA Registers 2021' drv=ioatdma unused=vfio-pci
- 0000:00:04.1 'Sky Lake-E CBDMA Registers 2021' drv=ioatdma unused=vfio-pci
+ DMA devices using kernel driver
+ ===============================
+ 0000:00:04.0 'Sky Lake-E CBDMA Registers 2021' drv=ioatdma unused=vfio-pci
+ 0000:00:04.1 'Sky Lake-E CBDMA Registers 2021' drv=ioatdma unused=vfio-pci
Test case
=========
@@ -62,11 +62,11 @@ operations with CBDMA channels.
1. Bind one nic port and one cbdma channel to vfio-pci, then launch the vhost sample with the commands below::
- rm -rf vhost-net*
- ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0xF0000000 -n 4 --file-prefix=vhost \
- --vdev 'net_vhost,iface=vhost-net,queues=1,dmas=[txq0@0000:00:04.0;rxq0@0000:00:04.0]' \
- -- -i --nb-cores=1 --txd=1024 --rxd=1024
- testpmd> start
+ rm -rf vhost-net*
+ ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0xF0000000 -n 4 --file-prefix=vhost \
+ --vdev 'net_vhost,iface=vhost-net,queues=1,dmas=[txq0@0000:00:04.0;rxq0@0000:00:04.0]' \
+ -- -i --nb-cores=1 --txd=1024 --rxd=1024
+ testpmd> start
2. Launch VM::
@@ -82,21 +82,21 @@ operations with CBDMA channels.
3. On VM1, set the virtio device IP, send 10M packets from the packet generator to the nic, then check that the virtio device can receive packets::
- ifconfig [ens3] 1.1.1.2 # [ens3] is the name of virtio-net
- tcpdump -i [ens3]
+ ifconfig [ens3] 1.1.1.2 # [ens3] is the name of virtio-net
+ tcpdump -i [ens3]
4. Reload the virtio-net driver with the commands below::
- ifconfig [ens3] down
- ./usertools/dpdk-devbind.py -u [00:03.0] # [00:03.0] is the pci addr of virtio-net
- ./usertools/dpdk-devbind.py -b virtio-pci [00:03.0]
+ ifconfig [ens3] down
+ ./usertools/dpdk-devbind.py -u [00:03.0] # [00:03.0] is the pci addr of virtio-net
+ ./usertools/dpdk-devbind.py -b virtio-pci [00:03.0]
5. Check that the virtio device can receive packets again::
- ifconfig [ens3] 1.1.1.2
- tcpdump -i [ens3]
+ ifconfig [ens3] 1.1.1.2
+ tcpdump -i [ens3]
-6. Rerun step4 and step5 100 times to check event idx workable after driver reload.
+6. Rerun step4 and step5 10 times to check event idx workable after driver reload.
Test Case 2: Split ring 16 queues virtio-net event idx interrupt mode test with cbdma enable
--------------------------------------------------------------------------------------------
@@ -105,11 +105,11 @@ vhost uses the asynchronous operations with CBDMA channels.
1. Bind one nic port and 4 cbdma channels to vfio-pci, then launch the vhost sample with the commands below::
- rm -rf vhost-net*
- ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-17 -n 4 --file-prefix=vhost \
- --vdev 'net_vhost,iface=vhost-net,queues=16,client=1,dmas=[txq0@0000:00:04.0;txq1@0000:00:04.0;txq2@0000:00:04.0;txq3@0000:00:04.0;txq4@0000:00:04.0;txq5@0000:00:04.0;txq6@0000:00:04.0;txq7@0000:00:04.0;txq8@0000:00:04.1;txq9@0000:00:04.1;txq10@0000:00:04.1;txq11@0000:00:04.1;txq12@0000:00:04.1;txq13@0000:00:04.1;txq14@0000:00:04.1;txq15@0000:00:04.1;rxq0@0000:00:04.2;rxq1@0000:00:04.2;rxq2@0000:00:04.2;rxq3@0000:00:04.2;rxq4@0000:00:04.2;rxq5@0000:00:04.2;rxq6@0000:00:04.2;rxq7@0000:00:04.2;rxq8@0000:00:04.3;rxq9@0000:00:04.3;rxq10@0000:00:04.3;rxq11@0000:00:04.3;rxq12@0000:00:04.3;rxq13@0000:00:04.3;rxq14@0000:00:04.3;rxq15@0000:00:04.3]' \
- -- -i --nb-cores=16 --txd=1024 --rxd=1024 --rxq=16 --txq=16
- testpmd> start
+ rm -rf vhost-net*
+ ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-17 -n 4 --file-prefix=vhost \
+ --vdev 'net_vhost,iface=vhost-net,queues=16,client=1,dmas=[txq0@0000:00:04.0;txq1@0000:00:04.0;txq2@0000:00:04.0;txq3@0000:00:04.0;txq4@0000:00:04.0;txq5@0000:00:04.0;txq6@0000:00:04.0;txq7@0000:00:04.0;txq8@0000:00:04.1;txq9@0000:00:04.1;txq10@0000:00:04.1;txq11@0000:00:04.1;txq12@0000:00:04.1;txq13@0000:00:04.1;txq14@0000:00:04.1;txq15@0000:00:04.1;rxq0@0000:00:04.2;rxq1@0000:00:04.2;rxq2@0000:00:04.2;rxq3@0000:00:04.2;rxq4@0000:00:04.2;rxq5@0000:00:04.2;rxq6@0000:00:04.2;rxq7@0000:00:04.2;rxq8@0000:00:04.3;rxq9@0000:00:04.3;rxq10@0000:00:04.3;rxq11@0000:00:04.3;rxq12@0000:00:04.3;rxq13@0000:00:04.3;rxq14@0000:00:04.3;rxq15@0000:00:04.3]' \
+ -- -i --nb-cores=16 --txd=1024 --rxd=1024 --rxq=16 --txq=16
+ testpmd> start
2. Launch VM::
@@ -125,18 +125,18 @@ vhost uses the asynchronous operations with CBDMA channels.
3. On VM1, set the virtio device IP and enable virtio-net with 16 queues::
- ifconfig [ens3] 1.1.1.2 # [ens3] is the name of virtio-net
- ethtool -L [ens3] combined 16
+ ifconfig [ens3] 1.1.1.2 # [ens3] is the name of virtio-net
+ ethtool -L [ens3] combined 16
4. Send 10M packets with different IPs from the packet generator to the nic, then check the virtio-net interrupt counts with the command below in the VM::
- cat /proc/interrupts
+ cat /proc/interrupts
5. Stop testpmd and check that each queue has received new packets, then start testpmd and check again that each queue receives new packets::
- testpmd> stop
- testpmd> start
- testpmd> stop
+ testpmd> stop
+ testpmd> start
+ testpmd> stop
Test Case 3: Packed ring virtio-pci driver reload test with CBDMA enable
------------------------------------------------------------------------
@@ -146,11 +146,11 @@ with CBDMA channels.
1. Bind one nic port and one cbdma channel to vfio-pci, then launch the vhost sample with the commands below::
- rm -rf vhost-net*
- ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0xF0000000 -n 4 --file-prefix=vhost \
- --vdev 'net_vhost,iface=vhost-net,queues=1,dmas=[txq0@0000:00:04.0;rxq0@0000:00:04.0]' \
- -- -i --nb-cores=1 --txd=1024 --rxd=1024
- testpmd> start
+ rm -rf vhost-net*
+ ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0xF0000000 -n 4 --file-prefix=vhost \
+ --vdev 'net_vhost,iface=vhost-net,queues=1,dmas=[txq0@0000:00:04.0;rxq0@0000:00:04.0]' \
+ -- -i --nb-cores=1 --txd=1024 --rxd=1024
+ testpmd> start
2. Launch VM::
@@ -166,21 +166,21 @@ with CBDMA channels.
3. On VM1, set the virtio device IP, send 10M packets from the packet generator to the nic, then check that the virtio device can receive packets::
- ifconfig [ens3] 1.1.1.2 # [ens3] is the name of virtio-net
- tcpdump -i [ens3]
+ ifconfig [ens3] 1.1.1.2 # [ens3] is the name of virtio-net
+ tcpdump -i [ens3]
4. Reload the virtio-net driver with the commands below::
- ifconfig [ens3] down
- ./usertools/dpdk-devbind.py -u [00:03.0] # [00:03.0] is the pci addr of virtio-net
- ./usertools/dpdk-devbind.py -b virtio-pci [00:03.0]
+ ifconfig [ens3] down
+ ./usertools/dpdk-devbind.py -u [00:03.0] # [00:03.0] is the pci addr of virtio-net
+ ./usertools/dpdk-devbind.py -b virtio-pci [00:03.0]
5. Check that the virtio device can receive packets again::
- ifconfig [ens3] 1.1.1.2
- tcpdump -i [ens3]
+ ifconfig [ens3] 1.1.1.2
+ tcpdump -i [ens3]
-6. Rerun step4 and step5 100 times to check event idx workable after driver reload.
+6. Rerun step4 and step5 10 times to check event idx workable after driver reload.
Test Case 4: Packed ring 16 queues virtio-net event idx interrupt mode test with cbdma enable
---------------------------------------------------------------------------------------------
@@ -189,11 +189,11 @@ uses the asynchronous operations with CBDMA channels.
1. Bind one nic port and 4 cbdma channels to vfio-pci, then launch the vhost sample with the commands below::
- rm -rf vhost-net*
- ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-17 -n 4 --file-prefix=vhost \
- --vdev 'net_vhost,iface=vhost-net,queues=16,client=1,dmas=[txq0@0000:00:04.0;txq1@0000:00:04.0;txq2@0000:00:04.0;txq3@0000:00:04.0;txq4@0000:00:04.0;txq5@0000:00:04.0;txq6@0000:00:04.0;txq7@0000:00:04.0;txq8@0000:00:04.1;txq9@0000:00:04.1;txq10@0000:00:04.1;txq11@0000:00:04.1;txq12@0000:00:04.1;txq13@0000:00:04.1;txq14@0000:00:04.1;txq15@0000:00:04.1;rxq0@0000:00:04.2;rxq1@0000:00:04.2;rxq2@0000:00:04.2;rxq3@0000:00:04.2;rxq4@0000:00:04.2;rxq5@0000:00:04.2;rxq6@0000:00:04.2;rxq7@0000:00:04.2;rxq8@0000:00:04.3;rxq9@0000:00:04.3;rxq10@0000:00:04.3;rxq11@0000:00:04.3;rxq12@0000:00:04.3;rxq13@0000:00:04.3;rxq14@0000:00:04.3;rxq15@0000:00:04.3]' \
- -- -i --nb-cores=16 --txd=1024 --rxd=1024 --rxq=16 --txq=16
- testpmd> start
+ rm -rf vhost-net*
+ ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-17 -n 4 --file-prefix=vhost \
+ --vdev 'net_vhost,iface=vhost-net,queues=16,client=1,dmas=[txq0@0000:00:04.0;txq1@0000:00:04.0;txq2@0000:00:04.0;txq3@0000:00:04.0;txq4@0000:00:04.0;txq5@0000:00:04.0;txq6@0000:00:04.0;txq7@0000:00:04.0;txq8@0000:00:04.1;txq9@0000:00:04.1;txq10@0000:00:04.1;txq11@0000:00:04.1;txq12@0000:00:04.1;txq13@0000:00:04.1;txq14@0000:00:04.1;txq15@0000:00:04.1;rxq0@0000:00:04.2;rxq1@0000:00:04.2;rxq2@0000:00:04.2;rxq3@0000:00:04.2;rxq4@0000:00:04.2;rxq5@0000:00:04.2;rxq6@0000:00:04.2;rxq7@0000:00:04.2;rxq8@0000:00:04.3;rxq9@0000:00:04.3;rxq10@0000:00:04.3;rxq11@0000:00:04.3;rxq12@0000:00:04.3;rxq13@0000:00:04.3;rxq14@0000:00:04.3;rxq15@0000:00:04.3]' \
+ -- -i --nb-cores=16 --txd=1024 --rxd=1024 --rxq=16 --txq=16
+ testpmd> start
2. Launch VM::
@@ -209,15 +209,15 @@ uses the asynchronous operations with CBDMA channels.
3. On VM1, configure the virtio device IP and enable virtio-net with 16 queues::
- ifconfig [ens3] 1.1.1.2 # [ens3] is the name of virtio-net
- ethtool -L [ens3] combined 16
+ ifconfig [ens3] 1.1.1.2 # [ens3] is the name of virtio-net
+ ethtool -L [ens3] combined 16
4. Send 10M packets with different IPs from the packet generator to the nic, then check the virtio-net interrupt counts with the command below in the VM::
- cat /proc/interrupts
+ cat /proc/interrupts
5. Stop testpmd and check that each queue has received new packets, then start testpmd and check again that each queue receives new packets::
- testpmd> stop
- testpmd> start
- testpmd> stop
+ testpmd> stop
+ testpmd> start
+ testpmd> stop
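For reference, the "Bind one nic port and ... cbdma channels to vfio-pci" step at the
start of each test case is typically done with the same dpdk-devbind.py tool shown in
the general set up; a minimal sketch using the example addresses listed there (assuming
the vfio-pci module is available on the DUT)::

    modprobe vfio-pci
    # bind the example NIC port and two of the CBDMA channels to vfio-pci
    ./usertools/dpdk-devbind.py -b vfio-pci 0000:18:00.0
    ./usertools/dpdk-devbind.py -b vfio-pci 0000:00:04.0 0000:00:04.1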
--
2.25.1