From: Yingya Han <yingyax.han@intel.com>
To: dts@dpdk.org
Cc: Yingya Han <yingyax.han@intel.com>
Subject: [dts][PATCH V1 1/3]test_plans: add ice_header_split_perf test plan
Date: Mon, 26 Dec 2022 06:55:14 +0000 [thread overview]
Message-ID: <20221226065514.10020-1-yingyax.han@intel.com> (raw)
Signed-off-by: Yingya Han <yingyax.han@intel.com>
---
test_plans/ice_header_split_perf_test_plan.rst | 236 ++++++++++++++++++
test_plans/index.rst | 1 +
2 files changed, 237 insertions(+)
create mode 100644 test_plans/ice_header_split_perf_test_plan.rst
diff --git a/test_plans/ice_header_split_perf_test_plan.rst b/test_plans/ice_header_split_perf_test_plan.rst
new file mode 100644
index 00000000..da07a28b
--- /dev/null
+++ b/test_plans/ice_header_split_perf_test_plan.rst
@@ -0,0 +1,236 @@
+.. SPDX-License-Identifier: BSD-3-Clause
+ Copyright(c) 2022 Intel Corporation
+
+==============================================================
+Benchmark the performance of header split forwarding with E810
+==============================================================
+
+Description
+===========
+When an Rx queue is configured with the RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT
+offload and a corresponding protocol, received packets are split directly
+into two different mempools.
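The split behavior described above can be pictured with a small Python sketch. This is an illustration only, not driver code: the NIC places the protocol header in a buffer from one mempool and the remaining payload in a buffer from the second mempool; the helper name and the Ethernet-header split point are our assumptions for the example.

```python
# Illustration of Rx buffer split: the header goes into a buffer from
# one mempool, the payload into a buffer from a second mempool.
# Here we mimic a split at the Ethernet header boundary.

ETH_HDR_LEN = 14  # destination MAC (6) + source MAC (6) + EtherType (2)

def buffer_split(packet: bytes, hdr_len: int = ETH_HDR_LEN):
    """Split a raw frame into (header buffer, payload buffer)."""
    return packet[:hdr_len], packet[hdr_len:]

# A minimal 64-byte frame: Ethernet header followed by 50 payload bytes.
frame = bytes(range(14)) + b"\x00" * 50
hdr, payload = buffer_split(frame)
print(len(hdr), len(payload))  # 14 50
```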
+
+Prerequisites
+=============
+
+1. Hardware:
+
+    1.1) Header split perf test for Intel® Ethernet Network Adapter E810-CQDA2:
+         1 or 2 NIC cards attached to the same processor, with 1 port used on each NIC.
+    1.2) Header split perf test for Intel® Ethernet Network Adapter E810-XXVDA4:
+         1 NIC card attached to the processor, with 4 ports used.
+
+2. Software::
+
+ dpdk: git clone http://dpdk.org/git/dpdk
+    trex: wget http://trex-tgn.cisco.com/trex/release/v2.93.tar.gz
+
+
+Test Case
+=========
+The test case checks the throughput result with IPv4 traffic: bi-directional
+flows are sent at line rate, and the pass-through rate is then measured.
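The pass-through rate above can be sketched as total packets received over total packets sent, aggregated across both directions. A minimal sketch; the function and variable names are ours, not part of the test harness:

```python
def passthrough_rate(tx_pkts_per_dir, rx_pkts_per_dir):
    """Aggregate pass-through rate for bi-directional flows:
    total packets received / total packets transmitted."""
    tx_total = sum(tx_pkts_per_dir)
    rx_total = sum(rx_pkts_per_dir)
    return rx_total / tx_total if tx_total else 0.0

# Two directions, each sent 1_000_000 packets; one direction dropped 5%.
rate = passthrough_rate([1_000_000, 1_000_000], [1_000_000, 950_000])
print(f"{rate:.2%}")  # 97.50%
```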
+
+Common Steps
+------------
+1. Bind tested ports to vfio-pci::
+
+ <dpdk_dir>#./usertools/dpdk-devbind.py -s
+ 0000:17:00.0 'Device 1592' if=ens5f0 drv=ice unused=vfio-pci
+ 0000:4b:00.0 'Device 1592' if=ens6f0 drv=ice unused=vfio-pci
+ <dpdk_dir>#./usertools/dpdk-devbind.py -b vfio-pci <pci device id>
+ <dpdk_dir>#./usertools/dpdk-devbind.py -b vfio-pci 0000:17:00.0
+ <dpdk_dir>#./usertools/dpdk-devbind.py -b vfio-pci 0000:4b:00.0
+
+2. Configure traffic generator to send traffic
+
+ Test flow MAC table.
+
+ +------+---------+------------+----------------+
+ | Flow | Traffic | MAC | MAC |
+ | | Gen. | Src. | Dst. |
+ | | Port | Address | Address |
+ +======+=========+============+================+
+ | 1 | TG0 | Random MAC | 11:22:33:44:55 |
+ +------+---------+------------+----------------+
+   | 2    | TG1     | Random MAC | 11:22:33:44:55 |
+   +------+---------+------------+----------------+
+   | 3    | TG2     | Random MAC | 11:22:33:44:55 |
+   +------+---------+------------+----------------+
+   | 4    | TG3     | Random MAC | 11:22:33:44:55 |
+ +------+---------+------------+----------------+
+
+ The Flow IP table.
+
+ +------+---------+------------+---------+
+ | Flow | Traffic | IPV4 | IPV4 |
+ | | Gen. | Src. | Dest. |
+ | | Port | Address | Address |
+ +======+=========+============+=========+
+ | 1 | TG0 | Any IP | 2.1.1.1 |
+ +------+---------+------------+---------+
+ | 2 | TG1 | Any IP | 1.1.1.1 |
+ +------+---------+------------+---------+
+ | 3 | TG2 | Any IP | 4.1.1.1 |
+ +------+---------+------------+---------+
+ | 4 | TG3 | Any IP | 3.1.1.1 |
+ +------+---------+------------+---------+
+
+   Set the packet length: 64 bytes to 1518 bytes.
+   The IPv4 destination address increments across 1024 addresses.
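The destination-address sweep above can be sketched with the standard `ipaddress` module. The base address and the count of 1024 come from the tables; the helper name is ours:

```python
import ipaddress

def dest_sweep(base: str, count: int = 1024):
    """Return `count` IPv4 destination addresses incrementing from `base`."""
    start = ipaddress.IPv4Address(base)
    return [str(start + i) for i in range(count)]

addrs = dest_sweep("2.1.1.1")
print(addrs[0], addrs[1], addrs[-1], len(addrs))
# 2.1.1.1 2.1.1.2 2.1.5.0 1024
```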
+
+3. Test results table.
+
+ +-----------+------------+-------------+---------+
+ | Fwd_core | Frame Size | Throughput | Rate |
+ +===========+============+=============+=========+
+ | 1C/1T | 64 | xxxxx Mpps | xxx % |
+ +-----------+------------+-------------+---------+
+ | 1C/1T | ... | xxxxx Mpps | xxx % |
+ +-----------+------------+-------------+---------+
+ | 2C/2T | 64 | xxxxx Mpps | xxx % |
+ +-----------+------------+-------------+---------+
+ | 2C/2T | ... | xxxxx Mpps | xxx % |
+ +-----------+------------+-------------+---------+
+ | 4C/4T | 64 | xxxxx Mpps | xxx % |
+ +-----------+------------+-------------+---------+
+ | 4C/4T | ... | xxxxx Mpps | xxx % |
+ +-----------+------------+-------------+---------+
+ | 8C/8T | 64 | xxxxx Mpps | xxx % |
+ +-----------+------------+-------------+---------+
+ | 8C/8T | ... | xxxxx Mpps | xxx % |
+ +-----------+------------+-------------+---------+
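The Rate column above is throughput as a percentage of the theoretical line rate for the given frame size. A hedged sketch of that arithmetic, assuming a 100 Gbit/s link and the standard 20 bytes of per-frame wire overhead (preamble plus inter-frame gap); the function names are ours:

```python
def line_rate_mpps(link_gbps: float, frame_size: int) -> float:
    """Theoretical packet rate in Mpps for an Ethernet link:
    each frame carries 20 extra bytes on the wire (preamble + IFG)."""
    wire_bits = (frame_size + 20) * 8
    return link_gbps * 1e9 / wire_bits / 1e6

def rate_percent(measured_mpps: float, link_gbps: float, frame_size: int) -> float:
    """Measured throughput as a percentage of theoretical line rate."""
    return 100.0 * measured_mpps / line_rate_mpps(link_gbps, frame_size)

# 64-byte frames on a 100 Gbit/s link: theoretical rate is ~148.81 Mpps.
print(round(line_rate_mpps(100, 64), 2))      # 148.81
print(round(rate_percent(74.4, 100, 64), 1))  # 50.0
```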
+
+Test Case 1: test_perf_enable_header_split_rx_on
+------------------------------------------------
+
+1. Bind PF ports to dpdk driver as common step 1::
+
+ ./usertools/dpdk-devbind.py -b vfio-pci 17:00.0 4b:00.0
+
+2. Start dpdk-testpmd::
+
+ <build_dir>/app/dpdk-testpmd -l 5,6 -n 8 --force-max-simd-bitwidth=64 \
+ -- -i --portmask=0x3 --rxq=1 --txq=1 --txd=1024 --rxd=1024 --forward=rxonly \
+ --nb-cores=1 --mbuf-size=2048,2048
+
+   Note:
+   --force-max-simd-bitwidth: set to 64; the feature only supports 64.
+   --mbuf-size=2048,2048: configure two mempools.
+
+3. Configure MAC header split::
+
+ testpmd>port stop all
+ testpmd>port config 0 rx_offload buffer_split on
+ testpmd>port config 1 rx_offload buffer_split on
+ testpmd>set rxhdrs eth
+ testpmd>port start all
+ testpmd>start
+
+4. Configure traffic generator to send traffic as common step 2.
+
+5. Record Test results as common step 3.
+
+Test case 2: test_perf_disable_header_split_rx_on
+-------------------------------------------------
+
+1. Bind PF ports to dpdk driver as common step 1::
+
+ ./usertools/dpdk-devbind.py -b vfio-pci 17:00.0 4b:00.0
+
+2. Start dpdk-testpmd::
+
+ <build_dir>/app/dpdk-testpmd -l 5,6 -n 8 --force-max-simd-bitwidth=64 \
+ -- -i --portmask=0x3 --rxq=1 --txq=1 --txd=1024 --rxd=1024 --forward=rxonly \
+ --nb-cores=1
+
+   Note:
+   --force-max-simd-bitwidth: set to 64; the feature only supports 64.
+
+3. Configure traffic generator to send traffic as common step 2.
+
+4. Record Test results as common step 3.
+
+5. Start dpdk-testpmd::
+
+ <build_dir>/app/dpdk-testpmd -l 5,6 -n 8 --force-max-simd-bitwidth=64 \
+ -- -i --portmask=0x3 --rxq=1 --txq=1 --txd=1024 --rxd=1024 --forward=rxonly \
+ --nb-cores=1 --mbuf-size=2048,2048
+
+   Note:
+   --force-max-simd-bitwidth: set to 64; the feature only supports 64.
+   --mbuf-size=2048,2048: configure two mempools.
+
+6. Configure traffic generator to send traffic as common step 2.
+
+7. Record Test results as common step 3.
+
+Test case 3: test_perf_enable_header_split
+------------------------------------------
+
+1. Bind PF ports to dpdk driver as common step 1::
+
+ ./usertools/dpdk-devbind.py -b vfio-pci 17:00.0 4b:00.0
+
+2. Start dpdk-testpmd::
+
+ <build_dir>/app/dpdk-testpmd -l 5,6 -n 8 --force-max-simd-bitwidth=64 \
+ -- -i --portmask=0x3 --rxq=1 --txq=1 --txd=1024 --rxd=1024 --forward=rxonly \
+ --nb-cores=1 --mbuf-size=2048,2048
+
+   Note:
+   --force-max-simd-bitwidth: set to 64; the feature only supports 64.
+   --mbuf-size=2048,2048: configure two mempools.
+
+3. Configure inner IPv4/UDP header split::
+
+ testpmd>port stop all
+ testpmd>port config 0 rx_offload buffer_split on
+ testpmd>port config 1 rx_offload buffer_split on
+ testpmd>set rxhdrs inner-ipv4-udp
+ testpmd>port start all
+ testpmd>start
+
+4. Configure traffic generator to send traffic as common step 2.
+
+5. Record Test results as common step 3.
+
+6. Configure traffic generator to send UDP flows.
+
+7. Record Test results as common step 3.
+
+Test case 4: test_perf_disable_header_split
+-------------------------------------------
+
+1. Bind PF ports to dpdk driver as common step 1::
+
+ ./usertools/dpdk-devbind.py -b vfio-pci 17:00.0 4b:00.0
+
+2. Start dpdk-testpmd::
+
+ <build_dir>/app/dpdk-testpmd -l 5,6 -n 8 --force-max-simd-bitwidth=64 \
+ -- -i --portmask=0x3 --rxq=1 --txq=1 --txd=1024 --rxd=1024 --forward=io \
+ --nb-cores=1
+
+   Note:
+   --force-max-simd-bitwidth: set to 64; the feature only supports 64.
+
+3. Configure traffic generator to send traffic as common step 2.
+
+4. Record Test results as common step 3.
+
+5. Start dpdk-testpmd::
+
+ <build_dir>/app/dpdk-testpmd -l 5,6 -n 8 --force-max-simd-bitwidth=64 \
+ -- -i --portmask=0x3 --rxq=1 --txq=1 --txd=1024 --rxd=1024 --forward=io \
+ --nb-cores=1 --mbuf-size=2048,2048
+
+   Note:
+   --force-max-simd-bitwidth: set to 64; the feature only supports 64.
+   --mbuf-size=2048,2048: configure two mempools.
+
+6. Configure traffic generator to send traffic as common step 2.
+
+7. Record Test results as common step 3.
diff --git a/test_plans/index.rst b/test_plans/index.rst
index 9ca954e2..a4e7c7e8 100644
--- a/test_plans/index.rst
+++ b/test_plans/index.rst
@@ -35,6 +35,7 @@ The following are the test plans for the DPDK DTS automated test system.
ice_dcf_switch_filter_pppoe_test_plan
ice_dcf_switch_filter_gtpu_test_plan
ice_dcf_flow_priority_test_plan
+    ice_header_split_perf_test_plan
ice_flow_priority_test_plan
ice_dcf_qos_test_plan
ice_enable_basic_hqos_on_pf_test_plan
--
2.34.1