From: <thaq@marvell.com>
To: <dts@dpdk.org>
Cc: <fmasood@marvell.com>, <avijay@marvell.com>, <jerinj@marvell.com>,
Thanseerulhaq <thaq@marvell.com>
Subject: [dts] [PATCH] eventdev_pipeline_perf_test_plan.rst: Adding Eventdev_pipeline feature performance Testplan.
Date: Fri, 31 May 2019 12:52:44 +0530
Message-ID: <1559287364-19555-1-git-send-email-thaq@marvell.com>
From: Thanseerulhaq <thaq@marvell.com>
Add test cases for 1/2/4 NIC ports covering the eventdev_pipeline atomic, parallel, and ordered stage types.
Signed-off-by: Thanseerulhaq <thaq@marvell.com>
---
test_plans/eventdev_pipeline_perf_test_plan.rst | 257 ++++++++++++++++++++++++
1 file changed, 257 insertions(+)
create mode 100644 test_plans/eventdev_pipeline_perf_test_plan.rst
diff --git a/test_plans/eventdev_pipeline_perf_test_plan.rst b/test_plans/eventdev_pipeline_perf_test_plan.rst
new file mode 100644
index 0000000..f2b2a7e
--- /dev/null
+++ b/test_plans/eventdev_pipeline_perf_test_plan.rst
@@ -0,0 +1,257 @@
+.. SPDX-License-Identifier: BSD-3-Clause
+ Copyright (C) 2019 Marvell International Ltd.
+
+============================
+Eventdev Pipeline Perf Tests
+============================
+
+Prerequisites
+==============
+
+Each of the 10Gb/25Gb/40Gb/100Gb Ethernet ports of the DUT is directly connected in
+full-duplex to a different port of the peer Ixia traffic generator.
+
+Using TCL commands, the Ixia can be configured to send and receive traffic on a given set of ports.
+
+If using VFIO, the kernel must be >= 3.6 and VT-d must be enabled in the
+BIOS. When using VFIO, use the following commands to load the vfio driver
+and bind it to the devices under test ::
+
+ modprobe vfio
+ modprobe vfio-pci
+ usertools/dpdk-devbind.py --bind=vfio-pci device_bus_id
+ usertools/dpdk-devbind.py --bind=vfio-pci eventdev_device_bus_id
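
With several NIC and eventdev devices the bind step repeats; a small helper can emit the devbind command for each bus ID so the list can be reviewed first. This is a sketch only: the bus IDs shown are hypothetical placeholders, and `usertools/dpdk-devbind.py` is assumed to be invoked from the DPDK source root.

```shell
# Sketch: print the dpdk-devbind.py bind command for each PCI bus ID.
# Review the output, then pipe it to sh to perform the actual binding.
bind_vfio() {
    for dev in "$@"; do
        echo "usertools/dpdk-devbind.py --bind=vfio-pci $dev"
    done
}

# Hypothetical NIC and eventdev bus IDs; replace with the DUT's own.
bind_vfio 0002:02:00.0 0002:0e:00.0
```

Running `bind_vfio ... | sh` executes the printed commands.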
+
+Create huge pages
+=================
+Mount hugetlbfs and reserve the required number of huge pages ::
+
+    mkdir -p /dev/huge
+    mount -t hugetlbfs none /dev/huge
+    echo 24 > /proc/sys/vm/nr_hugepages
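
The value 24 above is one possible reservation; in general the page count follows from the memory the application needs divided by the hugepage size, rounded up. A sketch of that arithmetic (the 48 MB target is a hypothetical example):

```shell
# Sketch: number of hugepages needed to cover a target amount of
# memory, rounding up. Page size defaults to 2048 kB (2 MB pages).
pages_needed() {
    target_mb=$1
    hugepage_kb=${2:-2048}
    echo $(( (target_mb * 1024 + hugepage_kb - 1) / hugepage_kb ))
}

pages_needed 48    # -> 24 pages of 2 MB
```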
+
+Configure limits of Eventdev devices
+====================================
+Set the sso and ssow limits of all eventdev devices to zero. Then set the sso and ssow limits of the eventdev device under test to non-zero values, as required by the number of cores/queues ::
+
+    echo 0 > /sys/bus/pci/devices/eventdev_device_bus_id/limits/sso
+    echo 0 > /sys/bus/pci/devices/eventdev_device_bus_id/limits/ssow
+
+Example ::
+
+    echo 0 > /sys/bus/pci/devices/eventdev_device_bus_id/limits/tim
+    echo 1 > /sys/bus/pci/devices/eventdev_device_bus_id/limits/npa
+    echo 16 > /sys/bus/pci/devices/eventdev_device_bus_id/limits/sso
+    echo 32 > /sys/bus/pci/devices/eventdev_device_bus_id/limits/ssow
+
+- ``eventdev_device_bus_id/limits/sso`` : Max limit `256`
+- ``eventdev_device_bus_id/limits/ssow``: Max limit `52`
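
Zeroing the limits of every eventdev device by hand is error-prone; the loop below sketches the step. `SYSFS_ROOT` is parameterised here only so the logic can be exercised outside the DUT; on the DUT it defaults to the real sysfs path, and devices without a `limits/sso` file are skipped.

```shell
# Sketch: set the sso/ssow limits of every eventdev device under
# SYSFS_ROOT to zero. Call this before raising the limits of the
# device under test.
SYSFS_ROOT=${SYSFS_ROOT:-/sys/bus/pci/devices}

zero_sso_limits() {
    for dev in "$SYSFS_ROOT"/*; do
        [ -f "$dev/limits/sso" ] || continue
        echo 0 > "$dev/limits/sso"
        echo 0 > "$dev/limits/ssow"
    done
}
```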
+
+Test Case: Performance 1port atomic test
+========================================
+Description: Execute the performance test with the Atomic_atq stage type in a multi-flow scenario for various core counts.
+
+1. Run the sample application with the command below::
+
+    # ./build/dpdk-eventdev_pipeline -c 0xe00000 -w eventdev_device_bus_id -w device_bus_id -- -w 0xc00000 -n=0 --dump
+
+   Parameters:
+
+   EAL options:
+     -c COREMASK         : Hexadecimal bitmask of cores to run on
+     -w, --pci-whitelist : Add a PCI device to the whitelist.
+                           Only the specified PCI devices are used. The
+                           argument format is <[domain:]bus:devid.func>.
+                           This option can be given several times (once
+                           per device).
+
+   Application options:
+     -w, --worker-mask=MASK : Run workers on the CPUs in MASK
+     -n, --packets=N        : Send N packets (default ~32M); 0 means no limit
+     -D, --dump             : Print detailed statistics before exit
+
+2. Use Ixia to send a large number of packets (with the same 5-tuple and with different 5-tuples).
+
+3. Observe the rate of packets received (Rx rate) on Ixia.
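
The masks used above are easier to audit when derived from core numbers: `0xe00000` selects cores 21-23 for `-c`, and `0xc00000` selects cores 22-23 for the worker mask. A minimal sketch of that derivation:

```shell
# Sketch: build a hexadecimal core mask from a list of core numbers.
coremask() {
    mask=0
    for core in "$@"; do
        mask=$(( mask | (1 << core) ))
    done
    printf '0x%x\n' "$mask"
}

coremask 21 22 23    # -> 0xe00000 (the -c COREMASK above)
coremask 22 23       # -> 0xc00000 (the -w worker mask above)
```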
+
+Test Case: Performance 1port parallel test
+==========================================
+Description: Execute the performance test with the Parallel_atq stage type in a multi-flow scenario for various core counts.
+
+1. Run the sample application with the command below::
+
+    # ./build/dpdk-eventdev_pipeline -c 0xe00000 -w eventdev_device_bus_id -w device_bus_id -- -w 0xc00000 -n=0 -p --dump
+
+   Parameters:
+
+   EAL options:
+     -c COREMASK         : Hexadecimal bitmask of cores to run on
+     -w, --pci-whitelist : Add a PCI device to the whitelist.
+                           Only the specified PCI devices are used. The
+                           argument format is <[domain:]bus:devid.func>.
+                           This option can be given several times (once
+                           per device).
+
+   Application options:
+     -w, --worker-mask=MASK : Run workers on the CPUs in MASK
+     -n, --packets=N        : Send N packets (default ~32M); 0 means no limit
+     -p, --parallel         : Use parallel scheduling
+     -D, --dump             : Print detailed statistics before exit
+
+2. Use Ixia to send a large number of packets (with the same 5-tuple and with different 5-tuples).
+
+3. Observe the rate of packets received (Rx rate) on Ixia.
+
+Test Case: Performance 1port ordered test
+=========================================
+Description: Execute the performance test with the Ordered_atq stage type in a multi-flow scenario for various core counts.
+
+1. Run the sample application with the command below::
+
+    # ./build/dpdk-eventdev_pipeline -c 0xe00000 -w eventdev_device_bus_id -w device_bus_id -- -w 0xc00000 -n=0 -o --dump
+
+   Parameters:
+
+   EAL options:
+     -c COREMASK         : Hexadecimal bitmask of cores to run on
+     -w, --pci-whitelist : Add a PCI device to the whitelist.
+                           Only the specified PCI devices are used. The
+                           argument format is <[domain:]bus:devid.func>.
+                           This option can be given several times (once
+                           per device).
+
+   Application options:
+     -w, --worker-mask=MASK : Run workers on the CPUs in MASK
+     -n, --packets=N        : Send N packets (default ~32M); 0 means no limit
+     -o, --ordered          : Use ordered scheduling
+     -D, --dump             : Print detailed statistics before exit
+
+2. Use Ixia to send a large number of packets (with the same 5-tuple and with different 5-tuples).
+
+3. Observe the rate of packets received (Rx rate) on Ixia.
+
+Test Case: Performance 2port atomic test
+========================================
+Description: Execute the performance test with the Atomic_atq stage type in a multi-flow scenario for various core counts.
+
+1. Run the sample application with the command below::
+
+    # ./build/dpdk-eventdev_pipeline -c 0xe00000 -w eventdev_device_bus_id -w device0_bus_id -w device1_bus_id -- -w 0xc00000 -n=0 --dump
+
+   Parameters:
+
+   EAL options:
+     -c COREMASK         : Hexadecimal bitmask of cores to run on
+     -w, --pci-whitelist : Add a PCI device to the whitelist.
+                           Only the specified PCI devices are used. The
+                           argument format is <[domain:]bus:devid.func>.
+                           This option can be given several times (once
+                           per device).
+
+   Application options:
+     -w, --worker-mask=MASK : Run workers on the CPUs in MASK
+     -n, --packets=N        : Send N packets (default ~32M); 0 means no limit
+     -D, --dump             : Print detailed statistics before exit
+
+2. Use Ixia to send a large number of packets (with the same 5-tuple and with different 5-tuples).
+
+3. Observe the rate of packets received (Rx rate) on Ixia.
+
+Test Case: Performance 2port parallel test
+==========================================
+Description: Execute the performance test with the Parallel_atq stage type in a multi-flow scenario for various core counts.
+
+1. Run the sample application with the command below::
+
+    # ./build/dpdk-eventdev_pipeline -c 0xe00000 -w eventdev_device_bus_id -w device0_bus_id -w device1_bus_id -- -w 0xc00000 -n=0 -p --dump
+
+   Parameters:
+
+   EAL options:
+     -c COREMASK         : Hexadecimal bitmask of cores to run on
+     -w, --pci-whitelist : Add a PCI device to the whitelist.
+                           Only the specified PCI devices are used. The
+                           argument format is <[domain:]bus:devid.func>.
+                           This option can be given several times (once
+                           per device).
+
+   Application options:
+     -w, --worker-mask=MASK : Run workers on the CPUs in MASK
+     -n, --packets=N        : Send N packets (default ~32M); 0 means no limit
+     -p, --parallel         : Use parallel scheduling
+     -D, --dump             : Print detailed statistics before exit
+
+2. Use Ixia to send a large number of packets (with the same 5-tuple and with different 5-tuples).
+
+3. Observe the rate of packets received (Rx rate) on Ixia.
+
+Test Case: Performance 2port ordered test
+=========================================
+Description: Execute the performance test with the Ordered_atq stage type in a multi-flow scenario for various core counts.
+
+1. Run the sample application with the command below::
+
+    # ./build/dpdk-eventdev_pipeline -c 0xe00000 -w eventdev_device_bus_id -w device0_bus_id -w device1_bus_id -- -w 0xc00000 -n=0 -o --dump
+
+   Parameters:
+
+   EAL options:
+     -c COREMASK         : Hexadecimal bitmask of cores to run on
+     -w, --pci-whitelist : Add a PCI device to the whitelist.
+                           Only the specified PCI devices are used. The
+                           argument format is <[domain:]bus:devid.func>.
+                           This option can be given several times (once
+                           per device).
+
+   Application options:
+     -w, --worker-mask=MASK : Run workers on the CPUs in MASK
+     -n, --packets=N        : Send N packets (default ~32M); 0 means no limit
+     -o, --ordered          : Use ordered scheduling
+     -D, --dump             : Print detailed statistics before exit
+
+2. Use Ixia to send a large number of packets (with the same 5-tuple and with different 5-tuples).
+
+3. Observe the rate of packets received (Rx rate) on Ixia.
+
+Test Case: Performance 4port atomic test
+========================================
+Description: Execute the performance test with the Atomic_atq stage type in a multi-flow scenario for various core counts.
+
+1. Run the sample application with the command below::
+
+    # ./build/dpdk-eventdev_pipeline -c 0xe00000 -w eventdev_device_bus_id -w device0_bus_id -w device1_bus_id -w device2_bus_id -w device3_bus_id -- -w 0xc00000 -n=0 --dump
+
+   Parameters:
+
+   EAL options:
+     -c COREMASK         : Hexadecimal bitmask of cores to run on
+     -w, --pci-whitelist : Add a PCI device to the whitelist.
+                           Only the specified PCI devices are used. The
+                           argument format is <[domain:]bus:devid.func>.
+                           This option can be given several times (once
+                           per device).
+
+   Application options:
+     -w, --worker-mask=MASK : Run workers on the CPUs in MASK
+     -n, --packets=N        : Send N packets (default ~32M); 0 means no limit
+     -D, --dump             : Print detailed statistics before exit
+
+2. Use Ixia to send a large number of packets (with the same 5-tuple and with different 5-tuples).
+
+3. Observe the rate of packets received (Rx rate) on Ixia.
+
+Test Case: Performance 4port parallel test
+==========================================
+Description: Execute the performance test with the Parallel_atq stage type in a multi-flow scenario for various core counts.
+
+1. Run the sample application with the command below::
+
+    # ./build/dpdk-eventdev_pipeline -c 0xe00000 -w eventdev_device_bus_id -w device0_bus_id -w device1_bus_id -w device2_bus_id -w device3_bus_id -- -w 0xc00000 -n=0 -p --dump
+
+   Parameters:
+
+   EAL options:
+     -c COREMASK         : Hexadecimal bitmask of cores to run on
+     -w, --pci-whitelist : Add a PCI device to the whitelist.
+                           Only the specified PCI devices are used. The
+                           argument format is <[domain:]bus:devid.func>.
+                           This option can be given several times (once
+                           per device).
+
+   Application options:
+     -w, --worker-mask=MASK : Run workers on the CPUs in MASK
+     -n, --packets=N        : Send N packets (default ~32M); 0 means no limit
+     -p, --parallel         : Use parallel scheduling
+     -D, --dump             : Print detailed statistics before exit
+
+2. Use Ixia to send a large number of packets (with the same 5-tuple and with different 5-tuples).
+
+3. Observe the rate of packets received (Rx rate) on Ixia.
+
+Test Case: Performance 4port ordered test
+=========================================
+Description: Execute the performance test with the Ordered_atq stage type in a multi-flow scenario for various core counts.
+
+1. Run the sample application with the command below::
+
+    # ./build/dpdk-eventdev_pipeline -c 0xe00000 -w eventdev_device_bus_id -w device0_bus_id -w device1_bus_id -w device2_bus_id -w device3_bus_id -- -w 0xc00000 -n=0 -o --dump
+
+   Parameters:
+
+   EAL options:
+     -c COREMASK         : Hexadecimal bitmask of cores to run on
+     -w, --pci-whitelist : Add a PCI device to the whitelist.
+                           Only the specified PCI devices are used. The
+                           argument format is <[domain:]bus:devid.func>.
+                           This option can be given several times (once
+                           per device).
+
+   Application options:
+     -w, --worker-mask=MASK : Run workers on the CPUs in MASK
+     -n, --packets=N        : Send N packets (default ~32M); 0 means no limit
+     -o, --ordered          : Use ordered scheduling
+     -D, --dump             : Print detailed statistics before exit
+
+2. Use Ixia to send a large number of packets (with the same 5-tuple and with different 5-tuples).
+
+3. Observe the rate of packets received (Rx rate) on Ixia.
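
The Rx rate observed in step 3 can be cross-checked from the Ixia frame counters. A sketch of the arithmetic, with hypothetical numbers (a 10-second window at the 64-byte line rate of a 100 Gb/s port):

```shell
# Sketch: Rx rate in packets per second from a received-frame count
# and a measurement window, using integer arithmetic.
rx_rate_pps() {
    frames=$1
    seconds=$2
    echo $(( frames / seconds ))
}

rx_rate_pps 1488095230 10    # -> 148809523 pps
```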
--
1.8.3.1