From: "Tu, Lijuan" <lijuan.tu@intel.com>
To: "thaq@marvell.com" <thaq@marvell.com>, "dts@dpdk.org" <dts@dpdk.org>
Cc: "fmasood@marvell.com" <fmasood@marvell.com>,
	"avijay@marvell.com" <avijay@marvell.com>,
	"jerinj@marvell.com" <jerinj@marvell.com>
Subject: Re: [dts] [PATCH] eventdev_pipeline_perf_test_plan.rst: Adding Eventdev_pipeline features performance Testplan
Date: Wed, 12 Jun 2019 08:13:44 +0000
Message-ID: <8CE3E05A3F976642AAB0F4675D0AD20E0BABB926@SHSMSX101.ccr.corp.intel.com>
In-Reply-To: <1560323900-12906-1-git-send-email-thaq@marvell.com>

Applied, thanks

> -----Original Message-----
> From: dts [mailto:dts-bounces@dpdk.org] On Behalf Of thaq@marvell.com
> Sent: Wednesday, June 12, 2019 3:18 PM
> To: dts@dpdk.org
> Cc: fmasood@marvell.com; avijay@marvell.com; jerinj@marvell.com;
> Thanseerulhaq <thaq@marvell.com>
> Subject: [dts] [PATCH] eventdev_pipeline_perf_test_plan.rst: Adding
> Eventdev_pipeline features performance Testplan
> 
> From: Thanseerulhaq <thaq@marvell.com>
> 
> Adding test cases for 1/2/4 NIC ports for the eventdev_pipeline features:
> atomic, parallel, and ordered stages.
> 
> Signed-off-by: Thanseerulhaq <thaq@marvell.com>
> ---
>  test_plans/eventdev_pipeline_perf_test_plan.rst | 257 ++++++++++++++++++++++++
>  1 file changed, 257 insertions(+)
>  create mode 100644 test_plans/eventdev_pipeline_perf_test_plan.rst
> 
> diff --git a/test_plans/eventdev_pipeline_perf_test_plan.rst b/test_plans/eventdev_pipeline_perf_test_plan.rst
> new file mode 100644
> index 0000000..619f9a3
> --- /dev/null
> +++ b/test_plans/eventdev_pipeline_perf_test_plan.rst
> @@ -0,0 +1,257 @@
> +.. SPDX-License-Identifier: BSD-3-Clause
> +   Copyright (C) 2019 Marvell International Ltd.
> +
> +============================
> +Eventdev Pipeline Perf Tests
> +============================
> +
> +Prerequisites
> +==============
> +
> +Each of the 10Gb/25Gb/40Gb/100Gb Ethernet* ports of the DUT is directly
> +connected in full-duplex to a different port of the peer Ixia traffic
> +generator.
> +
> +Using TCL commands, the Ixia can be configured to send and receive
> +traffic on a given set of ports.
> +
> +If using vfio, the kernel must be >= 3.6 and VT-d must be enabled in
> +the BIOS. When using vfio, use the following commands to load the vfio
> +driver and bind it to the device under test ::
> +
> +   modprobe vfio
> +   modprobe vfio-pci
> +   usertools/dpdk-devbind.py --bind=vfio-pci device_bus_id
> +   usertools/dpdk-devbind.py --bind=vfio-pci eventdev_device_bus_id
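> +
> +Optionally confirm the bindings with the driver status listing (a quick
> +sanity check, not part of the original plan; ``--status`` is a standard
> +dpdk-devbind.py option) ::
> +
> +   # List all network and eventdev devices with their bound drivers;
> +   # the devices bound above should show up under the vfio-pci driver.
> +   usertools/dpdk-devbind.py --status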
> +
> +Create huge pages
> +=================
> +Mount hugetlbfs and reserve huge pages with the following commands ::
> +
> +   mkdir -p /dev/huge
> +   mount -t hugetlbfs none /dev/huge
> +   echo 24 > /proc/sys/vm/nr_hugepages
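> +
> +Optionally verify the reservation (a sanity check, not part of the
> +original plan) ::
> +
> +   # HugePages_Total should report the number reserved above (24).
> +   grep -i huge /proc/meminfo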
> +
> +Configure limits of Eventdev devices
> +====================================
> +Set the sso and ssow limits of all eventdev devices to zero. Then set
> +the sso and ssow limits of the eventdev device under test to non-zero
> +values, as per the cores/queues requirements ::
> +
> +   echo 0 > /sys/bus/pci/devices/eventdev_device_bus_id/limits/sso
> +   echo 0 > /sys/bus/pci/devices/eventdev_device_bus_id/limits/ssow
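> +
> +A minimal shell sketch of the "zero all devices first" step (an
> +illustration only; it assumes every PCI device exposing a ``limits``
> +directory in sysfs is an eventdev device, so adapt the glob to your
> +platform) ::
> +
> +   # Walk every PCI device that exposes a limits directory and zero
> +   # its sso/ssow limits when those files exist.
> +   for dev in /sys/bus/pci/devices/*/limits; do
> +       [ -e "$dev/sso" ] && echo 0 > "$dev/sso"
> +       [ -e "$dev/ssow" ] && echo 0 > "$dev/ssow"
> +   done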
> +
> +Example ::
> +
> +   echo 0 > /sys/bus/pci/devices/eventdev_device_bus_id/limits/tim
> +   echo 1 > /sys/bus/pci/devices/eventdev_device_bus_id/limits/npa
> +   echo 16 > /sys/bus/pci/devices/eventdev_device_bus_id/limits/sso
> +   echo 32 > /sys/bus/pci/devices/eventdev_device_bus_id/limits/ssow
> +
> +- ``eventdev_device_bus_id/limits/sso`` : Max limit `256`
> +- ``eventdev_device_bus_id/limits/ssow``: Max limit `52`
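> +
> +To read back the configured limits (a verification step assuming the
> +same sysfs layout as above) ::
> +
> +   # Each file prints the currently configured limit for the device.
> +   cat /sys/bus/pci/devices/eventdev_device_bus_id/limits/sso
> +   cat /sys/bus/pci/devices/eventdev_device_bus_id/limits/ssow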
> +
> +Test Case: Performance 1port atomic test
> +========================================
> +Description: Execute the performance test with the Atomic_atq stage
> +type in a multi-flow situation for various core counts.
> +
> +1. Run the sample application with the below command (the core masks
> +   are decoded in the note after step 3)::
> +
> +   # ./build/dpdk-eventdev_pipeline -c 0xe00000 -w eventdev_device_bus_id -w device_bus_id -- -w 0xc00000 -n=0 --dump
> +
> +    Parameters:
> +        EAL Commands
> +        -c, COREMASK         : Hexadecimal bitmask of cores to run on
> +        -w, --pci-whitelist  : Add a PCI device in white list.
> +                               Only use the specified PCI devices. The argument format
> +                               is <[domain:]bus:devid.func>. This option can be present
> +                               several times (once per device).
> +        Application Commands
> +        -w, --worker-mask=core mask : Run workers on the CPUs in the core mask
> +        -n, --packets=N             : Send N packets (default ~32M), 0 implies no limit
> +        -D, --dump                  : Print detailed statistics before exit
> +
> +2. Use Ixia to send a large number of packets (with the same 5-tuple
> +   and with different 5-tuples).
> +
> +3. Observe the rate of packets received (Rx-rate) on Ixia.
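> +
> +For reference, the core masks used throughout this plan decode as
> +follows (a worked example assuming CPU numbering starts at 0) ::
> +
> +   -c 0xe00000 : bits 21-23 set -> EAL threads run on cores 21,22,23
> +   -w 0xc00000 : bits 22-23 set -> worker threads run on cores 22,23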
> +
> +Test Case: Performance 1port parallel test
> +==========================================
> +Description: Execute the performance test with the Parallel_atq stage
> +type in a multi-flow situation for various core counts.
> +
> +1. Run the sample application with the below command::
> +
> +   # ./build/dpdk-eventdev_pipeline -c 0xe00000 -w eventdev_device_bus_id -w device_bus_id -- -w 0xc00000 -n=0 -p --dump
> +
> +    Parameters:
> +        EAL Commands
> +        -c, COREMASK         : Hexadecimal bitmask of cores to run on
> +        -w, --pci-whitelist  : Add a PCI device in white list.
> +                               Only use the specified PCI devices. The argument format
> +                               is <[domain:]bus:devid.func>. This option can be present
> +                               several times (once per device).
> +        Application Commands
> +        -w, --worker-mask=core mask : Run workers on the CPUs in the core mask
> +        -n, --packets=N             : Send N packets (default ~32M), 0 implies no limit
> +        -p, --parallel              : Use parallel scheduling
> +        -D, --dump                  : Print detailed statistics before exit
> +
> +2. Use Ixia to send a large number of packets (with the same 5-tuple
> +   and with different 5-tuples).
> +
> +3. Observe the rate of packets received (Rx-rate) on Ixia.
> +
> +Test Case: Performance 1port ordered test
> +=========================================
> +Description: Execute the performance test with the Ordered_atq stage
> +type in a multi-flow situation for various core counts.
> +
> +1. Run the sample application with the below command::
> +
> +   # ./build/dpdk-eventdev_pipeline -c 0xe00000 -w eventdev_device_bus_id -w device_bus_id -- -w 0xc00000 -n=0 -o --dump
> +
> +    Parameters:
> +        EAL Commands
> +        -c, COREMASK         : Hexadecimal bitmask of cores to run on
> +        -w, --pci-whitelist  : Add a PCI device in white list.
> +                               Only use the specified PCI devices. The argument format
> +                               is <[domain:]bus:devid.func>. This option can be present
> +                               several times (once per device).
> +        Application Commands
> +        -w, --worker-mask=core mask : Run workers on the CPUs in the core mask
> +        -n, --packets=N             : Send N packets (default ~32M), 0 implies no limit
> +        -o, --ordered               : Use ordered scheduling
> +        -D, --dump                  : Print detailed statistics before exit
> +
> +2. Use Ixia to send a large number of packets (with the same 5-tuple
> +   and with different 5-tuples).
> +
> +3. Observe the rate of packets received (Rx-rate) on Ixia.
> +
> +Test Case: Performance 2port atomic test
> +========================================
> +Description: Execute the performance test with the Atomic_atq stage
> +type in a multi-flow situation for various core counts.
> +
> +1. Run the sample application with the below command::
> +
> +   # ./build/dpdk-eventdev_pipeline -c 0xe00000 -w eventdev_device_bus_id -w device0_bus_id -w device1_bus_id -- -w 0xc00000 -n=0 --dump
> +
> +    Parameters:
> +        EAL Commands
> +        -c, COREMASK         : Hexadecimal bitmask of cores to run on
> +        -w, --pci-whitelist  : Add a PCI device in white list.
> +                               Only use the specified PCI devices. The argument format
> +                               is <[domain:]bus:devid.func>. This option can be present
> +                               several times (once per device).
> +        Application Commands
> +        -w, --worker-mask=core mask : Run workers on the CPUs in the core mask
> +        -n, --packets=N             : Send N packets (default ~32M), 0 implies no limit
> +        -D, --dump                  : Print detailed statistics before exit
> +
> +2. Use Ixia to send a large number of packets (with the same 5-tuple
> +   and with different 5-tuples).
> +
> +3. Observe the rate of packets received (Rx-rate) on Ixia.
> +
> +Test Case: Performance 2port parallel test
> +==========================================
> +Description: Execute the performance test with the Parallel_atq stage
> +type in a multi-flow situation for various core counts.
> +
> +1. Run the sample application with the below command::
> +
> +   # ./build/dpdk-eventdev_pipeline -c 0xe00000 -w eventdev_device_bus_id -w device0_bus_id -w device1_bus_id -- -w 0xc00000 -n=0 -p --dump
> +
> +    Parameters:
> +        EAL Commands
> +        -c, COREMASK         : Hexadecimal bitmask of cores to run on
> +        -w, --pci-whitelist  : Add a PCI device in white list.
> +                               Only use the specified PCI devices. The argument format
> +                               is <[domain:]bus:devid.func>. This option can be present
> +                               several times (once per device).
> +        Application Commands
> +        -w, --worker-mask=core mask : Run workers on the CPUs in the core mask
> +        -n, --packets=N             : Send N packets (default ~32M), 0 implies no limit
> +        -p, --parallel              : Use parallel scheduling
> +        -D, --dump                  : Print detailed statistics before exit
> +
> +2. Use Ixia to send a large number of packets (with the same 5-tuple
> +   and with different 5-tuples).
> +
> +3. Observe the rate of packets received (Rx-rate) on Ixia.
> +
> +Test Case: Performance 2port ordered test
> +=========================================
> +Description: Execute the performance test with the Ordered_atq stage
> +type in a multi-flow situation for various core counts.
> +
> +1. Run the sample application with the below command::
> +
> +   # ./build/dpdk-eventdev_pipeline -c 0xe00000 -w eventdev_device_bus_id -w device0_bus_id -w device1_bus_id -- -w 0xc00000 -n=0 -o --dump
> +
> +    Parameters:
> +        EAL Commands
> +        -c, COREMASK         : Hexadecimal bitmask of cores to run on
> +        -w, --pci-whitelist  : Add a PCI device in white list.
> +                               Only use the specified PCI devices. The argument format
> +                               is <[domain:]bus:devid.func>. This option can be present
> +                               several times (once per device).
> +        Application Commands
> +        -w, --worker-mask=core mask : Run workers on the CPUs in the core mask
> +        -n, --packets=N             : Send N packets (default ~32M), 0 implies no limit
> +        -o, --ordered               : Use ordered scheduling
> +        -D, --dump                  : Print detailed statistics before exit
> +
> +2. Use Ixia to send a large number of packets (with the same 5-tuple
> +   and with different 5-tuples).
> +
> +3. Observe the rate of packets received (Rx-rate) on Ixia.
> +
> +Test Case: Performance 4port atomic test
> +========================================
> +Description: Execute the performance test with the Atomic_atq stage
> +type in a multi-flow situation for various core counts.
> +
> +1. Run the sample application with the below command::
> +
> +   # ./build/dpdk-eventdev_pipeline -c 0xe00000 -w eventdev_device_bus_id -w device0_bus_id -w device1_bus_id -w device2_bus_id -w device3_bus_id -- -w 0xc00000 -n=0 --dump
> +
> +    Parameters:
> +        EAL Commands
> +        -c, COREMASK         : Hexadecimal bitmask of cores to run on
> +        -w, --pci-whitelist  : Add a PCI device in white list.
> +                               Only use the specified PCI devices. The argument format
> +                               is <[domain:]bus:devid.func>. This option can be present
> +                               several times (once per device).
> +        Application Commands
> +        -w, --worker-mask=core mask : Run workers on the CPUs in the core mask
> +        -n, --packets=N             : Send N packets (default ~32M), 0 implies no limit
> +        -D, --dump                  : Print detailed statistics before exit
> +
> +2. Use Ixia to send a large number of packets (with the same 5-tuple
> +   and with different 5-tuples).
> +
> +3. Observe the rate of packets received (Rx-rate) on Ixia.
> +
> +Test Case: Performance 4port parallel test
> +==========================================
> +Description: Execute the performance test with the Parallel_atq stage
> +type in a multi-flow situation for various core counts.
> +
> +1. Run the sample application with the below command::
> +
> +   # ./build/dpdk-eventdev_pipeline -c 0xe00000 -w eventdev_device_bus_id -w device0_bus_id -w device1_bus_id -w device2_bus_id -w device3_bus_id -- -w 0xc00000 -n=0 -p --dump
> +
> +    Parameters:
> +        EAL Commands
> +        -c, COREMASK         : Hexadecimal bitmask of cores to run on
> +        -w, --pci-whitelist  : Add a PCI device in white list.
> +                               Only use the specified PCI devices. The argument format
> +                               is <[domain:]bus:devid.func>. This option can be present
> +                               several times (once per device).
> +        Application Commands
> +        -w, --worker-mask=core mask : Run workers on the CPUs in the core mask
> +        -n, --packets=N             : Send N packets (default ~32M), 0 implies no limit
> +        -p, --parallel              : Use parallel scheduling
> +        -D, --dump                  : Print detailed statistics before exit
> +
> +2. Use Ixia to send a large number of packets (with the same 5-tuple
> +   and with different 5-tuples).
> +
> +3. Observe the rate of packets received (Rx-rate) on Ixia.
> +
> +Test Case: Performance 4port ordered test
> +=========================================
> +Description: Execute the performance test with the Ordered_atq stage
> +type in a multi-flow situation for various core counts.
> +
> +1. Run the sample application with the below command::
> +
> +   # ./build/dpdk-eventdev_pipeline -c 0xe00000 -w eventdev_device_bus_id -w device0_bus_id -w device1_bus_id -w device2_bus_id -w device3_bus_id -- -w 0xc00000 -n=0 -o --dump
> +
> +    Parameters:
> +        EAL Commands
> +        -c, COREMASK         : Hexadecimal bitmask of cores to run on
> +        -w, --pci-whitelist  : Add a PCI device in white list.
> +                               Only use the specified PCI devices. The argument format
> +                               is <[domain:]bus:devid.func>. This option can be present
> +                               several times (once per device).
> +        Application Commands
> +        -w, --worker-mask=core mask : Run workers on the CPUs in the core mask
> +        -n, --packets=N             : Send N packets (default ~32M), 0 implies no limit
> +        -o, --ordered               : Use ordered scheduling
> +        -D, --dump                  : Print detailed statistics before exit
> +
> +2. Use Ixia to send a large number of packets (with the same 5-tuple
> +   and with different 5-tuples).
> +
> +3. Observe the rate of packets received (Rx-rate) on Ixia.
> --
> 1.8.3.1

