test suite reviews and discussions
From: "Tu, Lijuan" <lijuan.tu@intel.com>
To: Thanseerul Haq <thaq@marvell.com>, "dts@dpdk.org" <dts@dpdk.org>
Cc: Faisal Masood <fmasood@marvell.com>,
	Vijaya Bhaskar Annayyolla <avijay@marvell.com>,
	Jerin Jacob Kollanukkaran <jerinj@marvell.com>
Subject: Re: [dts] [PATCH v2] Adding Eventdev_pipeline feature performance Testplan
Date: Wed, 12 Jun 2019 05:34:54 +0000	[thread overview]
Message-ID: <8CE3E05A3F976642AAB0F4675D0AD20E0BABA5F5@SHSMSX101.ccr.corp.intel.com> (raw)
In-Reply-To: <MN2PR18MB2735814294C5CC19F68C0FF7D4EC0@MN2PR18MB2735.namprd18.prod.outlook.com>


Your patch is for test_plans/eventdev_pipeline_perf_test_plan.rst, but the link you provided is test_plans/eventdev_pipeline_test_plan.rst.

These are different files; is there anything I am missing?


From: Thanseerul Haq [mailto:thaq@marvell.com]
Sent: Wednesday, June 12, 2019 1:03 PM
To: Tu, Lijuan <lijuan.tu@intel.com>; dts@dpdk.org
Cc: Faisal Masood <fmasood@marvell.com>; Vijaya Bhaskar Annayyolla <avijay@marvell.com>; Jerin Jacob Kollanukkaran <jerinj@marvell.com>
Subject: Re: [dts] [PATCH v2] Adding Eventdev_pipeline feature performance Testplan

Hi Lijuan,

In the current DTS, test plans and scripts are already available for the eventdev_pipeline functional cases.

Test Links:
https://git.dpdk.org/tools/dts/tree/test_plans/eventdev_pipeline_test_plan.rst
https://git.dpdk.org/tools/dts/tree/tests/TestSuite_eventdev_pipeline.py



Regards,

 - Thanseerul Haq

________________________________
From: Tu, Lijuan <lijuan.tu@intel.com<mailto:lijuan.tu@intel.com>>
Sent: 12 June 2019 08:32
To: Thanseerul Haq; dts@dpdk.org<mailto:dts@dpdk.org>
Cc: Faisal Masood; Vijaya Bhaskar Annayyolla; Jerin Jacob Kollanukkaran
Subject: [EXT] RE: [dts] [PATCH v2] Adding Eventdev_pipeline feature performance Testplan

Hi thaq,
I think you should submit the whole test plan, because there is no test_plans/eventdev_pipeline_test_plan.rst in the current DTS.
Thanks

> -----Original Message-----
> From: dts [mailto:dts-bounces@dpdk.org] On Behalf Of thaq@marvell.com<mailto:thaq@marvell.com>
> Sent: Monday, June 10, 2019 5:01 PM
> To: dts@dpdk.org<mailto:dts@dpdk.org>
> Cc: fmasood@marvell.com<mailto:fmasood@marvell.com>; avijay@marvell.com<mailto:avijay@marvell.com>; jerinj@marvell.com<mailto:jerinj@marvell.com>;
> Thanseerulhaq <thaq@marvell.com<mailto:thaq@marvell.com>>
> Subject: [dts] [PATCH v2] Adding Eventdev_pipeline feature performance
> Testplan
>
> From: Thanseerulhaq <thaq@marvell.com<mailto:thaq@marvell.com>>
>
> Adding testcase for 1/2/4 NIC ports for eventdev features atomic, parallel,
> order stages.
>
> Signed-off-by: Thanseerulhaq <thaq@marvell.com<mailto:thaq@marvell.com>>
> ---
>  test_plans/eventdev_pipeline_perf_test_plan.rst | 58 ++++++++++++-----------
> --
>  1 file changed, 29 insertions(+), 29 deletions(-)
>
> diff --git a/test_plans/eventdev_pipeline_perf_test_plan.rst
> b/test_plans/eventdev_pipeline_perf_test_plan.rst
> index f2b2a7e..619f9a3 100644
> --- a/test_plans/eventdev_pipeline_perf_test_plan.rst
> +++ b/test_plans/eventdev_pipeline_perf_test_plan.rst
> @@ -30,11 +30,11 @@ echo 24 > /proc/sys/vm/nr_hugepages
>
>  Configure limits of Eventdev devices
>  ====================================
> -Set all eventdev devices sso and ssow limits to zero. Then set eventdev
> device under tests sso and ssow limits to non-zero values as per
> cores/queues requriments ::
> +Set all eventdev devices sso and ssow limits to zero. Then set eventdev
> device under tests sso and ssow limits to non-zero values as per
> cores/queues requriments ::
>     echo 0 > /sys/bus/pci/devices/eventdev_device_bus_id/limits/sso
>     echo 0 > /sys/bus/pci/devices/eventdev_device_bus_id/limits/ssow
>
> -Example ::
> +Example ::
>     echo 0 > /sys/bus/pci/devices/eventdev_device_bus_id/limits/tim
>     echo 1 > /sys/bus/pci/devices/eventdev_device_bus_id/limits/npa
>     echo 16 > /sys/bus/pci/devices/eventdev_device_bus_id/limits/sso
> @@ -56,8 +56,8 @@ Description: Execute performance test with Atomic_atq
> type of stage in multi-flo
>          -w, --pci-whitelist  : Add a PCI device in white list.
>                                 Only use the specified PCI devices. The argument format
>                                 is <[domain:]bus:devid.func>. This option can be present
> -                               several times (once per device).
> -        EAL Commands
> +                               several times (once per device).
> +        EAL Commands
>          -w, --worker-mask=core mask : Run worker on CPUs in core mask
>          -n, --packets=N             : Send N packets (default ~32M), 0 implies no
> limit
>          -D, --dump                   Print detailed statistics before exit
> @@ -74,13 +74,13 @@ Description: Execute performance test with
> Parallel_atq type of stage in multi-f
>
>     # ./build/dpdk-eventdev_pipeline -c 0xe00000 -w eventdev_device_bus_id
> -w device_bus_id -- -w 0xc00000 -n=0 -p --dump
>
> -    Parameters:
> +    Parameters:
>          -c, COREMASK         : Hexadecimal bitmask of cores to run on
>          -w, --pci-whitelist  : Add a PCI device in white list.
>                                 Only use the specified PCI devices. The argument format
>                                 is <[domain:]bus:devid.func>. This option can be present
> -                               several times (once per device).
> -        EAL Commands
> +                               several times (once per device).
> +        EAL Commands
>          -w, --worker-mask=core mask : Run worker on CPUs in core mask
>          -n, --packets=N             : Send N packets (default ~32M), 0 implies no
> limit
>          -p, --parallel              : Use parallel scheduling
> @@ -97,14 +97,14 @@ Description: Execute performance test with
> Ordered_atq type of stage in multi-fl  1. Run the sample with below
> command::
>
>     # ./build/dpdk-eventdev_pipeline -c 0xe00000 -w eventdev_device_bus_id
> -w device_bus_id -- -w 0xc00000 -n=0 -o --dump
> -
> -    Parameters:
> +
> +    Parameters:
>          -c, COREMASK         : Hexadecimal bitmask of cores to run on
>          -w, --pci-whitelist  : Add a PCI device in white list.
>                                 Only use the specified PCI devices. The argument format
>                                 is <[domain:]bus:devid.func>. This option can be present
> -                               several times (once per device).
> -        EAL Commands
> +                               several times (once per device).
> +        EAL Commands
>          -w, --worker-mask=core mask : Run worker on CPUs in core mask
>          -n, --packets=N             : Send N packets (default ~32M), 0 implies no
> limit
>          -o, --ordered                Use ordered scheduling
> @@ -127,8 +127,8 @@ Description: Execute performance test with
> Atomic_atq type of stage in multi-flo
>          -w, --pci-whitelist  : Add a PCI device in white list.
>                                 Only use the specified PCI devices. The argument format
>                                 is <[domain:]bus:devid.func>. This option can be present
> -                               several times (once per device).
> -        EAL Commands
> +                               several times (once per device).
> +        EAL Commands
>          -w, --worker-mask=core mask : Run worker on CPUs in core mask
>          -n, --packets=N             : Send N packets (default ~32M), 0 implies no
> limit
>          -D, --dump                   Print detailed statistics before exit
> @@ -145,13 +145,13 @@ Description: Execute performance test with
> Parallel_atq type of stage in multi-f
>
>     # ./build/dpdk-eventdev_pipeline -c 0xe00000 -w eventdev_device_bus_id
> -w device0_bus_id -w device1_bus_id -- -w 0xc00000 -n=0 -p --dump
>
> -    Parameters:
> +    Parameters:
>          -c, COREMASK         : Hexadecimal bitmask of cores to run on
>          -w, --pci-whitelist  : Add a PCI device in white list.
>                                 Only use the specified PCI devices. The argument format
>                                 is <[domain:]bus:devid.func>. This option can be present
> -                               several times (once per device).
> -        EAL Commands
> +                               several times (once per device).
> +        EAL Commands
>          -w, --worker-mask=core mask : Run worker on CPUs in core mask
>          -n, --packets=N             : Send N packets (default ~32M), 0 implies no
> limit
>          -p, --parallel              : Use parallel scheduling
> @@ -168,14 +168,14 @@ Description: Execute performance test with
> Ordered_atq type of stage in multi-fl  1. Run the sample with below
> command::
>
>     # ./build/dpdk-eventdev_pipeline -c 0xe00000 -w eventdev_device_bus_id
> -w device0_bus_id -w device1_bus_id -- -w 0xc00000 -n=0 -o --dump
> -
> -    Parameters:
> +
> +    Parameters:
>          -c, COREMASK         : Hexadecimal bitmask of cores to run on
>          -w, --pci-whitelist  : Add a PCI device in white list.
>                                 Only use the specified PCI devices. The argument format
>                                 is <[domain:]bus:devid.func>. This option can be present
> -                               several times (once per device).
> -        EAL Commands
> +                               several times (once per device).
> +        EAL Commands
>          -w, --worker-mask=core mask : Run worker on CPUs in core mask
>          -n, --packets=N             : Send N packets (default ~32M), 0 implies no
> limit
>          -o, --ordered                Use ordered scheduling
> @@ -198,8 +198,8 @@ Description: Execute performance test with
> Atomic_atq type of stage in multi-flo
>          -w, --pci-whitelist  : Add a PCI device in white list.
>                                 Only use the specified PCI devices. The argument format
>                                 is <[domain:]bus:devid.func>. This option can be present
> -                               several times (once per device).
> -        EAL Commands
> +                               several times (once per device).
> +        EAL Commands
>          -w, --worker-mask=core mask : Run worker on CPUs in core mask
>          -n, --packets=N             : Send N packets (default ~32M), 0 implies no
> limit
>          -D, --dump                   Print detailed statistics before exit
> @@ -216,13 +216,13 @@ Description: Execute performance test with
> Parallel_atq type of stage in multi-f
>
>     # ./build/dpdk-eventdev_pipeline -c 0xe00000 -w eventdev_device_bus_id
> -w device0_bus_id -w device1_bus_id -w device2_bus_id -w device3_bus_id -
> - -w 0xc00000 -n=0 -p --dump
>
> -    Parameters:
> +    Parameters:
>          -c, COREMASK         : Hexadecimal bitmask of cores to run on
>          -w, --pci-whitelist  : Add a PCI device in white list.
>                                 Only use the specified PCI devices. The argument format
>                                 is <[domain:]bus:devid.func>. This option can be present
> -                               several times (once per device).
> -        EAL Commands
> +                               several times (once per device).
> +        EAL Commands
>          -w, --worker-mask=core mask : Run worker on CPUs in core mask
>          -n, --packets=N             : Send N packets (default ~32M), 0 implies no
> limit
>          -p, --parallel              : Use parallel scheduling
> @@ -239,14 +239,14 @@ Description: Execute performance test with
> Ordered_atq type of stage in multi-fl  1. Run the sample with below
> command::
>
>     # ./build/dpdk-eventdev_pipeline -c 0xe00000 -w eventdev_device_bus_id
> -w device0_bus_id -w device1_bus_id -w device2_bus_id -w device3_bus_id -
> - -w 0xc00000 -n=0 -o --dump
> -
> -    Parameters:
> +
> +    Parameters:
>          -c, COREMASK         : Hexadecimal bitmask of cores to run on
>          -w, --pci-whitelist  : Add a PCI device in white list.
>                                 Only use the specified PCI devices. The argument format
>                                 is <[domain:]bus:devid.func>. This option can be present
> -                               several times (once per device).
> -        EAL Commands
> +                               several times (once per device).
> +        EAL Commands
>          -w, --worker-mask=core mask : Run worker on CPUs in core mask
>          -n, --packets=N             : Send N packets (default ~32M), 0 implies no
> limit
>          -o, --ordered                Use ordered scheduling
> --
> 1.8.3.1
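For reference, the sysfs limit configuration quoted in the test plan above can be sketched as a short shell snippet. The real path is /sys/bus/pci/devices/<eventdev_device_bus_id>/limits; here a temporary directory stands in so the steps can be tried safely, and the ssow value is illustrative (the test plan sets it per core/queue requirements):

```shell
# Stand-in for /sys/bus/pci/devices/<eventdev_device_bus_id>/limits;
# on real hardware, point LIMITS at the actual sysfs directory.
LIMITS="${LIMITS:-/tmp/eventdev_limits_demo}"
mkdir -p "$LIMITS"

# Zero out the limits of all eventdev devices first ...
echo 0 > "$LIMITS/sso"
echo 0 > "$LIMITS/ssow"

# ... then grant the device under test its resources, as in the
# test plan's example (ssow=32 is an assumed illustrative value).
echo 0  > "$LIMITS/tim"
echo 1  > "$LIMITS/npa"
echo 16 > "$LIMITS/sso"
echo 32 > "$LIMITS/ssow"

cat "$LIMITS/sso" "$LIMITS/ssow"   # sso=16, ssow=32 after configuration
```

On a real system these writes require root and a bound eventdev device; the zero-then-set order matters because the shared sso/ssow pools must be freed before they can be reassigned.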

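The hexadecimal masks passed to the sample in the quoted commands (-c 0xe00000 for EAL cores, -w 0xc00000 for workers) simply select CPU cores by bit position. A small hypothetical helper makes the derivation explicit:

```shell
# Hypothetical helper: build the hex bitmask selecting the given CPU cores.
core_mask() {
  local mask=0 c
  for c in "$@"; do
    mask=$(( mask | (1 << c) ))   # set the bit for core number c
  done
  printf '0x%x\n' "$mask"
}

core_mask 21 22 23   # EAL -c mask for cores 21-23 -> 0xe00000
core_mask 22 23      # worker -w mask for cores 22-23 -> 0xc00000
```

So the quoted invocations run the application on cores 21-23 and dedicate cores 22-23 to workers, leaving core 21 for the producer/scheduler roles.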

  reply	other threads:[~2019-06-12  5:34 UTC|newest]

Thread overview: 8+ messages
2019-05-31  7:22 [dts] [PATCH] eventdev_pipeline_perf_test_plan.rst: " thaq
2019-06-05  2:31 ` Tu, Lijuan
2019-06-10  9:00 ` [dts] [PATCH v2] " thaq
2019-06-12  3:02   ` Tu, Lijuan
2019-06-12  5:02     ` Thanseerul Haq
2019-06-12  5:34       ` Tu, Lijuan [this message]
2019-06-12  6:38         ` Thanseerul Haq
2019-06-12  7:03           ` Tu, Lijuan
