From mboxrd@z Thu Jan 1 00:00:00 1970
From: Thanseerulhaq <thaq@marvell.com>
To: dts@dpdk.org
Cc: Thanseerulhaq <thaq@marvell.com>
Date: Fri, 31 May 2019 12:52:44 +0530
Message-ID: <1559287364-19555-1-git-send-email-thaq@marvell.com>
X-Mailer: git-send-email 1.8.3.1
MIME-Version: 1.0
Content-Type: text/plain
Subject: [dts] [PATCH] eventdev_pipeline_perf_test_plan.rst: add eventdev_pipeline performance test plan

From: Thanseerulhaq <thaq@marvell.com>

Add test cases for 1/2/4 NIC ports exercising the eventdev_pipeline
atomic, parallel and ordered stage types.

Signed-off-by: Thanseerulhaq <thaq@marvell.com>
---
 test_plans/eventdev_pipeline_perf_test_plan.rst | 257 ++++++++++++++++++++++++
 1 file changed, 257 insertions(+)
 create mode 100644 test_plans/eventdev_pipeline_perf_test_plan.rst

diff --git a/test_plans/eventdev_pipeline_perf_test_plan.rst b/test_plans/eventdev_pipeline_perf_test_plan.rst
new file mode 100644
index 0000000..f2b2a7e
--- /dev/null
+++ b/test_plans/eventdev_pipeline_perf_test_plan.rst
@@ -0,0 +1,257 @@
+.. SPDX-License-Identifier: BSD-3-Clause
+   Copyright (C) 2019 Marvell International Ltd.
+
+============================
+Eventdev Pipeline Perf Tests
+============================
+
+Prerequisites
+=============
+
+Each of the 10Gb/25Gb/40Gb/100Gb Ethernet ports of the DUT is directly connected
+in full-duplex to a different port of the peer Ixia traffic generator.
+
+Using TCL commands, the Ixia can be configured to send and receive traffic on a
+given set of ports.
+
+If using vfio, the kernel must be >= 3.6 and VT-d must be enabled in the BIOS.
+When using vfio, use the following commands to load the vfio driver and bind it
+to the devices under test::
+
+   modprobe vfio
+   modprobe vfio-pci
+   usertools/dpdk-devbind.py --bind=vfio-pci device_bus_id
+   usertools/dpdk-devbind.py --bind=vfio-pci eventdev_device_bus_id
+
+Create huge pages
+=================
+
+Mount a hugetlbfs filesystem and reserve huge pages::
+
+   mkdir -p /dev/huge
+   mount -t hugetlbfs none /dev/huge
+   echo 24 > /proc/sys/vm/nr_hugepages
+
+Configure limits of Eventdev devices
+====================================
+
+Set the sso and ssow limits of all eventdev devices to zero. Then set the sso
+and ssow limits of the eventdev device under test to non-zero values, as per
+the cores/queues requirements::
+
+   echo 0 > /sys/bus/pci/devices/eventdev_device_bus_id/limits/sso
+   echo 0 > /sys/bus/pci/devices/eventdev_device_bus_id/limits/ssow
+
+Example::
+
+   echo 0 > /sys/bus/pci/devices/eventdev_device_bus_id/limits/tim
+   echo 1 > /sys/bus/pci/devices/eventdev_device_bus_id/limits/npa
+   echo 16 > /sys/bus/pci/devices/eventdev_device_bus_id/limits/sso
+   echo 32 > /sys/bus/pci/devices/eventdev_device_bus_id/limits/ssow
+
+- ``eventdev_device_bus_id/limits/sso`` : Max limit `256`
+- ``eventdev_device_bus_id/limits/ssow``: Max limit `52`
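+
+A minimal sketch of this zero-then-grant sequence, assuming the sysfs
+``limits`` layout shown above and a hypothetical eventdev bus id
+``0002:0e:00.0`` (substitute the DUT's actual id)::
+
+   # Hypothetical bus id of the eventdev device under test.
+   EVENTDEV_BUS_ID=0002:0e:00.0
+
+   # First zero the sso/ssow limits of every device that exposes them.
+   for limits in /sys/bus/pci/devices/*/limits; do
+       [ -e "$limits/sso" ]  && echo 0 > "$limits/sso"
+       [ -e "$limits/ssow" ] && echo 0 > "$limits/ssow"
+   done
+
+   # Then grant the device under test what it needs, staying within the
+   # documented maximums (256 sso, 52 ssow).
+   echo 16 > /sys/bus/pci/devices/$EVENTDEV_BUS_ID/limits/sso
+   echo 32 > /sys/bus/pci/devices/$EVENTDEV_BUS_ID/limits/ssow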
+
+Test Case: Performance 1port atomic test
+========================================
+Description: Execute the performance test with the Atomic_atq stage type in a
+multi-flow situation for various cores.
+
+1. Run the sample with the following command::
+
+      # ./build/dpdk-eventdev_pipeline -c 0xe00000 -w eventdev_device_bus_id -w device_bus_id -- -w 0xc00000 -n=0 --dump
+
+   Parameters:
+
+      EAL parameters:
+         -c COREMASK         : Hexadecimal bitmask of cores to run on
+         -w, --pci-whitelist : Add a PCI device to the white list. Only use the
+                               specified PCI devices. The argument format is
+                               <[domain:]bus:devid.func>. This option can be
+                               given several times (once per device).
+
+      Application parameters:
+         -w, --worker-mask=COREMASK : Run workers on the CPUs in the core mask
+         -n, --packets=N            : Send N packets (default ~32M), 0 implies no limit
+         -D, --dump                 : Print detailed statistics before exit
+
+2. Use Ixia to send a huge number of packets (with the same 5-tuple and with
+   different 5-tuples).
+
+3. Observe the rate of packets received (Rx-rate) on Ixia.
+
+Test Case: Performance 1port parallel test
+==========================================
+Description: Execute the performance test with the Parallel_atq stage type in a
+multi-flow situation for various cores.
+
+1. Run the sample with the following command::
+
+      # ./build/dpdk-eventdev_pipeline -c 0xe00000 -w eventdev_device_bus_id -w device_bus_id -- -w 0xc00000 -n=0 -p --dump
+
+   Parameters:
+
+      EAL parameters:
+         -c COREMASK         : Hexadecimal bitmask of cores to run on
+         -w, --pci-whitelist : Add a PCI device to the white list. Only use the
+                               specified PCI devices. The argument format is
+                               <[domain:]bus:devid.func>. This option can be
+                               given several times (once per device).
+
+      Application parameters:
+         -w, --worker-mask=COREMASK : Run workers on the CPUs in the core mask
+         -n, --packets=N            : Send N packets (default ~32M), 0 implies no limit
+         -p, --parallel             : Use parallel scheduling
+         -D, --dump                 : Print detailed statistics before exit
+
+2. Use Ixia to send a huge number of packets (with the same 5-tuple and with
+   different 5-tuples).
+
+3. Observe the rate of packets received (Rx-rate) on Ixia.
+
+Test Case: Performance 1port ordered test
+=========================================
+Description: Execute the performance test with the Ordered_atq stage type in a
+multi-flow situation for various cores.
+
+1. Run the sample with the following command::
+
+      # ./build/dpdk-eventdev_pipeline -c 0xe00000 -w eventdev_device_bus_id -w device_bus_id -- -w 0xc00000 -n=0 -o --dump
+
+   Parameters:
+
+      EAL parameters:
+         -c COREMASK         : Hexadecimal bitmask of cores to run on
+         -w, --pci-whitelist : Add a PCI device to the white list. Only use the
+                               specified PCI devices. The argument format is
+                               <[domain:]bus:devid.func>. This option can be
+                               given several times (once per device).
+
+      Application parameters:
+         -w, --worker-mask=COREMASK : Run workers on the CPUs in the core mask
+         -n, --packets=N            : Send N packets (default ~32M), 0 implies no limit
+         -o, --ordered              : Use ordered scheduling
+         -D, --dump                 : Print detailed statistics before exit
+
+2. Use Ixia to send a huge number of packets (with the same 5-tuple and with
+   different 5-tuples).
+
+3. Observe the rate of packets received (Rx-rate) on Ixia.
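+
+Every command in this plan uses ``-c 0xe00000`` for the EAL cores and
+``-w 0xc00000`` for the workers. As a sanity check when adapting the plan to
+other core counts, the masks can be decoded with plain bit arithmetic (a
+python3 one-liner, no DPDK assumptions)::
+
+   # -c 0xe00000 -> EAL runs on cores 21, 22 and 23
+   python3 -c "m = 0xe00000; print([i for i in range(64) if m >> i & 1])"
+   # [21, 22, 23]
+
+   # -- -w 0xc00000 -> workers run on cores 22 and 23
+   python3 -c "m = 0xc00000; print([i for i in range(64) if m >> i & 1])"
+   # [22, 23]
+
+When changing the number of cores, keep the worker mask a subset of the EAL
+coremask, since the workers are launched on EAL lcores.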
+
+Test Case: Performance 2port atomic test
+========================================
+Description: Execute the performance test with the Atomic_atq stage type in a
+multi-flow situation for various cores.
+
+1. Run the sample with the following command::
+
+      # ./build/dpdk-eventdev_pipeline -c 0xe00000 -w eventdev_device_bus_id -w device0_bus_id -w device1_bus_id -- -w 0xc00000 -n=0 --dump
+
+   Parameters:
+
+      EAL parameters:
+         -c COREMASK         : Hexadecimal bitmask of cores to run on
+         -w, --pci-whitelist : Add a PCI device to the white list. Only use the
+                               specified PCI devices. The argument format is
+                               <[domain:]bus:devid.func>. This option can be
+                               given several times (once per device).
+
+      Application parameters:
+         -w, --worker-mask=COREMASK : Run workers on the CPUs in the core mask
+         -n, --packets=N            : Send N packets (default ~32M), 0 implies no limit
+         -D, --dump                 : Print detailed statistics before exit
+
+2. Use Ixia to send a huge number of packets (with the same 5-tuple and with
+   different 5-tuples).
+
+3. Observe the rate of packets received (Rx-rate) on Ixia.
+
+Test Case: Performance 2port parallel test
+==========================================
+Description: Execute the performance test with the Parallel_atq stage type in a
+multi-flow situation for various cores.
+
+1. Run the sample with the following command::
+
+      # ./build/dpdk-eventdev_pipeline -c 0xe00000 -w eventdev_device_bus_id -w device0_bus_id -w device1_bus_id -- -w 0xc00000 -n=0 -p --dump
+
+   Parameters:
+
+      EAL parameters:
+         -c COREMASK         : Hexadecimal bitmask of cores to run on
+         -w, --pci-whitelist : Add a PCI device to the white list. Only use the
+                               specified PCI devices. The argument format is
+                               <[domain:]bus:devid.func>. This option can be
+                               given several times (once per device).
+
+      Application parameters:
+         -w, --worker-mask=COREMASK : Run workers on the CPUs in the core mask
+         -n, --packets=N            : Send N packets (default ~32M), 0 implies no limit
+         -p, --parallel             : Use parallel scheduling
+         -D, --dump                 : Print detailed statistics before exit
+
+2. Use Ixia to send a huge number of packets (with the same 5-tuple and with
+   different 5-tuples).
+
+3. Observe the rate of packets received (Rx-rate) on Ixia.
+
+Test Case: Performance 2port ordered test
+=========================================
+Description: Execute the performance test with the Ordered_atq stage type in a
+multi-flow situation for various cores.
+
+1. Run the sample with the following command::
+
+      # ./build/dpdk-eventdev_pipeline -c 0xe00000 -w eventdev_device_bus_id -w device0_bus_id -w device1_bus_id -- -w 0xc00000 -n=0 -o --dump
+
+   Parameters:
+
+      EAL parameters:
+         -c COREMASK         : Hexadecimal bitmask of cores to run on
+         -w, --pci-whitelist : Add a PCI device to the white list. Only use the
+                               specified PCI devices. The argument format is
+                               <[domain:]bus:devid.func>. This option can be
+                               given several times (once per device).
+
+      Application parameters:
+         -w, --worker-mask=COREMASK : Run workers on the CPUs in the core mask
+         -n, --packets=N            : Send N packets (default ~32M), 0 implies no limit
+         -o, --ordered              : Use ordered scheduling
+         -D, --dump                 : Print detailed statistics before exit
+
+2. Use Ixia to send a huge number of packets (with the same 5-tuple and with
+   different 5-tuples).
+
+3. Observe the rate of packets received (Rx-rate) on Ixia.
+
+Test Case: Performance 4port atomic test
+========================================
+Description: Execute the performance test with the Atomic_atq stage type in a
+multi-flow situation for various cores.
+
+1. Run the sample with the following command::
+
+      # ./build/dpdk-eventdev_pipeline -c 0xe00000 -w eventdev_device_bus_id -w device0_bus_id -w device1_bus_id -w device2_bus_id -w device3_bus_id -- -w 0xc00000 -n=0 --dump
+
+   Parameters:
+
+      EAL parameters:
+         -c COREMASK         : Hexadecimal bitmask of cores to run on
+         -w, --pci-whitelist : Add a PCI device to the white list. Only use the
+                               specified PCI devices. The argument format is
+                               <[domain:]bus:devid.func>. This option can be
+                               given several times (once per device).
+
+      Application parameters:
+         -w, --worker-mask=COREMASK : Run workers on the CPUs in the core mask
+         -n, --packets=N            : Send N packets (default ~32M), 0 implies no limit
+         -D, --dump                 : Print detailed statistics before exit
+
+2. Use Ixia to send a huge number of packets (with the same 5-tuple and with
+   different 5-tuples).
+
+3. Observe the rate of packets received (Rx-rate) on Ixia.
+
+Test Case: Performance 4port parallel test
+==========================================
+Description: Execute the performance test with the Parallel_atq stage type in a
+multi-flow situation for various cores.
+
+1. Run the sample with the following command::
+
+      # ./build/dpdk-eventdev_pipeline -c 0xe00000 -w eventdev_device_bus_id -w device0_bus_id -w device1_bus_id -w device2_bus_id -w device3_bus_id -- -w 0xc00000 -n=0 -p --dump
+
+   Parameters:
+
+      EAL parameters:
+         -c COREMASK         : Hexadecimal bitmask of cores to run on
+         -w, --pci-whitelist : Add a PCI device to the white list. Only use the
+                               specified PCI devices. The argument format is
+                               <[domain:]bus:devid.func>. This option can be
+                               given several times (once per device).
+
+      Application parameters:
+         -w, --worker-mask=COREMASK : Run workers on the CPUs in the core mask
+         -n, --packets=N            : Send N packets (default ~32M), 0 implies no limit
+         -p, --parallel             : Use parallel scheduling
+         -D, --dump                 : Print detailed statistics before exit
+
+2. Use Ixia to send a huge number of packets (with the same 5-tuple and with
+   different 5-tuples).
+
+3. Observe the rate of packets received (Rx-rate) on Ixia.
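+
+Steps 2-3 of every test case drive line-rate traffic from Ixia. When an Ixia
+port is not available for a quick functional check, the same traffic pattern
+(one fixed 5-tuple versus many distinct 5-tuples) can be approximated in
+software with Scapy; a rough sketch, assuming a hypothetical transmit
+interface ``eth1`` wired to one DUT port::
+
+   from scapy.all import Ether, IP, UDP, sendp
+
+   # Single flow: every packet carries the same 5-tuple.
+   single = Ether() / IP(src="192.0.2.1", dst="198.51.100.1") / UDP(sport=1024, dport=2048)
+
+   # Multi-flow: vary the UDP source port to produce distinct 5-tuples.
+   multi = [Ether() / IP(src="192.0.2.1", dst="198.51.100.1") / UDP(sport=1024 + i, dport=2048)
+            for i in range(1024)]
+
+   sendp(single, iface="eth1", count=100000)   # same 5-tuple
+   sendp(multi, iface="eth1")                  # different 5-tuples
+
+A software generator cannot saturate 10Gb+ links, so the Rx-rate numbers
+reported for these test cases should always come from Ixia.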
+
+Test Case: Performance 4port ordered test
+=========================================
+Description: Execute the performance test with the Ordered_atq stage type in a
+multi-flow situation for various cores.
+
+1. Run the sample with the following command::
+
+      # ./build/dpdk-eventdev_pipeline -c 0xe00000 -w eventdev_device_bus_id -w device0_bus_id -w device1_bus_id -w device2_bus_id -w device3_bus_id -- -w 0xc00000 -n=0 -o --dump
+
+   Parameters:
+
+      EAL parameters:
+         -c COREMASK         : Hexadecimal bitmask of cores to run on
+         -w, --pci-whitelist : Add a PCI device to the white list. Only use the
+                               specified PCI devices. The argument format is
+                               <[domain:]bus:devid.func>. This option can be
+                               given several times (once per device).
+
+      Application parameters:
+         -w, --worker-mask=COREMASK : Run workers on the CPUs in the core mask
+         -n, --packets=N            : Send N packets (default ~32M), 0 implies no limit
+         -o, --ordered              : Use ordered scheduling
+         -D, --dump                 : Print detailed statistics before exit
+
+2. Use Ixia to send a huge number of packets (with the same 5-tuple and with
+   different 5-tuples).
+
+3. Observe the rate of packets received (Rx-rate) on Ixia.
--
1.8.3.1