From: Thanseerulhaq <thaq@marvell.com>
To: dts@dpdk.org
Cc: Thanseerulhaq
Date: Wed, 12 Jun 2019 12:48:20 +0530
Message-ID: <1560323900-12906-1-git-send-email-thaq@marvell.com>
Subject: [dts] [PATCH] eventdev_pipeline_perf_test_plan.rst: add eventdev_pipeline features performance test plan

From: Thanseerulhaq

Add test cases for 1/2/4 NIC ports covering the eventdev_pipeline features:
atomic, parallel, and ordered stages.

Signed-off-by: Thanseerulhaq
---
 test_plans/eventdev_pipeline_perf_test_plan.rst | 257 ++++++++++++++++++++++++
 1 file changed, 257 insertions(+)
 create mode 100644 test_plans/eventdev_pipeline_perf_test_plan.rst

diff --git a/test_plans/eventdev_pipeline_perf_test_plan.rst b/test_plans/eventdev_pipeline_perf_test_plan.rst
new file mode 100644
index 0000000..619f9a3
--- /dev/null
+++ b/test_plans/eventdev_pipeline_perf_test_plan.rst
@@ -0,0 +1,257 @@
+.. SPDX-License-Identifier: BSD-3-Clause
+   Copyright (C) 2019 Marvell International Ltd.
+
+============================
+Eventdev Pipeline Perf Tests
+============================
+
+Prerequisites
+=============
+
+Each of the 10Gb/25Gb/40Gb/100Gb Ethernet ports of the DUT is directly connected
+in full-duplex to a different port of the peer Ixia traffic generator.
+
+Using TCL commands, the Ixia can be configured to send and receive traffic on a
+given set of ports.
+
+If using vfio, the kernel must be >= 3.6 and VT-d must be enabled in the BIOS.
+When using vfio, use the following commands to load the vfio driver and bind it
+to the device under test::
+
+   modprobe vfio
+   modprobe vfio-pci
+   usertools/dpdk-devbind.py --bind=vfio-pci device_bus_id
+   usertools/dpdk-devbind.py --bind=vfio-pci eventdev_device_bus_id
+
+Create huge pages
+=================
+
+Set up huge pages with the following commands::
+
+   mkdir -p /dev/huge
+   mount -t hugetlbfs none /dev/huge
+   echo 24 > /proc/sys/vm/nr_hugepages
+
+Configure limits of Eventdev devices
+====================================
+
+Set the SSO and SSOW limits of all eventdev devices to zero. Then set the SSO
+and SSOW limits of the eventdev device under test to non-zero values, as per
+the core/queue requirements::
+
+   echo 0 > /sys/bus/pci/devices/eventdev_device_bus_id/limits/sso
+   echo 0 > /sys/bus/pci/devices/eventdev_device_bus_id/limits/ssow
+
+Example::
+
+   echo 0 > /sys/bus/pci/devices/eventdev_device_bus_id/limits/tim
+   echo 1 > /sys/bus/pci/devices/eventdev_device_bus_id/limits/npa
+   echo 16 > /sys/bus/pci/devices/eventdev_device_bus_id/limits/sso
+   echo 32 > /sys/bus/pci/devices/eventdev_device_bus_id/limits/ssow
+
+- ``eventdev_device_bus_id/limits/sso`` : Max limit `256`
+- ``eventdev_device_bus_id/limits/ssow``: Max limit `52`
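+
+The test commands below use the core masks ``0xe00000`` (cores 21-23) and
+``0xc00000`` (worker cores 22-23) as examples, not required values. As a
+minimal sketch (assuming Python is available; the core IDs are illustrative,
+pick cores that exist on the DUT), a mask can be derived from a list of core
+IDs as follows::
+
+   # Build the hexadecimal core mask passed to -c and -w from a list of
+   # core IDs. The core IDs below are examples only.
+   def coremask(cores):
+       mask = 0
+       for core in cores:
+           mask |= 1 << core   # set one bit per core ID
+       return hex(mask)
+
+   print(coremask([21, 22, 23]))  # 0xe00000 -> -c (all application cores)
+   print(coremask([22, 23]))      # 0xc00000 -> -w (worker cores only)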
+
+Test Case: Performance 1port atomic test
+========================================
+Description: Execute a performance test with the Atomic_atq stage type in a
+multi-flow situation for various cores.
+
+1. Run the sample with the command below::
+
+   # ./build/dpdk-eventdev_pipeline -c 0xe00000 -w eventdev_device_bus_id -w device_bus_id -- -w 0xc00000 -n=0 --dump
+
+   Parameters:
+    EAL options:
+     -c COREMASK          : Hexadecimal bitmask of cores to run on
+     -w, --pci-whitelist  : Add a PCI device to the white list.
+                            Only use the specified PCI devices. The argument
+                            format is <[domain:]bus:devid.func>. This option
+                            can be present several times (once per device).
+    Application options (after --):
+     -w, --worker-mask=COREMASK : Run workers on the CPUs in the core mask
+     -n, --packets=N            : Send N packets (default ~32M), 0 implies no limit
+     -D, --dump                 : Print detailed statistics before exit
+
+2. Use Ixia to send a large number of packets (with the same 5-tuple and with
+   different 5-tuples); an illustrative traffic sketch follows this test case.
+
+3. Observe the rate of packets received (Rx-rate) on Ixia.
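+
+Configuring the Ixia streams is outside the scope of this plan. As an
+illustration only, a comparable traffic pattern (one fixed 5-tuple versus many
+distinct 5-tuples, as used in step 2 of every test case) could be generated
+from a Linux peer with Scapy; the interface name and addresses here are
+hypothetical::
+
+   from scapy.all import Ether, IP, UDP, sendp
+
+   # Single flow: every packet carries the same 5-tuple.
+   flow = Ether() / IP(src="10.0.0.1", dst="10.0.0.2") / UDP(sport=1024, dport=1024)
+   sendp(flow, iface="ens1f0", count=100000)
+
+   # Multi-flow: vary the UDP source port to produce 64 distinct 5-tuples;
+   # loop=1 repeats the packet list until interrupted.
+   flows = [Ether() / IP(src="10.0.0.1", dst="10.0.0.2") / UDP(sport=1024 + i, dport=1024)
+            for i in range(64)]
+   sendp(flows, iface="ens1f0", loop=1)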
+
+Test Case: Performance 1port parallel test
+==========================================
+Description: Execute a performance test with the Parallel_atq stage type in a
+multi-flow situation for various cores.
+
+1. Run the sample with the command below::
+
+   # ./build/dpdk-eventdev_pipeline -c 0xe00000 -w eventdev_device_bus_id -w device_bus_id -- -w 0xc00000 -n=0 -p --dump
+
+   Parameters:
+    EAL options:
+     -c COREMASK          : Hexadecimal bitmask of cores to run on
+     -w, --pci-whitelist  : Add a PCI device to the white list.
+                            Only use the specified PCI devices. The argument
+                            format is <[domain:]bus:devid.func>. This option
+                            can be present several times (once per device).
+    Application options (after --):
+     -w, --worker-mask=COREMASK : Run workers on the CPUs in the core mask
+     -n, --packets=N            : Send N packets (default ~32M), 0 implies no limit
+     -p, --parallel             : Use parallel scheduling
+     -D, --dump                 : Print detailed statistics before exit
+
+2. Use Ixia to send a large number of packets (with the same 5-tuple and with
+   different 5-tuples).
+
+3. Observe the rate of packets received (Rx-rate) on Ixia.
+
+Test Case: Performance 1port ordered test
+=========================================
+Description: Execute a performance test with the Ordered_atq stage type in a
+multi-flow situation for various cores.
+
+1. Run the sample with the command below::
+
+   # ./build/dpdk-eventdev_pipeline -c 0xe00000 -w eventdev_device_bus_id -w device_bus_id -- -w 0xc00000 -n=0 -o --dump
+
+   Parameters:
+    EAL options:
+     -c COREMASK          : Hexadecimal bitmask of cores to run on
+     -w, --pci-whitelist  : Add a PCI device to the white list.
+                            Only use the specified PCI devices. The argument
+                            format is <[domain:]bus:devid.func>. This option
+                            can be present several times (once per device).
+    Application options (after --):
+     -w, --worker-mask=COREMASK : Run workers on the CPUs in the core mask
+     -n, --packets=N            : Send N packets (default ~32M), 0 implies no limit
+     -o, --ordered              : Use ordered scheduling
+     -D, --dump                 : Print detailed statistics before exit
+
+2. Use Ixia to send a large number of packets (with the same 5-tuple and with
+   different 5-tuples).
+
+3. Observe the rate of packets received (Rx-rate) on Ixia.
+
+Test Case: Performance 2port atomic test
+========================================
+Description: Execute a performance test with the Atomic_atq stage type in a
+multi-flow situation for various cores.
+
+1. Run the sample with the command below::
+
+   # ./build/dpdk-eventdev_pipeline -c 0xe00000 -w eventdev_device_bus_id -w device0_bus_id -w device1_bus_id -- -w 0xc00000 -n=0 --dump
+
+   Parameters:
+    EAL options:
+     -c COREMASK          : Hexadecimal bitmask of cores to run on
+     -w, --pci-whitelist  : Add a PCI device to the white list.
+                            Only use the specified PCI devices. The argument
+                            format is <[domain:]bus:devid.func>. This option
+                            can be present several times (once per device).
+    Application options (after --):
+     -w, --worker-mask=COREMASK : Run workers on the CPUs in the core mask
+     -n, --packets=N            : Send N packets (default ~32M), 0 implies no limit
+     -D, --dump                 : Print detailed statistics before exit
+
+2. Use Ixia to send a large number of packets (with the same 5-tuple and with
+   different 5-tuples).
+
+3. Observe the rate of packets received (Rx-rate) on Ixia.
+
+Test Case: Performance 2port parallel test
+==========================================
+Description: Execute a performance test with the Parallel_atq stage type in a
+multi-flow situation for various cores.
+
+1. Run the sample with the command below::
+
+   # ./build/dpdk-eventdev_pipeline -c 0xe00000 -w eventdev_device_bus_id -w device0_bus_id -w device1_bus_id -- -w 0xc00000 -n=0 -p --dump
+
+   Parameters:
+    EAL options:
+     -c COREMASK          : Hexadecimal bitmask of cores to run on
+     -w, --pci-whitelist  : Add a PCI device to the white list.
+                            Only use the specified PCI devices. The argument
+                            format is <[domain:]bus:devid.func>. This option
+                            can be present several times (once per device).
+    Application options (after --):
+     -w, --worker-mask=COREMASK : Run workers on the CPUs in the core mask
+     -n, --packets=N            : Send N packets (default ~32M), 0 implies no limit
+     -p, --parallel             : Use parallel scheduling
+     -D, --dump                 : Print detailed statistics before exit
+
+2. Use Ixia to send a large number of packets (with the same 5-tuple and with
+   different 5-tuples).
+
+3. Observe the rate of packets received (Rx-rate) on Ixia.
+
+Test Case: Performance 2port ordered test
+=========================================
+Description: Execute a performance test with the Ordered_atq stage type in a
+multi-flow situation for various cores.
+
+1. Run the sample with the command below::
+
+   # ./build/dpdk-eventdev_pipeline -c 0xe00000 -w eventdev_device_bus_id -w device0_bus_id -w device1_bus_id -- -w 0xc00000 -n=0 -o --dump
+
+   Parameters:
+    EAL options:
+     -c COREMASK          : Hexadecimal bitmask of cores to run on
+     -w, --pci-whitelist  : Add a PCI device to the white list.
+                            Only use the specified PCI devices. The argument
+                            format is <[domain:]bus:devid.func>. This option
+                            can be present several times (once per device).
+    Application options (after --):
+     -w, --worker-mask=COREMASK : Run workers on the CPUs in the core mask
+     -n, --packets=N            : Send N packets (default ~32M), 0 implies no limit
+     -o, --ordered              : Use ordered scheduling
+     -D, --dump                 : Print detailed statistics before exit
+
+2. Use Ixia to send a large number of packets (with the same 5-tuple and with
+   different 5-tuples).
+
+3. Observe the rate of packets received (Rx-rate) on Ixia.
+
+Test Case: Performance 4port atomic test
+========================================
+Description: Execute a performance test with the Atomic_atq stage type in a
+multi-flow situation for various cores.
+
+1. Run the sample with the command below::
+
+   # ./build/dpdk-eventdev_pipeline -c 0xe00000 -w eventdev_device_bus_id -w device0_bus_id -w device1_bus_id -w device2_bus_id -w device3_bus_id -- -w 0xc00000 -n=0 --dump
+
+   Parameters:
+    EAL options:
+     -c COREMASK          : Hexadecimal bitmask of cores to run on
+     -w, --pci-whitelist  : Add a PCI device to the white list.
+                            Only use the specified PCI devices. The argument
+                            format is <[domain:]bus:devid.func>. This option
+                            can be present several times (once per device).
+    Application options (after --):
+     -w, --worker-mask=COREMASK : Run workers on the CPUs in the core mask
+     -n, --packets=N            : Send N packets (default ~32M), 0 implies no limit
+     -D, --dump                 : Print detailed statistics before exit
+
+2. Use Ixia to send a large number of packets (with the same 5-tuple and with
+   different 5-tuples).
+
+3. Observe the rate of packets received (Rx-rate) on Ixia.
+
+Test Case: Performance 4port parallel test
+==========================================
+Description: Execute a performance test with the Parallel_atq stage type in a
+multi-flow situation for various cores.
+
+1. Run the sample with the command below::
+
+   # ./build/dpdk-eventdev_pipeline -c 0xe00000 -w eventdev_device_bus_id -w device0_bus_id -w device1_bus_id -w device2_bus_id -w device3_bus_id -- -w 0xc00000 -n=0 -p --dump
+
+   Parameters:
+    EAL options:
+     -c COREMASK          : Hexadecimal bitmask of cores to run on
+     -w, --pci-whitelist  : Add a PCI device to the white list.
+                            Only use the specified PCI devices. The argument
+                            format is <[domain:]bus:devid.func>. This option
+                            can be present several times (once per device).
+    Application options (after --):
+     -w, --worker-mask=COREMASK : Run workers on the CPUs in the core mask
+     -n, --packets=N            : Send N packets (default ~32M), 0 implies no limit
+     -p, --parallel             : Use parallel scheduling
+     -D, --dump                 : Print detailed statistics before exit
+
+2. Use Ixia to send a large number of packets (with the same 5-tuple and with
+   different 5-tuples).
+
+3. Observe the rate of packets received (Rx-rate) on Ixia.
+
+Test Case: Performance 4port ordered test
+=========================================
+Description: Execute a performance test with the Ordered_atq stage type in a
+multi-flow situation for various cores.
+
+1. Run the sample with the command below::
+
+   # ./build/dpdk-eventdev_pipeline -c 0xe00000 -w eventdev_device_bus_id -w device0_bus_id -w device1_bus_id -w device2_bus_id -w device3_bus_id -- -w 0xc00000 -n=0 -o --dump
+
+   Parameters:
+    EAL options:
+     -c COREMASK          : Hexadecimal bitmask of cores to run on
+     -w, --pci-whitelist  : Add a PCI device to the white list.
+                            Only use the specified PCI devices. The argument
+                            format is <[domain:]bus:devid.func>. This option
+                            can be present several times (once per device).
+    Application options (after --):
+     -w, --worker-mask=COREMASK : Run workers on the CPUs in the core mask
+     -n, --packets=N            : Send N packets (default ~32M), 0 implies no limit
+     -o, --ordered              : Use ordered scheduling
+     -D, --dump                 : Print detailed statistics before exit
+
+2. Use Ixia to send a large number of packets (with the same 5-tuple and with
+   different 5-tuples).
+
+3. Observe the rate of packets received (Rx-rate) on Ixia (see the line-rate
+   sketch below).
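+
+For step 3 of each test case above, the observed Rx-rate can be compared
+against the theoretical line rate of the port. A minimal sketch of the
+standard calculation, assuming the 20 B per-frame overhead (8 B preamble +
+12 B inter-frame gap); the frame sizes are examples::
+
+   # Theoretical maximum packet rate in Mpps for a link speed and frame size.
+   def line_rate_mpps(link_gbps, frame_bytes):
+       # 20 B of per-frame overhead: 8 B preamble + 12 B inter-frame gap.
+       return link_gbps * 1e9 / ((frame_bytes + 20) * 8) / 1e6
+
+   print(line_rate_mpps(10, 64))    # ~14.88 Mpps for 64 B frames on 10 Gb
+   print(line_rate_mpps(100, 64))   # ~148.8 Mpps for 64 B frames on 100 Gb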
--
1.8.3.1