From mboxrd@z Thu Jan  1 00:00:00 1970
Return-Path:
Received: from dpdk.org (dpdk.org [92.243.14.124])
	by dpdk.space (Postfix) with ESMTP id 579A5A00E6
	for ; Tue, 11 Jun 2019 10:35:48 +0200 (CEST)
Received: from [92.243.14.124] (localhost [127.0.0.1])
	by dpdk.org (Postfix) with ESMTP id 1A36C1C2BA;
	Tue, 11 Jun 2019 10:35:48 +0200 (CEST)
Received: from mga11.intel.com (mga11.intel.com [192.55.52.93])
	by dpdk.org (Postfix) with ESMTP id 997D81C276
	for ; Tue, 11 Jun 2019 10:35:46 +0200 (CEST)
X-Amp-Result: SKIPPED(no attachment in message)
X-Amp-File-Uploaded: False
Received: from orsmga006.jf.intel.com ([10.7.209.51])
	by fmsmga102.fm.intel.com with ESMTP/TLS/DHE-RSA-AES256-GCM-SHA384;
	11 Jun 2019 01:35:45 -0700
X-ExtLoop1: 1
Received: from meijuan2.sh.intel.com ([10.67.119.150])
	by orsmga006.jf.intel.com with ESMTP; 11 Jun 2019 01:35:44 -0700
From: hanyingya
To: dts@dpdk.org
Cc: hanyingya
Date: Tue, 11 Jun 2019 16:36:37 +0000
Message-Id: <20190611163637.26029-1-yingyax.han@intel.com>
X-Mailer: git-send-email 2.17.1
Subject: [dts] [PATCH V2] test_plan: add pmd rss performance test plan
X-BeenThere: dts@dpdk.org
X-Mailman-Version: 2.1.15
Precedence: list
List-Id: test suite reviews and discussions
List-Unsubscribe: ,
List-Archive:
List-Post:
List-Help:
List-Subscribe: ,
Errors-To: dts-bounces@dpdk.org
Sender: "dts"

Signed-off-by: hanyingya
---
 test_plans/pmd_test_plan.rst | 64 ++++++++++++++++++++++++++++++++++++
 1 file changed, 64 insertions(+)

diff --git a/test_plans/pmd_test_plan.rst b/test_plans/pmd_test_plan.rst
index 51a77ef..a077cfa 100644
--- a/test_plans/pmd_test_plan.rst
+++ b/test_plans/pmd_test_plan.rst
@@ -169,6 +169,70 @@ The results are printed in the following table:
 | 1518  |         |            |        |                     |
 +-------+---------+------------+--------+---------------------+
+
+Test Case: PMD RSS Performance
+==============================
+
+The RSS feature is designed to improve networking performance by load
+balancing the packets received from a NIC port to multiple NIC RX queues.
+
+In order to get the best PMD RSS performance, the following server
+configuration is required:
+
+- BIOS
+
+  * Intel Hyper-Threading Technology is ENABLED
+  * Other settings: refer to 'Test Case: Single Core Performance Benchmarking'
+
+Run the application using a core mask that matches the thread and core
+settings given in the following table:
+
++----+----------+-----------+-----------------------+
+|    | Rx Ports | Rx Queues | Sockets/Cores/Threads |
++====+==========+===========+=======================+
+| 1  | 1        | 2         | 1S/1C/2T              |
++----+----------+-----------+-----------------------+
+| 2  | 2        | 2         | 1S/2C/1T              |
++----+----------+-----------+-----------------------+
+| 3  | 2        | 2         | 1S/4C/1T              |
++----+----------+-----------+-----------------------+
+| 4  | 2        | 2         | 1S/2C/2T              |
++----+----------+-----------+-----------------------+
+| 5  | 2        | 3         | 1S/3C/2T              |
++----+----------+-----------+-----------------------+
+| 6  | 2        | 3         | 1S/6C/1T              |
++----+----------+-----------+-----------------------+
+
+``Note``: a queue can be handled by only one core, but one core can handle
+multiple queues.
+
+#. Start testpmd and start io forwarding with the above parameters.
+   For example, 1S/1C/2T::
+
+       ./x86_64-native-linuxapp-gcc/app/testpmd -c 0x2000000000000030000000 -n 4 -- -i \
+       --portmask=0x3 --txd=512 --rxd=512 --burst=32 --txpt=36 --txht=0 --txwt=0 \
+       --txfreet=32 --rxfreet=64 --txrst=32 --mbcache=128 --nb-cores=2 --rxq=2 --txq=2
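+
+   The hexadecimal value passed to ``-c`` is an EAL core mask with one bit set
+   per logical core to be used. As an illustration, a minimal sketch (a
+   hypothetical helper, not part of DTS or testpmd) that builds such a mask
+   from a list of lcore IDs::
+
+       # build_coremask.py - hypothetical helper, for illustration only
+       def build_coremask(lcores):
+           """Return a '-c' style hex core mask from a list of logical core IDs."""
+           mask = 0
+           for lcore in lcores:
+               mask |= 1 << lcore          # set the bit for this logical core
+           return hex(mask)
+
+       # e.g. one physical core plus its hyper-threading sibling; the lcore IDs
+       # are illustrative, check the DUT core topology for the real sibling pairs
+       print(build_coremask([4, 5]))       # -> 0x30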
+
+#. Send packets with frame sizes from 64 bytes to 1518 bytes with the Ixia
+   traffic generator and record the performance numbers; the Linerate column
+   is the measured throughput as a percentage of the theoretical maximum for
+   that frame size (a reference calculation is sketched below):
+
+   +------------+----------+----------+-------------------+--------------+
+   | Frame Size | Rx Ports | S/C/T    | Throughput (Mpps) | Linerate (%) |
+   +============+==========+==========+===================+==============+
+   | 64         |          |          |                   |              |
+   +------------+----------+----------+-------------------+--------------+
+   | 128        |          |          |                   |              |
+   +------------+----------+----------+-------------------+--------------+
+   | 256        |          |          |                   |              |
+   +------------+----------+----------+-------------------+--------------+
+   | 512        |          |          |                   |              |
+   +------------+----------+----------+-------------------+--------------+
+   | 1024       |          |          |                   |              |
+   +------------+----------+----------+-------------------+--------------+
+   | 1280       |          |          |                   |              |
+   +------------+----------+----------+-------------------+--------------+
+   | 1518       |          |          |                   |              |
+   +------------+----------+----------+-------------------+--------------+
+
 The memory partial writes are measured with the ``vtbwrun`` application and
 printed in the following table::
-- 
2.17.1
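
For reference when filling in the Linerate column above: an Ethernet frame
occupies its own size plus a fixed 20 bytes of overhead on the wire (7-byte
preamble, 1-byte start-of-frame delimiter and 12-byte inter-frame gap), so the
theoretical maximum frame rate follows directly from the frame size. A minimal
sketch, assuming a 10 Gb/s link (``link_bps`` is an assumption here; adjust it
for the NIC under test)::

    # linerate.py - reference calculation only, not part of the test suite
    def max_frame_rate(frame_size, link_bps=10_000_000_000):
        """Theoretical maximum frames per second at full line rate."""
        return link_bps / ((frame_size + 20) * 8)   # 20 bytes of per-frame overhead

    for size in (64, 128, 256, 512, 1024, 1280, 1518):
        print(f"{size:>5} bytes : {max_frame_rate(size) / 1e6:6.3f} Mpps")

Dividing the measured throughput by this value gives the Linerate percentage.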