DPDK patches and discussions
From: "Zhang, Helin" <helin.zhang@intel.com>
To: "dev@dpdk.org" <dev@dpdk.org>
Subject: [dpdk-dev] i40e: Steps and required configurations of how to achieve the best performance!
Date: Wed, 17 Sep 2014 04:18:16 +0000	[thread overview]
Message-ID: <F35DEAC7BCE34641BA9FAC6BCA4A12E70A793414@SHSMSX104.ccr.corp.intel.com> (raw)

Hi all

Since many special configurations are needed to achieve the best performance with DPDK, and we are often asked about them, I'd like to share with all of you the steps and required configurations for achieving the best performance with i40e. I have tried to list everything I am using and have done to get the best performance on i40e, though something might still be missing. So, supplements are welcome!

Please do not ask me for the real performance numbers, as I am not the official channel for publishing them!

1. Hardware Platform:
-- Intel(r) Haswell(r) server
-- Intel(R) Xeon(R) CPU E5-2699 v3 @ 2.30GHz
-- Big enough memory, e.g. 32G, deployed on different memory channels
-- Intel Corporation Ethernet Controller XL710 for 40GbE QSFP+ / Intel Corporation Ethernet Controller X710 for 10GbE SFP+
  -- Make sure it is B0 hardware
  -- Update the firmware to 4.2.4 or newer, as firmware versions can have an impact on performance
-- Make sure the NICs are inserted into correct PCIe Gen3 slots, as PCIe Gen2 cannot provide enough bandwidth
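Whether the NIC actually negotiated a PCIe Gen3 x8 link can be checked from lspci. A sketch (the PCI address 82:00.0 is an example; substitute your own):

```shell
# Find the PCI addresses of the XL710/X710 ports
lspci | grep Ethernet

# Check the negotiated link speed and width (run as root).
# "Speed 8GT/s, Width x8" indicates PCIe Gen3 x8;
# "Speed 5GT/s" would mean the card landed in a Gen2 slot.
lspci -s 82:00.0 -vv | grep LnkSta
```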

2. Software Platform:
-- Fedora 20 with the kernel updated to 3.15.10 (this is what I am using for testing)
  -- GCC 4.8.2
  -- Kernel 3.15.10
-- DPDK 1.7.0 or later version

3. BIOS Configurations:
-- Enhanced Intel Speedstep: Disabled
-- Processor C3: Disabled
-- Processor C6: Disabled
-- Hyper-Threading: Enabled
-- Intel VT-d: Disabled
-- MLC Streamer: Enabled
-- MLC Spatial Prefetcher: Enabled
-- DCU Data Prefetcher: Enabled
-- DCU Instruction Prefetcher: Enabled
-- Direct Cache Access (DCA): Enabled
-- CPU Power and Performance Policy: Performance
-- Memory Power Optimization: Performance
-- Memory RAS and Performance Configuration -> NUMA Optimized: Enabled
-- *Extended Tag: Enabled
Note that 'Extended Tag' might not be visible in some BIOSes; see 'Compile Settings' for enabling it at runtime.
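If the BIOS exposes no 'Extended Tag' option, the current state can at least be inspected with setpci before relying on the compile-time workaround below. A sketch assuming a standard PCIe capability layout (82:00.0 is an example address):

```shell
# Read the PCIe Device Control register (offset 8 in the express capability).
# Bit 8 (mask 0x0100) is "Extended Tag Field Enable".
setpci -s 82:00.0 CAP_EXP+8.w
```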

4. Grub Configurations:
-- Set huge pages, e.g. 'default_hugepagesz=1G hugepagesz=1G hugepages=8'
-- Isolate the CPU cores used for rx/tx from the Linux scheduler, e.g. 'isolcpus=2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17'
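On Fedora these parameters go into GRUB_CMDLINE_LINUX; a sketch (the core list is an example, adjust it to your topology):

```shell
# In /etc/default/grub, append to the existing kernel command line:
#   GRUB_CMDLINE_LINUX="... default_hugepagesz=1G hugepagesz=1G hugepages=8 isolcpus=2-17"

# Regenerate grub.cfg, reboot, then mount hugetlbfs for DPDK to use:
grub2-mkconfig -o /boot/grub2/grub.cfg
mkdir -p /mnt/huge
mount -t hugetlbfs nodev /mnt/huge
```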

5. Compile Settings:
-- Change below configuration items in config files
  CONFIG_RTE_PCI_CONFIG=y
  CONFIG_RTE_PCI_EXTENDED_TAG="on"
  CONFIG_RTE_LIBRTE_I40E_16BYTE_RX_DESC=y
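These items can be flipped in the config file before building. A sketch assuming the DPDK 1.7-era layout, where the defaults are 'n' in config/common_linuxapp:

```shell
# Enable PCI config access, extended tag, and 16-byte rx descriptors,
# then rebuild the target.
sed -i 's/CONFIG_RTE_PCI_CONFIG=n/CONFIG_RTE_PCI_CONFIG=y/' config/common_linuxapp
sed -i 's/CONFIG_RTE_PCI_EXTENDED_TAG=.*/CONFIG_RTE_PCI_EXTENDED_TAG="on"/' config/common_linuxapp
sed -i 's/CONFIG_RTE_LIBRTE_I40E_16BYTE_RX_DESC=n/CONFIG_RTE_LIBRTE_I40E_16BYTE_RX_DESC=y/' config/common_linuxapp
make install T=x86_64-native-linuxapp-gcc
```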

6. Application Command Line Parameters:
-- For 40G ports, make sure to use two ports on two different cards, as a single PCIe Gen3 slot cannot provide 80 Gbps of bandwidth.
-- Make sure the lcores used are on the CPU socket to which the NIC's PCI slot is directly connected.
  -- Run tools/cpu_layout.py to get the lcore/socket topology.
  -- Use 'lspci -q | grep Eth' to check the PCI addresses of the NIC ports.
  -- e.g. for a PCI address of 8x:00.x, use lcores on socket 1; otherwise, use lcores on socket 0.
-- Make sure to use 2, 4 or more queues for l3fwd or testpmd to get better performance; 4 queues are usually enough.
-- e.g. run l3fwd on two ports on two different cards, with using two queues and two lcores per port
  ./l3fwd -c 0x3c0000 -n 4 -w 82:00.0 -w 85:00.0 -- -p 0x3 --config '(0,0,18),(0,1,19),(1,0,20),(1,1,21)'
-- e.g. run testpmd on two ports on two different cards, with using two queues and two lcores per port
  ./testpmd -c 0x3fc0001 -n 4 -w 82:00.0 -w 85:00.0 -- -i --coremask=0x3c0000 --nb-cores=4 --burst=32 --rxfreet=64 --txfreet=64 --txqflags=0xf03 --mbcache=256 --rxq=2 --txq=2 --rss-ip
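The socket a NIC is attached to can also be read directly from sysfs, which is a handy cross-check against cpu_layout.py (82:00.0 is an example address):

```shell
# Print the NUMA node of the device; -1 means the kernel has
# no NUMA information for it.
cat /sys/bus/pci/devices/0000:82:00.0/numa_node

# Compare against the lcore/socket topology:
python tools/cpu_layout.py
```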

7. Stream Configurations on Packet Generator:
-- Create a stream, e.g. on IXIA
  -- Set the Ethernet II type to 0x0800
  -- Set the protocols to IPv4
  -- Do not set any layer 4 protocol, as I use IP RSS
  -- Set correct destination IP addresses according to the l3fwd-lpm/l3fwd-exact_match routing table.
  -- Set the source IP to random; this is very important to ensure packets are distributed across multiple queues.
-- Note that whether the destination MAC matches the NIC port MAC may result in different performance numbers.
-- Note that whether promiscuous mode is enabled may also result in different performance numbers.

Regards,
Helin

