DPDK patches and discussions
* [dpdk-dev] i40e: Steps and required configurations of how to achieve the best performance!
@ 2014-09-17  4:18 Zhang, Helin
  2014-09-17  8:33 ` David Marchand
  0 siblings, 1 reply; 11+ messages in thread
From: Zhang, Helin @ 2014-09-17  4:18 UTC (permalink / raw)
  To: dev

Hi all

As many special configurations are needed to achieve the best performance with DPDK, and we are frequently asked about this, I'd like to share with all of you the steps and configurations required to get the best performance out of i40e. I have tried to list everything I am using and have done, though something might still be missing. So, supplements are welcome!

Please do not ask me for the real performance numbers, as I am not the official channel for publishing them!

1. Hardware Platform:
-- Intel(R) Haswell server
-- Intel(R) Xeon(R) CPU E5-2699 v3 @ 2.30GHz
-- Sufficient memory, e.g. 32 GB, populated across different memory channels
-- Intel Corporation Ethernet Controller XL710 for 40GbE QSFP+ / Intel Corporation Ethernet Controller X710 for 10GbE SFP+
  -- Make sure it is B0 hardware
  -- Update the firmware to 4.2.4 or newer, as the firmware version may have an impact on performance
-- Make sure the NICs are inserted into the correct PCIe Gen3 slots, as PCIe Gen2 cannot provide enough bandwidth

2. Software Platform:
-- Fedora 20 with the kernel updated to 3.15.10; this is what I am using for testing.
  -- GCC 4.8.2
  -- Kernel 3.15.10
-- DPDK 1.7.0 or later version

3. BIOS Configurations:
-- Enhanced Intel Speedstep: Disabled
-- Processor C3: Disabled
-- Processor C6: Disabled
-- Hyper-Threading: Enabled
-- Intel VT-d: Disabled
-- MLC Streamer: Enabled
-- MLC Spatial Prefetcher: Enabled
-- DCU Data Prefetcher: Enabled
-- DCU Instruction Prefetcher: Enabled
-- Direct Cache Access (DCA): Enabled
-- CPU Power and Performance Policy: Performance
-- Memory Power Optimization: Performance
-- Memory RAS and Performance Configuration -> NUMA Optimized: Enabled
-- *Extended Tag: Enabled
Note that 'Extended Tag' might not be visible in some BIOS versions; see 'Compile Settings' for how to enable it at runtime.

4. Grub Configurations:
-- Set huge pages, e.g. 'default_hugepagesz=1G hugepagesz=1G hugepages=8'
-- Isolate the CPU cores to be used for rx/tx from the Linux scheduler, e.g. 'isolcpus=2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17'
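For reference, a sketch of how this looks on a GRUB2-based Fedora install (the file paths and grub2-mkconfig invocation are assumptions; adjust for your distro):

```shell
# /etc/default/grub -- append to the existing GRUB_CMDLINE_LINUX line:
#   GRUB_CMDLINE_LINUX="... default_hugepagesz=1G hugepagesz=1G hugepages=8 isolcpus=2-17"
# Then regenerate the config and reboot:
grub2-mkconfig -o /boot/grub2/grub.cfg
# After reboot, verify the options took effect:
cat /proc/cmdline
grep HugePages_Total /proc/meminfo
```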

5. Compile Settings:
-- Change the configuration items below in the config files
  CONFIG_RTE_PCI_CONFIG=y
  CONFIG_RTE_PCI_EXTENDED_TAG="on"
  CONFIG_RTE_LIBRTE_I40E_16BYTE_RX_DESC=y
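These options can be flipped with sed before building. The sketch below operates on a local sample of the relevant lines; in a real DPDK 1.7 tree they live in a file such as config/common_linuxapp (an assumption about your build target):

```shell
# Create a sample with the default values, then flip them as described above.
cat > common_linuxapp.sample <<'EOF'
CONFIG_RTE_PCI_CONFIG=n
CONFIG_RTE_PCI_EXTENDED_TAG=""
CONFIG_RTE_LIBRTE_I40E_16BYTE_RX_DESC=n
EOF
sed -i -e 's/^CONFIG_RTE_PCI_CONFIG=.*/CONFIG_RTE_PCI_CONFIG=y/' \
       -e 's/^CONFIG_RTE_PCI_EXTENDED_TAG=.*/CONFIG_RTE_PCI_EXTENDED_TAG="on"/' \
       -e 's/^CONFIG_RTE_LIBRTE_I40E_16BYTE_RX_DESC=.*/CONFIG_RTE_LIBRTE_I40E_16BYTE_RX_DESC=y/' \
       common_linuxapp.sample
cat common_linuxapp.sample
```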

6. Application Command Line Parameters:
-- For 40G ports, make sure to use two ports on two different cards, as a single PCIe Gen3 slot cannot provide 80 Gbps of bandwidth.
-- Make sure the lcores to be used are on the CPU socket to which the NIC's PCIe slot is directly connected.
  -- Run tools/cpu_layout.py to get the lcore/socket topology.
  -- Use 'lspci -q | grep Eth' to check the PCI addresses of the NIC ports.
  -- e.g. for a PCI address of 8x:00.x, use lcores on socket 1; otherwise, use lcores on socket 0.
-- Make sure to use 2, 4 or more queues for l3fwd or testpmd to get better performance; 4 queues might be enough.
-- e.g. run l3fwd on two ports on two different cards, using two queues and two lcores per port
  ./l3fwd -c 0x3c0000 -n 4 -w 82:00.0 -w 85:00.0 -- -p 0x3 --config '(0,0,18),(0,1,19),(1,0,20),(1,1,21)'
-- e.g. run testpmd on two ports on two different cards, using two queues and two lcores per port
  ./testpmd -c 0x3fc0001 -n 4 -w 82:00.0 -w 85:00.0 -- -i --coremask=0x3c0000 --nb-cores=4 --burst=32 --rxfreet=64 --txfreet=64 --txqflags=0xf03 --mbcache=256 --rxq=2 --txq=2 --rss-ip
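Two small helpers behind the numbers used above (the sysfs path is standard on Linux; PCI address 82:00.0 is the example port from the commands):

```shell
# Check which NUMA node/socket a port is attached to (run on the target box):
#   cat /sys/bus/pci/devices/0000:82:00.0/numa_node   # 1 => use socket-1 lcores
# Derive the '-c' coremask for lcores 18-21, as used in the l3fwd example:
mask=0
for c in 18 19 20 21; do mask=$(( mask | (1 << c) )); done
printf '0x%x\n' "$mask"    # prints 0x3c0000
```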

7. Stream Configurations on Packet Generator:
-- Create a stream, e.g. on IXIA
  -- Set the Ethernet II type to 0x0800
  -- Set the protocols to IPv4
  -- Do not set any higher-layer protocols, as I use IP RSS
  -- Set correct destination IP address according to l3fwd-lpm/l3fwd-exact_match routing table.
  -- Set the source IP to random; this is very important to ensure the packets are distributed across multiple queues.
-- Note that whether the dest MAC equals the NIC port's MAC may result in different performance numbers.
-- Note that whether promiscuous mode is enabled may result in different performance numbers.

Regards,
Helin


Thread overview: 11+ messages
2014-09-17  4:18 [dpdk-dev] i40e: Steps and required configurations of how to achieve the best performance! Zhang, Helin
2014-09-17  8:33 ` David Marchand
2014-09-17  8:50   ` Zhang, Helin
2014-09-17 14:03     ` David Marchand
2014-09-18  2:39       ` Zhang, Helin
2014-09-18  8:57         ` David Marchand
2014-09-19  3:43           ` Zhang, Helin
2014-10-15  9:41             ` Thomas Monjalon
2014-10-16  0:43               ` Zhang, Helin
2015-02-09 12:12                 ` David Marchand
2015-02-10  0:27                   ` Zhang, Helin
