From: "Richardson, Bruce" <bruce.richardson@intel.com>
To: "Zachary.Jen@cas-well.com" <Zachary.Jen@cas-well.com>,
"dev@dpdk.org" <dev@dpdk.org>
Cc: "Alan.Yu@cas-well.com" <Alan.Yu@cas-well.com>
Subject: Re: [dpdk-dev] DPDK Performance issue with l2fwd
Date: Thu, 10 Jul 2014 15:53:31 +0000
Message-ID: <59AF69C657FD0841A61C55336867B5B0343AC6CB@IRSMSX103.ger.corp.intel.com>
In-Reply-To: <53BE5C7D.4090707@cas-well.com>
Hi,
Have you tried running a 16-port test with any other application, for example testpmd?
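For reference, a testpmd run along these lines should exercise all 16 ports in simple forwarding mode (the core mask and the -n memory-channel count below are only illustrative; adjust them to your own system and build path):

    ./testpmd -c 0x1ffff -n 4 -- -i --portmask=0xffff --nb-cores=16

Then at the testpmd> prompt:

    start
    show port stats all

If all 16 ports show Rx/Tx counters moving there, the issue is more likely in the l2fwd setup than in the hardware.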
Regards,
/Bruce
> -----Original Message-----
> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Zachary.Jen@cas-
> well.com
> Sent: Thursday, July 10, 2014 2:29 AM
> To: dev@dpdk.org
> Cc: Alan.Yu@cas-well.com
> Subject: Re: [dpdk-dev] DPDK Performance issue with l2fwd
>
> Hi Alex:
>
> Thanks for your help.
>
> I forgot to mention some details in my original post.
>
> First, I have confirmed that my 82599 NICs are connected at PCIe Gen3 (Gen3 x8) speed.
> The theoretical bandwidth is over 160G in total.
> Hence, the test should be able to reach full speed.
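> (Rough arithmetic behind that, assuming eight dual-port cards, one per Gen3 x8 slot:
>
>     8 lanes x 8 GT/s x 128/130 ~= 63 Gb/s per x8 slot
>
> which is far more than the 20 Gb/s each dual-port 82599 needs.)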
>
> Second, I have already checked the performance without DPDK at packet size 1518 in the
> same environment, and it does reach 160G in total (using IRQ balancing).
> So I was surprised to get this kind of result with DPDK (I also used packet size 1518
> to test DPDK).
>
> BTW, I can already get 120G of throughput with 12 ports, but when I add more than
> 12 ports I can only get 100G.
> Why does the performance drop below 120G? Why do only 10 ports work fine, with no
> Tx or Rx on the others?
> Is this a bug or a limitation in DPDK?
>
> Has anyone done a similar or identical test?
>
>
> On 07/10/2014 04:40 PM, Alex Markuze wrote:
> Hi Zachary,
> Your issue may be PCIe 3.0 bandwidth: with 16 lanes, each slot is limited to
> about 128Gb/s [3].
> Also, AFAIK [1], the CPU is connected to the I/O through a single PCIe slot.
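> As a rough sanity check of that figure (assuming Gen3 at 8 GT/s per lane with
> 128b/130b encoding):
>
>     16 lanes x 8 GT/s x 128/130 ~= 126 Gb/s per x16 slot, before protocol overhead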
>
> Several thoughts that may help you:
>
> 1. You can figure out the max bandwidth by running netperf over the kernel interfaces
> (without DPDK). For 10Gb NICs, each CPU core can comfortably handle a netperf stream
> and its completion interrupts (64K packets and all offloads on).
> With more than 10 NICs I would disable the IRQ balancer and make sure interrupts
> are spread evenly by setting the IRQ affinity manually [2]; see the example
> commands after point 2 below.
> As long as you have one physical core (no hyperthreading) per NIC port, you can
> figure out the max bandwidth you can get with all the NICs.
>
> 2. You can try using 40Gb or 56Gb NICs (Mellanox), if they are available to you.
> In that case, to reach wire speed, you will need to pin each netperf stream and its
> interrupts to different cores, keeping both cores on the same NUMA node
> (check with lscpu); a sketch of this is also shown below.
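>
> For example (the interface name, host address, IRQ number and core IDs below are
> only illustrative; the actual IRQ numbers for the ixgbe queues will differ on your
> system):
>
>     service irqbalance stop               # stop the IRQ balancer
>     grep eth0 /proc/interrupts            # find the IRQs of the port's queues
>     echo 4 > /proc/irq/123/smp_affinity   # pin IRQ 123 to core 2 (CPU mask 0x4)
>
> And to bind a netperf stream to a specific local/remote CPU (netperf's -T option
> does the CPU binding):
>
>     netperf -H 192.168.1.2 -T 2,2 -l 30 -- -m 65536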
>
> Hope this helps.
>
> [1] http://komposter.com.ua/documents/PCI_Express_Base_Specification_Revision_3.0.pdf
> [2] http://h50146.www5.hp.com/products/software/oe/linux/mainstream/support/whitepaper/pdfs/4AA4-9294ENW.pdf
> [3] http://en.wikipedia.org/wiki/PCI_Express#PCI_Express_3.x
>
>
> On Thu, Jul 10, 2014 at 11:07 AM, <Zachary.Jen@cas-well.com> wrote:
> Hey Guys,
>
> Recently I have been using l2fwd to test 160G (16 x 82599 10G ports), but I
> ran into a strange phenomenon in my test.
>
> When I use 12 ports to test l2fwd performance, it works fine
> and achieves 120G.
> But it becomes abnormal when I use more than 12 ports: some of the ports seem
> to go wrong and have no Tx/Rx at all.
> Does anyone know anything about this?
>
> My testing environment:
> 1. E5-2658 v2 (10 cores) x 2
>    http://ark.intel.com/zh-tw/products/76160/Intel-Xeon-Processor-E5-2658-v2-25M-Cache-2_40-GHz
> 2. One core handles one port (in order to get the best performance).
> 3. No QPI-crossing issues.
> 4. l2fwd parameters:
>    4.1 -c 0xF0FF  -- -P 0xF00FF => 120G reached!
>    4.2 -c 0xFF0FF -- -P 0xFF0FF => Failed! Only the first 10 ports work well.
>    4.3 -c 0x3F3FF -- -P 0x3F3FF => Failed! Only the first 10 ports work well.
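>
> As a side note, the lcore IDs selected by a given core mask, and the socket each
> of them sits on, can be double-checked like this (cpu_layout.py ships in the DPDK
> tools/ directory; the exact path may differ between releases):
>
>     # decode which lcore IDs are set in, e.g., mask 0x3F3FF
>     python -c 'print([i for i in range(32) if 0x3F3FF >> i & 1])'
>     # map those lcore IDs to sockets/NUMA nodes
>     ./tools/cpu_layout.py
>     lscpu -e=CPU,NODE,SOCKET,CORE   # alternative without DPDK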
>
> BTW, I have tried lots of parameter sets, and whenever I configure more than
> 12 ports, only the first 10 ports work.
> Otherwise, everything works well.
>
> Can anyone help me solve this issue? Or can DPDK only be configured with 12
> ports or fewer?
> Or is DPDK's maximum throughput 120G?
>
> This email may contain CAS-WELL confidential information. Please do not use or
> disclose it in any way, and delete it if you are not the intended recipient.
>
>
>
> --
> Best Regards,
> Zachary Jen
>
> Software RD
> CAS-WELL Inc.
> 8th Floor, No. 242, Bo-Ai St., Shu-Lin City, Taipei County 238, Taiwan
> Tel: +886-2-7705-8888#6305
> Fax: +886-2-7731-9988
>