DPDK patches and discussions
From: Emre Eraltan <emre.eraltan@6wind.com>
To: Shinae Woo <shinae2012@gmail.com>
Cc: dev@dpdk.org
Subject: Re: [dpdk-dev] Performance does not scale with multiple ports
Date: Mon, 27 May 2013 20:15:23 -0700	[thread overview]
Message-ID: <51A4214B.8040703@6wind.com> (raw)
In-Reply-To: <CA+f=Zzvyfgu8GcdP4R2W6qi0gfEpDhdEctHnhxmqYuTn8U+n3A@mail.gmail.com>

Hello Shinae,

Did you try the testpmd tool with multiple queues per port? It
gives you more flexibility than the l2fwd app.

You need to trigger the RSS feature of the NIC by sending different
streams (for instance by changing the destination port, or any other
field of the 5-tuple). This will load-balance your packets among
several queues so that you can poll each queue from a different
core. Otherwise, you will use only one core (or thread if HT is
enabled) per port on the RX side.
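To make the idea concrete, here is a minimal software sketch (not the actual Toeplitz hash the 82599 implements in hardware): RSS boils down to hashing the packet's 5-tuple and using the result to select an RX queue, so any flow that differs in even one field (such as the destination port) can land on a different queue, and hence a different core. The `mix32` mixer and the struct layout below are illustrative choices, not DPDK APIs.

```c
#include <stdint.h>

/* Illustrative 5-tuple of a flow. Real NICs extract these fields
 * from the IP and TCP/UDP headers of each received packet. */
struct five_tuple {
    uint32_t src_ip, dst_ip;
    uint16_t src_port, dst_port;
    uint8_t  proto;
};

/* A generic 32-bit integer mixer (stand-in for the hardware hash). */
static uint32_t mix32(uint32_t h)
{
    h ^= h >> 16; h *= 0x7feb352dU;
    h ^= h >> 15; h *= 0x846ca68bU;
    h ^= h >> 16;
    return h;
}

/* Map a flow onto one of n_queues RX queues. Changing any field of
 * the 5-tuple changes the hash, so distinct flows spread across
 * queues -- which is what lets several cores share one port's load. */
unsigned rss_queue(const struct five_tuple *t, unsigned n_queues)
{
    uint32_t h = t->src_ip;
    h = mix32(h ^ t->dst_ip);
    h = mix32(h ^ (((uint32_t)t->src_port << 16) | t->dst_port));
    h = mix32(h ^ t->proto);
    return h % n_queues;
}
```

The corollary for benchmarking is the point above: if every generated packet carries the same 5-tuple, all of them hash to the same queue and a single core does all the RX work, no matter how many queues you configure.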

Best Regards,
Emre

-- 
Emre ERALTAN
6WIND Field Application Engineer


On 27/05/2013 20:05, Shinae Woo wrote:
> Thanks for sharing Naoto.
>
> So in your experiments, the forwarding performance still does not
> reach line rate.
>
> Your perf record shows that the CPU spends most of its time polling
> for received packets, with no other heavy operation.
> Even though the application polls for packets as fast as it can,
> the forwarder still misses some packets somewhere outside the
> application itself.
>
> The DPDK documentation reports 160 Mpps forwarding performance on 2
> sockets, but I can only reach 13 Mpps with 2 ports.
> Even doubling the number of ports to 4, the performance is still
> less than 17 Mpps.
>
> I want to know where the bottleneck lies in my environment, or
> how I can reproduce the performance figures DPDK published.
>
> Thank you,
> Shinae
>
>
>
> On Tue, May 28, 2013 at 11:30 AM, Naoto MATSUMOTO 
> <n-matsumoto@sakura.ad.jp <mailto:n-matsumoto@sakura.ad.jp>> wrote:
>
>
>     FYI: Disruptive IP Networking with Intel DPDK on Linux
>     http://slidesha.re/SeVFZo
>
>
>     On Tue, 28 May 2013 11:26:30 +0900
>     Shinae Woo <shinae2012@gmail.com <mailto:shinae2012@gmail.com>> wrote:
>
>     > Hello, all.
>     >
>     > I am playing with the dpdk-1.2.3r1 examples.
>     >
>     > But I cannot achieve line-rate packet receive performance,
>     > and the performance does not scale with multiple ports.
>     >
>     > For example, with the l2fwd example I have tested two cases,
>     > 2 ports and 4 ports, using the following command lines:
>     >
>     > ./build/l2fwd -cf -n3 -- -p3
>     > ./build/l2fwd -cf -n3 -- -pf
>     >
>     > But in both cases, the aggregate performance does not scale.
>     >
>     > == experiments environments ==
>     > - Two Intel 82599 NICs (total 4 ports)
>     > - Intel Xeon X5690  @ 3.47GHz * 2 (total 12 cores)
>     > - 1024 * 2MB hugepages
>     > - Linux 2.6.38-15-server
>     > - Each port receives 10 Gbps of traffic of 64-byte packets,
>     >   i.e. 14.88 Mpps.
>     >
>     > *1. Packet forwarding performance*
>     >
>     > In the 2-port case, the receive performance is 13 Mpps;
>     > in the 4-port case, it is not 26 Mpps but only 16.8 Mpps.
>     >
>     > Port statistics ====================================
>     > Statistics for port 0 ------------------------------
>     > Packets sent:                  4292256
>     > Packets received:              6517396
>     > Packets dropped:               2224776
>     > Statistics for port 1 ------------------------------
>     > Packets sent:                  4291840
>     > Packets received:              6517044
>     > Packets dropped:               2225556
>     > Aggregate statistics ===============================
>     > Total packets sent:            8584128
>     > Total packets received:       13034472
>     > Total packets dropped:         4450332
>     > ====================================================
>     >
>     > Port statistics ====================================
>     > Statistics for port 0 ------------------------------
>     > Packets sent:                  1784064
>     > Packets received:              2632700
>     > Packets dropped:                848128
>     > Statistics for port 1 ------------------------------
>     > Packets sent:                  1784104
>     > Packets received:              2632196
>     > Packets dropped:                848596
>     > Statistics for port 2 ------------------------------
>     > Packets sent:                  3587616
>     > Packets received:              5816344
>     > Packets dropped:               2200176
>     > Statistics for port 3 ------------------------------
>     > Packets sent:                  3587712
>     > Packets received:              5787848
>     > Packets dropped:               2228684
>     > Aggregate statistics ===============================
>     > Total packets sent:           10743560
>     > Total packets received:       16869152
>     > Total packets dropped:         6125608
>     > ====================================================
>     >
>     > *2. Packet receiving performance*
>     > I modified the code to only receive packets (no forwarding);
>     > the performance still does not scale: 13.3 Mpps and 18 Mpps
>     > respectively.
>     >
>     > Port statistics ====================================
>     > Statistics for port 0 ------------------------------
>     > Packets sent:                        0
>     > Packets received:              6678860
>     > Packets dropped:                     0
>     > Statistics for port 1 ------------------------------
>     > Packets sent:                        0
>     > Packets received:              6646120
>     > Packets dropped:                     0
>     > Aggregate statistics ===============================
>     > Total packets sent:                  0
>     > Total packets received:       13325012
>     > Total packets dropped:               0
>     > ====================================================
>     >
>     > Port statistics ====================================
>     > Statistics for port 0 ------------------------------
>     > Packets sent:                        0
>     > Packets received:              3129624
>     > Packets dropped:                     0
>     > Statistics for port 1 ------------------------------
>     > Packets sent:                        0
>     > Packets received:              3131292
>     > Packets dropped:                     0
>     > Statistics for port 2 ------------------------------
>     > Packets sent:                        0
>     > Packets received:              6260908
>     > Packets dropped:                     0
>     > Statistics for port 3 ------------------------------
>     > Packets sent:                        0
>     > Packets received:              6238764
>     > Packets dropped:                     0
>     > Aggregate statistics ===============================
>     > Total packets sent:                  0
>     > Total packets received:       18760640
>     > Total packets dropped:               0
>     > ====================================================
>     >
>     > My questions are:
>     > 1. How can I achieve the full 14.88 Mpps receive rate on each port?
>     >    What might be the bottleneck in my current environment?
>     > 2. Why does the performance not scale with multiple ports?
>     >    I expected that doubling the ports would double the receive
>     >    performance, but it does not. I am curious what is limiting
>     >    the packet receive performance.
>     >
>     > Thanks,
>     > Shinae
>
>     --
>     SAKURA Internet Inc. / Senior Researcher
>     Naoto MATSUMOTO <n-matsumoto@sakura.ad.jp
>     <mailto:n-matsumoto@sakura.ad.jp>>
>     SAKURA Internet Research Center <http://research.sakura.ad.jp/>
>
>

  reply	other threads:[~2013-05-28  3:15 UTC|newest]

Thread overview: 8+ messages
2013-05-28  2:26 Shinae Woo
2013-05-28  2:30 ` Naoto MATSUMOTO
2013-05-28  3:05   ` Shinae Woo
2013-05-28  3:15     ` Emre Eraltan [this message]
2013-05-28  3:29       ` Stephen Hemminger
2013-05-28  4:00         ` Shinae Woo
2013-05-29  3:09     ` Naoto MATSUMOTO
2013-05-28  9:22 ` Thomas Monjalon
