DPDK usage discussions
* [dpdk-users] multi core performance problem
@ 2016-07-04 10:03 Sepehr Nazary
  2016-07-04 17:34 ` Muhammad Zain-ul-Abideen
  0 siblings, 1 reply; 3+ messages in thread
From: Sepehr Nazary @ 2016-07-04 10:03 UTC (permalink / raw)
  To: users

Hi,
I ran the DPDK multi_process (symmetric_mp) example and got the following
results:

Number of cores    Throughput
 1                 9986
 2                 9987
 3                 9808
 4                 9988
 5                 9645
 6                 9171
 7                 9030
 8                 8517
 9                 8537
10                 8415
11                 7832
12                 7985
13                 7569
14                 7285
15                 7624
16                 6994

I run pktgen with range enabled and a packet size of 64 bytes.
I have verified that RSS works correctly and that packets are distributed
uniformly across the cores. There are no packet drops either.
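
Roughly, the per-queue counters can be read like this (only a sketch, not the
exact code I use; port 0 and the queue count are placeholders):

/*
 * Sketch only: read the per-RX-queue counters of one port to see how
 * evenly RSS spreads the traffic.  Port id and nb_queues are placeholders.
 * Some PMDs need rte_eth_dev_set_rx_queue_stats_mapping() before the
 * per-queue counters are populated.
 */
#include <stdio.h>
#include <inttypes.h>
#include <rte_ethdev.h>

static void
dump_rss_spread(uint8_t port_id, unsigned int nb_queues)
{
        struct rte_eth_stats stats;
        unsigned int q;

        rte_eth_stats_get(port_id, &stats);
        for (q = 0; q < nb_queues && q < RTE_ETHDEV_QUEUE_STAT_CNTRS; q++)
                printf("port %u rxq %u: %" PRIu64 " packets\n",
                       (unsigned int)port_id, q, stats.q_ipackets[q]);
}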

Why does the performance decrease as the number of cores grows?

Configuration:
dpdk-2.2.0, pktgen 3.0.0
NIC model: 82599ES 10-Gigabit
CPU model: Intel(R) Xeon(R) CPU E5-2699 v3 @ 2.30GHz
Linux version 3.16.0-4-amd64
Two NUMA nodes:
socket 0: 16 GB RAM, 18 cores, 2 NICs, runs the DPDK symmetric_mp example
socket 1: 16 GB RAM, 18 cores, 2 NICs, runs pktgen (range on)
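
For completeness, the NUMA placement of the ports and lcores can be
double-checked with something like the sketch below (illustration only, to be
called after rte_eal_init(); it is not taken from the symmetric_mp code):

/*
 * Sketch only: print the NUMA socket of every detected port and of every
 * enabled lcore.  For the setup described above everything used by the
 * DPDK side should report socket 0.
 */
#include <stdio.h>
#include <rte_ethdev.h>
#include <rte_lcore.h>

static void
print_numa_layout(void)
{
        uint8_t port;
        unsigned int lcore;

        for (port = 0; port < rte_eth_dev_count(); port++)
                printf("port %u is on socket %d\n",
                       (unsigned int)port, rte_eth_dev_socket_id(port));

        RTE_LCORE_FOREACH(lcore)
                printf("lcore %u is on socket %u\n",
                       lcore, rte_lcore_to_socket_id(lcore));
}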


* Re: [dpdk-users] multi core performance problem
  2016-07-04 10:03 [dpdk-users] multi core performance problem Sepehr Nazary
@ 2016-07-04 17:34 ` Muhammad Zain-ul-Abideen
  2016-07-04 17:40   ` Masoud Moshref Javadi
  0 siblings, 1 reply; 3+ messages in thread
From: Muhammad Zain-ul-Abideen @ 2016-07-04 17:34 UTC (permalink / raw)
  To: Sepehr Nazary; +Cc: users

I am no expert on memory bandwidth and hardware acceleration, but what I
can infer from this issue is one of two things:
1. Either the QPI link is being used and its maximum bandwidth has been
reached, or
2. the RAM has reached its maximum bandwidth, which would explain why the
throughput drops as the number of cores goes up.

But Keith and Thomas can tell you more on this subject.
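
If QPI is the suspect, one concrete thing worth checking is that the mbuf
pool is created on the same socket as the NIC, e.g. something along these
lines (a sketch only; the pool name and sizes are made up, this is not the
symmetric_mp code):

/*
 * Sketch only: create the mbuf pool on the NIC's own NUMA node so that
 * packet buffers never have to cross the QPI link.  The pool name and
 * the sizes below are illustrative.
 */
#include <rte_ethdev.h>
#include <rte_lcore.h>
#include <rte_mbuf.h>

static struct rte_mempool *
make_local_pool(uint8_t port_id)
{
        int socket = rte_eth_dev_socket_id(port_id);

        if (socket < 0)         /* socket unknown: fall back to the caller's */
                socket = rte_socket_id();

        return rte_pktmbuf_pool_create("local_pool", 8192, 256, 0,
                                       RTE_MBUF_DEFAULT_BUF_SIZE, socket);
}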


* Re: [dpdk-users] multi core performance problem
  2016-07-04 17:34 ` Muhammad Zain-ul-Abideen
@ 2016-07-04 17:40   ` Masoud Moshref Javadi
  0 siblings, 0 replies; 3+ messages in thread
From: Masoud Moshref Javadi @ 2016-07-04 17:40 UTC (permalink / raw)
  To: Muhammad Zain-ul-Abideen, Sepehr Nazary; +Cc: users

Check the CPU core frequencies as you increase the number of cores running
at 100%. Because of heating and dynamic frequency scaling in the cores, the
clock frequency can go down as you increase the number of cores that
process packets.

Also make sure that for higher numbers of cores you use a large enough mem
pool.
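
A quick way to watch the frequencies while the test runs is to read the
cpufreq counters from sysfs, e.g. (sketch only, the core range is just an
example):

/*
 * Sketch only: read the current clock of a range of cores from sysfs
 * while the forwarding test is running.  The core numbers are just an
 * example; pick the ones your forwarding processes are pinned to.
 */
#include <stdio.h>

static void
print_core_freqs(unsigned int first_core, unsigned int last_core)
{
        char path[128];
        unsigned int core, khz;
        FILE *f;

        for (core = first_core; core <= last_core; core++) {
                snprintf(path, sizeof(path),
                         "/sys/devices/system/cpu/cpu%u/cpufreq/scaling_cur_freq",
                         core);
                f = fopen(path, "r");
                if (f == NULL)
                        continue;
                if (fscanf(f, "%u", &khz) == 1)
                        printf("core %u: %u MHz\n", core, khz / 1000);
                fclose(f);
        }
}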

On Mon, Jul 4, 2016 at 10:34 AM Muhammad Zain-ul-Abideen <zain2294@gmail.com>
wrote:

> I am no expert on memory bandwidth and hardware acceleration, but what I
> can infer from this issue is one of two things:
> 1. Either the QPI link is being used and its maximum bandwidth has been
> reached, or
> 2. the RAM has reached its maximum bandwidth, which would explain why the
> throughput drops as the number of cores goes up.
>
> But Keith and Thomas can tell you more on this subject.
>

