DPDK usage discussions
* [dpdk-users] MEMPOOL contentions
@ 2016-09-02 18:27 Junguk Cho
  2016-09-02 18:35 ` Wiles, Keith
  0 siblings, 1 reply; 3+ messages in thread
From: Junguk Cho @ 2016-09-02 18:27 UTC (permalink / raw)
  To: users

Hi,

In my setup, I created one mempool that is shared by 5 rings.
I would like to collect statistics on contention between the 5 rings, such as
the time it takes to get one mbuf.
In the manual, there are some options like CONFIG_RTE_LIBRTE_MEMPOOL_DEBUG.
However, I don't know how to use them.

Is there any reference?

Intuitively, as the number of rings increases, I expect contention to
happen more frequently.
To avoid this, is increasing CONFIG_RTE_MEMPOOL_CACHE_MAX_SIZE one
solution?
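
For reference, the per-lcore cache is requested when the pool is created; a
minimal sketch (sizes and names here are placeholders, not my real values)
would be something like:

#include <rte_lcore.h>
#include <rte_mbuf.h>
#include <rte_mempool.h>

static struct rte_mempool *
create_shared_pool(void)
{
        /* Illustrative sizes only; tune for the workload. */
        const unsigned num_mbufs  = 8191;
        const unsigned cache_size = 256; /* per-lcore cache; capped by
                                          * CONFIG_RTE_MEMPOOL_CACHE_MAX_SIZE */

        return rte_pktmbuf_pool_create("shared_pool", num_mbufs, cache_size,
                        0, RTE_MBUF_DEFAULT_BUF_SIZE, rte_socket_id());
}

My understanding is that a larger per-lcore cache makes each core touch the
shared pool less often, which is what would reduce the contention.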


Thanks,
Junguk


* Re: [dpdk-users] MEMPOOL contentions
  2016-09-02 18:27 [dpdk-users] MEMPOOL contentions Junguk Cho
@ 2016-09-02 18:35 ` Wiles, Keith
  2016-09-02 18:37   ` Junguk Cho
  0 siblings, 1 reply; 3+ messages in thread
From: Wiles, Keith @ 2016-09-02 18:35 UTC (permalink / raw)
  To: Junguk Cho; +Cc: users


Regards,
Keith

> On Sep 2, 2016, at 1:27 PM, Junguk Cho <jmanbal@gmail.com> wrote:
> 
> Hi,
> 
> In my setup, I created one mempool that is shared by 5 rings.
> I would like to collect statistics on contention between the 5 rings, such as
> the time it takes to get one mbuf.
> In the manual, there are some options like CONFIG_RTE_LIBRTE_MEMPOOL_DEBUG.

Add the option to the config file used in your build of DPDK and set the value to 'y' instead of 'n'.

Example: edit or copy, say, defconfig_x86_64-native-linuxapp-gcc and add something like this:

CONFIG_RTE_LIBRTE_MEMPOOL_DEBUG=y

Then rebuild DPDK using something like 'make install T=x86_64-native-linuxapp-gcc -j'
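
Once that is compiled in, the library should keep per-lcore put/get counters,
and rte_mempool_dump() will print them. A minimal sketch (assuming your pool
pointer is called 'mp' and a helper name I just made up) would be:

#include <stdio.h>
#include <rte_mempool.h>

/* Dump the pool layout plus, with MEMPOOL_DEBUG compiled in, the per-lcore
 * put/get statistics. Any FILE * works, e.g. stdout or a log file. */
static void
show_pool_stats(struct rte_mempool *mp)
{
        rte_mempool_dump(stdout, mp);
}

Call it from your app periodically or at shutdown, and the statistics show up
wherever you point the FILE *.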

> However, I don't know how to use them.
> 
> Is there any reference?
> 
> Intuitively, as the number of rings increases, I expect contention to
> happen more frequently.
> To avoid this, is increasing CONFIG_RTE_MEMPOOL_CACHE_MAX_SIZE one
> solution?
> 
> 
> Thanks,
> Junguk



* Re: [dpdk-users] MEMPOOL contentions
  2016-09-02 18:35 ` Wiles, Keith
@ 2016-09-02 18:37   ` Junguk Cho
  0 siblings, 0 replies; 3+ messages in thread
From: Junguk Cho @ 2016-09-02 18:37 UTC (permalink / raw)
  To: Wiles, Keith; +Cc: users

Hi, Keith.

Thank you for the reply.

I had already reached that setup.
My question is: after this setup, when I run my app, how do I get the statistics?
In other words, where do they show up, e.g. in a file or on the screen?


Thanks,
Junguk

2016-09-02 12:35 GMT-06:00 Wiles, Keith <keith.wiles@intel.com>:

>
> Regards,
> Keith
>
> > On Sep 2, 2016, at 1:27 PM, Junguk Cho <jmanbal@gmail.com> wrote:
> >
> > Hi,
> >
> > In my setup, I created one mempool that is shared by 5 rings.
> > I would like to collect statistics on contention between the 5 rings, such as
> > the time it takes to get one mbuf.
> > In the manual, there are some options like CONFIG_RTE_LIBRTE_MEMPOOL_DEBUG.
>
> Add the option to the config file used in your build of DPDK and set the
> value to 'y' instead of 'n'.
>
> Example: edit or copy, say, defconfig_x86_64-native-linuxapp-gcc and add
> something like this:
>
> CONFIG_RTE_LIBRTE_MEMPOOL_DEBUG=y
>
> Then rebuild DPDK using something like 'make install
> T=x86_64-native-linuxapp-gcc -j'
>
> > However, I don't know how to use them.
> >
> > Is there any reference?
> >
> > Intuitively, as the number of rings increases, I expect contention to
> > happen more frequently.
> > To avoid this, is increasing CONFIG_RTE_MEMPOOL_CACHE_MAX_SIZE one
> > solution?
> >
> >
> > Thanks,
> > Junguk
>
>

