DPDK usage discussions
From: Bruce Richardson <bruce.richardson@intel.com>
To: Venumadhav Josyula <vjosyula@gmail.com>
Cc: users@dpdk.org, dev@dpdk.org,
	Venumadhav Josyula <vjosyula@parallelwireless.com>
Subject: Re: [dpdk-users] [dpdk-dev] time taken for allocation of mempool.
Date: Wed, 13 Nov 2019 09:19:27 +0000
Message-ID: <20191113091927.GA1501@bricha3-MOBL.ger.corp.intel.com>
In-Reply-To: <CA+i0PGV9DxiwwyL-AXCuhMcndZ=11Yk+t6KOub-R7yYuaB1qzQ@mail.gmail.com>

On Wed, Nov 13, 2019 at 10:37:57AM +0530, Venumadhav Josyula wrote:
> Hi,
> We are using 'rte_mempool_create' for allocation of flow memory. This has
> been in place for a while. We just migrated from dpdk-17.05 to dpdk-18.11.
> 
> Problem statement:
> In the new dpdk (18.11), 'rte_mempool_create' takes approximately 4.4 sec
> for allocation, compared to the older dpdk (17.05). We have some 8-9
> mempools for our entire product. We do upfront allocation for all of them
> (i.e. when the dpdk application is coming up). Our application follows a
> run-to-completion model.
> 
> Questions:
> i)  Is that acceptable / has anybody seen such a thing?
> ii) What has changed between the two dpdk versions (18.11 vs. 17.05) from a
> memory perspective?
> 
> Any pointers are welcome.
> 
Hi,

Between 17.05 and 18.11 there was a change in the default memory model for
DPDK. In 17.05 all DPDK memory was allocated statically upfront and that
was used for the memory pools. With 18.11, no large blocks of memory are
allocated at init time; instead, memory is requested from the kernel as the
app needs it. This makes the initial startup of an app faster, but the
allocation of new objects such as mempools slower, and that could be what
you are seeing.
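
One quick way to confirm that the time really is spent inside the pool
creation (and not elsewhere in your startup path) is to time the call
directly. A minimal sketch follows; the pool name, element count, element
size and cache size are made-up values, not taken from your setup:

#include <stdio.h>
#include <time.h>
#include <rte_mempool.h>

/* Illustrative values only -- substitute your real pool parameters. */
#define POOL_ELEMS  (1 << 20)
#define ELEM_SIZE   256

static struct rte_mempool *
create_pool_timed(const char *name)
{
        struct timespec t0, t1;
        struct rte_mempool *mp;

        clock_gettime(CLOCK_MONOTONIC, &t0);
        mp = rte_mempool_create(name, POOL_ELEMS, ELEM_SIZE,
                        256,            /* per-lcore cache size */
                        0,              /* no private data */
                        NULL, NULL,     /* no pool constructor */
                        NULL, NULL,     /* no per-object init */
                        SOCKET_ID_ANY, 0);
        clock_gettime(CLOCK_MONOTONIC, &t1);

        printf("%s: created in %.3f sec\n", name,
                        (t1.tv_sec - t0.tv_sec) +
                        (t1.tv_nsec - t0.tv_nsec) / 1e9);
        return mp;
}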

Some things to try:
1. Use "--socket-mem" EAL flag to do an upfront allocation of memory for use
by your memory pools and see if it improves things.
2. Try using "--legacy-mem" flag to revert to the old memory model.
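
Both of these are EAL options, so they go in the EAL portion of your
application's command line (before any "--" separator), or they can be
baked into the argv passed to rte_eal_init(). Below is a minimal sketch of
the latter; the core list and memory size are placeholders, not
recommendations:

#include <stdio.h>
#include <rte_eal.h>

int
main(int argc, char **argv)
{
        /* Placeholder values -- size --socket-mem to cover your mempools. */
        char *eal_args[] = {
                argv[0],
                "-l", "0-3",            /* example core list */
                "--socket-mem", "1024", /* MB reserved upfront on socket 0 */
                /* "--legacy-mem",         option 2: old static model */
        };
        int eal_argc = sizeof(eal_args) / sizeof(eal_args[0]);

        (void)argc;

        if (rte_eal_init(eal_argc, eal_args) < 0) {
                fprintf(stderr, "EAL init failed\n");
                return -1;
        }

        /* ... create mempools here; they draw from the reserved memory ... */
        return 0;
}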

Regards,
/Bruce


Thread overview: 18+ messages
2019-11-13  5:07 [dpdk-users] " Venumadhav Josyula
2019-11-13  5:12 ` Venumadhav Josyula
2019-11-13  8:32   ` [dpdk-users] [dpdk-dev] " Olivier Matz
2019-11-13  9:11     ` Venumadhav Josyula
2019-11-13  9:30       ` Olivier Matz
2019-11-13  9:19 ` Bruce Richardson [this message]
2019-11-13 17:26   ` Burakov, Anatoly
2019-11-13 21:01     ` Venumadhav Josyula
2019-11-14  9:44       ` Burakov, Anatoly
2019-11-14  9:50         ` Venumadhav Josyula
2019-11-14  9:57           ` Burakov, Anatoly
2019-11-18 16:43             ` Venumadhav Josyula
2019-12-06 10:47               ` Burakov, Anatoly
2019-12-06 10:49                 ` Venumadhav Josyula
2019-11-14  8:12     ` Venumadhav Josyula
2019-11-14  9:49       ` Burakov, Anatoly
2019-11-14  9:53         ` Venumadhav Josyula
2019-11-18 16:45 ` [dpdk-users] " Venumadhav Josyula
