DPDK usage discussions
From: Venumadhav Josyula <vjosyula@gmail.com>
To: "Burakov, Anatoly" <anatoly.burakov@intel.com>
Cc: Bruce Richardson <bruce.richardson@intel.com>,
	users@dpdk.org, dev@dpdk.org,
	Venumadhav Josyula <vjosyula@parallelwireless.com>
Subject: Re: [dpdk-users] [dpdk-dev] time taken for allocation of mempool.
Date: Thu, 14 Nov 2019 15:23:03 +0530
Message-ID: <CA+i0PGWBd9ZkD=B0fN8=aw3xfWaj5_S+WEfU8+t0t7-PY5fQdg@mail.gmail.com>
In-Reply-To: <133b1b07-77bd-330a-e42c-2a8ad40628b6@intel.com>

Hi Anatoly,

> I would also suggest using --socket-limit if you want to cap the
> maximum amount of memory DPDK can allocate.
We are already using that.
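
For reference, a rough sketch of the EAL arguments combining the options
discussed in this thread (the core list, memory sizes and two-socket
layout are illustrative; --iova-mode requires a DPDK release that
supports overriding the IOVA mode from the command line):

    ./our_app -l 0-3 --iova-mode=va \
        --socket-mem 1024,1024 --socket-limit 2048,2048 -- <app args>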

Thanks and regards,
Venu

On Thu, 14 Nov 2019 at 15:19, Burakov, Anatoly <anatoly.burakov@intel.com>
wrote:

> On 14-Nov-19 8:12 AM, Venumadhav Josyula wrote:
> > Hi Olivier, Bruce,
> >
> >   * We were using the --socket-mem EAL flag.
> >   * We wanted to avoid going back to legacy mode.
> >   * We also wanted to avoid 1G hugepages.
> >
> > Thanks for your inputs.
> >
> > Hi Anatoly,
> >
> > We were using VFIO with IOMMU, but by default it is iova-mode=pa; after
> > changing to iova-mode=va via EAL, the mempool allocation time came down
> > drastically, from ~4.4 sec to 0.165254 sec.
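> >
> > As a sanity check, a minimal sketch of how the mode can be confirmed
> > at startup (rte_eal_iova_mode() is part of the public EAL API):
> >
> >     #include <stdio.h>
> >     #include <rte_eal.h>
> >
> >     /* call after rte_eal_init() has succeeded */
> >     static void check_iova_mode(void)
> >     {
> >         if (rte_eal_iova_mode() == RTE_IOVA_VA)
> >             printf("running in IOVA-VA mode\n");
> >         else
> >             printf("warning: still in IOVA-PA mode; mempool "
> >                    "allocation may be slow\n");
> >     }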
> >
> > Thanks and regards
> > Venu
>
> That's great to hear.
>
> As a final note, --socket-mem is no longer necessary, because 18.11 will
> allocate memory as needed. It is still advisable to use it, however, if
> you could end up in a situation where runtime allocation might fail (for
> example, if other applications running on your system compete with DPDK
> for hugepage memory).
>
> I would also suggest using --socket-limit if you want to cap the
> maximum amount of memory DPDK can allocate. This will make DPDK behave
> similarly to older releases in that it will not attempt to allocate
> more memory than you allow it.
>
> >
> >
> > On Wed, 13 Nov 2019 at 22:56, Burakov, Anatoly
> > <anatoly.burakov@intel.com> wrote:
> >
> >     On 13-Nov-19 9:19 AM, Bruce Richardson wrote:
> >      > On Wed, Nov 13, 2019 at 10:37:57AM +0530, Venumadhav Josyula wrote:
> >      >> Hi,
> >      >> We are using 'rte_mempool_create' for allocation of flow memory.
> >      >> This has been in place for a while. We just migrated to
> >      >> dpdk-18.11 from dpdk-17.05. Now here is the problem statement.
> >      >>
> >      >> Problem statement:
> >      >> In the new DPDK (18.11), 'rte_mempool_create' takes approximately
> >      >> 4.4 sec for allocation, compared to the older DPDK (17.05). We
> >      >> have some 8-9 mempools for our entire product. We do upfront
> >      >> allocation for all of them (i.e. when the DPDK application is
> >      >> coming up). Our application is a run-to-completion model.
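> >      >>
> >      >> For illustration, the kind of call we make for one such pool,
> >      >> with rough timing around it (the pool name and sizes here are
> >      >> made up for the example):
> >      >>
> >      >>     #include <stdio.h>
> >      >>     #include <rte_cycles.h>
> >      >>     #include <rte_lcore.h>
> >      >>     #include <rte_mempool.h>
> >      >>
> >      >>     static struct rte_mempool *
> >      >>     create_flow_pool(void)
> >      >>     {
> >      >>         uint64_t t0 = rte_get_timer_cycles();
> >      >>         struct rte_mempool *mp = rte_mempool_create(
> >      >>             "flow_pool",     /* pool name (illustrative) */
> >      >>             1 << 18,         /* number of elements */
> >      >>             256,             /* element size, in bytes */
> >      >>             256,             /* per-lcore cache size */
> >      >>             0,               /* private data size */
> >      >>             NULL, NULL,      /* pool ctor + arg */
> >      >>             NULL, NULL,      /* per-object ctor + arg */
> >      >>             rte_socket_id(), /* allocate on local NUMA node */
> >      >>             0);              /* flags */
> >      >>         printf("flow_pool creation took %.6f sec\n",
> >      >>                (double)(rte_get_timer_cycles() - t0) /
> >      >>                    rte_get_timer_hz());
> >      >>         return mp;
> >      >>     }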
> >      >>
> >      >> Questions:
> >      >> i)  Is this acceptable / has anybody seen such a thing?
> >      >> ii) What has changed between the two DPDK versions (18.11 vs.
> >      >>     17.05) from a memory perspective?
> >      >>
> >      >> Any pointers are welcome.
> >      >>
> >      > Hi,
> >      >
> >      > From 17.05 to 18.11 there was a change in the default memory
> >      > model for DPDK. In 17.05, all DPDK memory was allocated
> >      > statically upfront and then used for the memory pools. With
> >      > 18.11, no large blocks of memory are allocated at init time;
> >      > instead, memory is requested from the kernel as the app needs
> >      > it. This makes the initial startup of an app faster, but the
> >      > allocation of new objects like mempools slower, and it could be
> >      > this that you are seeing.
> >      >
> >      > Some things to try:
> >      > 1. Use the "--socket-mem" EAL flag to do an upfront allocation
> >      >    of memory for use by your memory pools and see if it improves
> >      >    things.
> >      > 2. Try the "--legacy-mem" flag to revert to the old memory model.
> >      >
> >      > Regards,
> >      > /Bruce
> >      >
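> >     To illustrate the two suggestions above: the difference is only in
> >     the EAL arguments (memory sizes illustrative; a two-socket system
> >     assumed):
> >
> >         # 1. upfront allocation under the new memory model
> >         ./app -l 0-3 --socket-mem 2048,2048 ...
> >
> >         # 2. revert to the old, static memory model
> >         ./app -l 0-3 --legacy-mem ...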
> >
> >     I would also add to this the fact that the mempool will, by default,
> >     attempt to allocate IOVA-contiguous memory, with a fallback to
> >     non-IOVA-contiguous memory whenever getting IOVA-contiguous memory
> >     isn't possible.
> >
> >     If you are running in IOVA as PA mode (as would be the case if you
> >     are using the igb_uio kernel driver), then, since it is now
> >     impossible to preallocate large PA-contiguous chunks in advance,
> >     what will likely happen is that the mempool will try to allocate
> >     IOVA-contiguous memory, fail, and retry with non-IOVA-contiguous
> >     memory (essentially allocating memory twice). For large mempools
> >     (or a large number of mempools) that can take a bit of time.
> >
> >     The obvious workaround is using VFIO and IOVA as VA mode. This
> >     allows the allocator to get IOVA-contiguous memory at the outset,
> >     and allocation will complete faster.
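> >
> >     For example, to put a NIC behind VFIO (the PCI address is
> >     illustrative; dpdk-devbind.py ships in the DPDK usertools
> >     directory):
> >
> >         modprobe vfio-pci
> >         usertools/dpdk-devbind.py --bind=vfio-pci 0000:3b:00.0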
> >
> >     The other two alternatives, already suggested in this thread by
> >     Bruce and Olivier, are:
> >
> >     1) use bigger page sizes (such as 1G)
> >     2) use legacy mode (and lose out on all of the benefits provided
> >        by the new memory model)
> >
> >     The recommended solution is to use VFIO/IOMMU, and IOVA as VA mode.
> >
> >     --
> >     Thanks,
> >     Anatoly
> >
>
>
> --
> Thanks,
> Anatoly
>
