From: Kamaraj P <pkamaraj@gmail.com>
To: "Burakov, Anatoly" <anatoly.burakov@intel.com>
Cc: Bruce Richardson <bruce.richardson@intel.com>,
dev@dpdk.org, mmahmoud@ciso.com
Subject: Re: [dpdk-dev] DPDK hugepage memory fragmentation
Date: Mon, 27 Jul 2020 21:00:40 +0530
Message-ID: <CAG8PAap4avO913UHOE6TVFk7U_+njtbEoFe0K7wkVjHFt2ruQg@mail.gmail.com>
In-Reply-To: <75824075-9690-814a-1849-1107504ce344@intel.com>
Hi Anatoly,
Since we do not have driver support for SR-IOV with VFIO, we are using
igb_uio.
Our application is crashing due to a buffer allocation failure. I believe
it does not get a contiguous memory region and therefore fails to
allocate the buffer.
Is there any API I can use to dump the memory state before our
application dies? Something along the lines of the sketch below?
Please let me know.
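(A minimal sketch, assuming only the dump helpers that the 18.11 headers
already provide; the wrapper name dump_memory_state() is illustrative and
error handling is omitted.)

    #include <stdio.h>

    #include <rte_malloc.h>
    #include <rte_memory.h>
    #include <rte_memzone.h>

    /* Call after rte_eal_init(), right before the allocation that is
     * expected to fail. */
    static void
    dump_memory_state(FILE *f)
    {
        /* Layout of the hugepage memory segments (VA/IOVA/len). */
        rte_dump_physmem_layout(f);
        /* Per-heap malloc statistics: free/allocated sizes and counts. */
        rte_malloc_dump_stats(f, NULL);
        /* All memzone reservations currently held. */
        rte_memzone_dump(f);
    }

For example, calling dump_memory_state(stderr) just before the failing
buffer allocation should show whether a large enough free element is
still available.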
Thanks,
Kamaraj
On Mon, Jul 13, 2020 at 2:57 PM Burakov, Anatoly <anatoly.burakov@intel.com>
wrote:
> On 11-Jul-20 8:51 AM, Kamaraj P wrote:
> > Hello Anatoly/Bruce,
> >
> > We are using DPDK 18.11 with igb_uio.
> > The issue we observe is that, after multiple iterations of starting
> > and stopping our container application (which runs DPDK), we are not
> > able to allocate memory for a port during init.
> > We suspect it fails because it cannot get a contiguous allocation.
> >
> > Is there an API where I can check whether the memory is fragmented
> > before we invoke an allocation?
> > Or is there any mechanism to defragment the memory once we exit from
> > the application?
> > Please advise.
> >
>
> This is unlikely to be due to fragmentation, because the only way for
> 18.11 to be affected by memory fragmentation is 1) if you're using
> legacy mem mode, or 2) if you're using IOVA as PA mode and you need
> huge amounts of contiguous memory (you are using igb_uio, so you would
> be in IOVA as PA mode).
>
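For reference, which IOVA mode the EAL ended up in can be confirmed at
run time. A minimal sketch using rte_eal_iova_mode(); the helper name
log_iova_mode() is illustrative:

    #include <stdio.h>

    #include <rte_eal.h>

    /* Call after rte_eal_init(); with igb_uio the expected result
     * is PA. */
    static void
    log_iova_mode(void)
    {
        switch (rte_eal_iova_mode()) {
        case RTE_IOVA_PA:
            printf("IOVA mode: PA (physical addresses)\n");
            break;
        case RTE_IOVA_VA:
            printf("IOVA mode: VA (virtual addresses)\n");
            break;
        default:
            printf("IOVA mode: not decided\n");
            break;
        }
    }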
> NICs very rarely, if ever, allocate more than a 2M page's worth of
> contiguous memory, because their descriptor rings aren't that big, and
> they'll usually get all the IOVA-contiguous space they need even in the
> face of heavily fragmented memory. Similarly, while 18.11 mempools will
> request IOVA-contiguous memory first, they fall back to using
> non-contiguous memory and thus also work just fine in the face of high
> memory fragmentation.
>
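As a concrete reference for the mempool path described above, a minimal
sketch of a standard mbuf pool creation; the pool name, element count and
cache size are illustrative, and in 18.11 the populate step itself tries
the contiguous allocation first and falls back on its own:

    #include <stdio.h>

    #include <rte_errno.h>
    #include <rte_lcore.h>
    #include <rte_mbuf.h>
    #include <rte_mempool.h>

    #define NB_MBUFS   8191   /* illustrative sizes only */
    #define CACHE_SIZE 256

    static struct rte_mempool *
    create_pktmbuf_pool(void)
    {
        /* Internally this requests IOVA-contiguous memory first and
         * falls back to populating from non-contiguous pages. */
        struct rte_mempool *mp = rte_pktmbuf_pool_create("mbuf_pool",
                NB_MBUFS, CACHE_SIZE, 0, RTE_MBUF_DEFAULT_BUF_SIZE,
                rte_socket_id());

        if (mp == NULL)
            printf("mbuf pool creation failed: %s\n",
                   rte_strerror(rte_errno));
        return mp;
    }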
> The nature of the 18.11 memory subsystem is such that the IOVA layout
> is decoupled from the VA layout, so fragmentation does not affect DPDK
> as much as it did in previous versions. The first thing I'd suggest is
> using VFIO instead of igb_uio, as it's safer to use in a container
> environment and less susceptible to memory fragmentation issues,
> because it can remap memory to appear IOVA-contiguous.
>
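On the earlier question of checking for fragmentation before an
allocation is attempted: the malloc heap statistics expose the largest
free element per socket, so a pre-flight check is possible. A minimal
sketch; the helper name heap_can_fit() and the printout are
illustrative, and the size reported is virtually contiguous free space,
not necessarily IOVA-contiguous:

    #include <stdbool.h>
    #include <stdio.h>

    #include <rte_malloc.h>

    /* Return true if the heap on 'socket' still has a free element of
     * at least 'needed' bytes. */
    static bool
    heap_can_fit(int socket, size_t needed)
    {
        struct rte_malloc_socket_stats stats;

        if (rte_malloc_get_socket_stats(socket, &stats) < 0)
            return false;

        printf("socket %d: total %zu, free %zu, largest free %zu\n",
               socket, stats.heap_totalsz_bytes,
               stats.heap_freesz_bytes, stats.greatest_free_size);

        return stats.greatest_free_size >= needed;
    }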
> Could you please provide detailed logs of the init process? You can add
> '--log-level=eal,8' to the EAL command-line to enable debug logging in
> the EAL.
>
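Where the container's command line is not easy to change, the same flag
can be passed when the application builds its own EAL arguments. A
minimal sketch; only "--log-level=eal,8" comes from the suggestion
above, everything else is a placeholder:

    #include <stdio.h>

    #include <rte_eal.h>

    int
    main(int argc, char **argv)
    {
        char *eal_argv[] = {
            argv[0],
            "--log-level=eal,8",   /* EAL debug logging */
            NULL
        };
        int eal_argc = 2;

        (void)argc;
        if (rte_eal_init(eal_argc, eal_argv) < 0) {
            fprintf(stderr, "rte_eal_init() failed\n");
            return 1;
        }
        /* ... port/mempool init that currently fails goes here ... */
        return 0;
    }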
> > Thanks,
> > Kamaraj
> >
> >
> >
> > On Fri, Jul 10, 2020 at 9:14 PM Burakov, Anatoly
> > <anatoly.burakov@intel.com> wrote:
> >
> > On 10-Jul-20 11:28 AM, Bruce Richardson wrote:
> > > On Fri, Jul 10, 2020 at 02:52:16PM +0530, Kamaraj P wrote:
> > >> Hello All,
> > >>
> > >> We are trying to run a DPDK-based application in container mode.
> > >> When we do multiple start/stop cycles of our container
> > >> application, the DPDK initialization seems to be failing.
> > >> This is because the hugepage memory is fragmented and DPDK is not
> > >> able to find a contiguous allocation of memory to initialize the
> > >> buffers during init.
> > >>
> > >> As part of the cleanup of the container, we do call
> > >> rte_eal_cleanup() to clean up the memory used by our application.
> > >> However, after several iterations we still see the memory
> > >> allocation failure due to the fragmentation issue.
> > >>
> > >> We also tried passing "--huge-unlink" as an argument when we
> > >> called rte_eal_init(), and it did not help.
> > >>
> > >> Could you please suggest whether there is an option or any
> > >> existing patch available to clean up the memory and avoid
> > >> fragmentation issues in the future?
> > >>
> > >> Please advise.
> > >>
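For the cleanup sequence described a few paragraphs up
(rte_eal_cleanup() plus "--huge-unlink"), the shutdown side can be
reduced to a small hook. A minimal sketch; the handler name
eal_cleanup_at_exit() is illustrative and nothing else is implied about
the real application:

    #include <stdio.h>
    #include <stdlib.h>

    #include <rte_eal.h>

    /* Registered once rte_eal_init() has succeeded, so that every
     * container stop releases the EAL's resources before the next
     * start. */
    static void
    eal_cleanup_at_exit(void)
    {
        if (rte_eal_cleanup() != 0)
            fprintf(stderr, "rte_eal_cleanup() failed\n");
    }

    /* After a successful rte_eal_init():
     *     atexit(eal_cleanup_at_exit);
     */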
> > > What version of DPDK are you using, and what kernel driver for NIC
> > > interfacing are you using?
> > > DPDK versions since 18.05 should be more forgiving of fragmented
> > > memory, especially if using the vfio-pci kernel driver.
> > >
> >
> > This sounds odd, to be honest.
> >
> > Unless you're allocating huge chunks of IOVA-contiguous memory,
> > fragmentation shouldn't be an issue. How did you determine that this
> > was in fact due to fragmentation?
> >
> > > Regards,
> > > /Bruce
> > >
> >
> >
> > --
> > Thanks,
> > Anatoly
> >
>
>
> --
> Thanks,
> Anatoly
>