DPDK patches and discussions
From: Honnappa Nagarahalli <Honnappa.Nagarahalli@arm.com>
To: "Morten Brørup" <mb@smartsharesystems.com>,
	"Harris, James R" <james.r.harris@intel.com>,
	"dev@dpdk.org" <dev@dpdk.org>
Cc: nd <nd@arm.com>, nd <nd@arm.com>
Subject: RE: Bug in rte_mempool_do_generic_get?
Date: Fri, 24 Feb 2023 13:56:17 +0000	[thread overview]
Message-ID: <DBAPR08MB5814B266D6AD9E46B9D3925998A89@DBAPR08MB5814.eurprd08.prod.outlook.com>
In-Reply-To: <98CBD80474FA8B44BF855DF32C47DC35D87770@smartserver.smartshare.dk>



> -----Original Message-----
> From: Morten Brørup <mb@smartsharesystems.com>
> Sent: Friday, February 24, 2023 6:13 AM
> To: Harris, James R <james.r.harris@intel.com>; dev@dpdk.org
> Subject: RE: Bug in rte_mempool_do_generic_get?
> 
> > From: Harris, James R [mailto:james.r.harris@intel.com]
> > Sent: Friday, 24 February 2023 04.03
> >
> > Hi,
> >
> > I've tracked down a regression in SPDK to DPDK commit a2833ecc5
> ("mempool: fix get objects from mempool with cache").
> 
> The problem probably goes all the way back to the introduction of the cache
> flush threshold, which effectively increased the cache size to 1.5 times the
> configured cache size, in this commit:
> http://git.dpdk.org/dpdk/commit/lib/librte_mempool/rte_mempool.h?id=ea5dd2744b90b330f07fd10f327ab99ef55c7266
> 
> It might even go further back.
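
Just to illustrate the mechanics Morten describes: the library derives a
per-core flush threshold from the configured cache size, roughly along these
lines (paraphrased from memory of rte_mempool.c, not a verbatim copy):

    /* The per-core cache is only flushed back to the backing ring once it
     * grows past ~1.5x the configured size, so up to cache_size * 1.5
     * objects can legitimately sit in each core's cache. */
    #define CACHE_FLUSHTHRESH_MULTIPLIER 1.5
    cache->flushthresh = (uint32_t)(cache_size * CACHE_FLUSHTHRESH_MULTIPLIER);
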
> 
> >
> > Here's an example that demonstrates the problem:
> >
> > Allocate mempool with 2048 buffers and cache size 256.
> > Core 0 allocates 512 buffers.  Mempool pulls 512 + 256 buffers from
> > backing ring, returns 512 of them to caller, puts the other 256 in core 0
> > cache.  Backing ring now has 1280 buffers.
> > Core 1 allocates 512 buffers.  Mempool pulls 512 + 256 buffers from
> > backing ring, returns 512 of them to caller, puts the other 256 in core 1
> > cache.  Backing ring now has 512 buffers.
> > Core 2 allocates 512 buffers.  Mempool pulls remaining 512 buffers from
> > backing ring and returns all of them to caller.  Backing ring now has 0 buffers.
> > Core 3 tries to allocate 512 buffers and it fails.
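
A minimal sketch of that scenario (OBJ_SIZE and the pool name are placeholders
of my own, error handling omitted):

    /* Pool of 2048 objects with a per-core cache of 256. */
    struct rte_mempool *mp = rte_mempool_create("test_pool", 2048, OBJ_SIZE,
            256, 0, NULL, NULL, NULL, NULL, SOCKET_ID_ANY, 0);

    /* Run on each of cores 0..3: cores 0 and 1 each pull 768 objects from
     * the ring (512 for the caller plus 256 parked in the local cache),
     * core 2 drains the remaining 512, and core 3's call fails with
     * -ENOENT even though 512 objects still sit in other cores' caches. */
    void *objs[512];
    int rc = rte_mempool_get_bulk(mp, objs, 512);
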
> >
> > In the SPDK case, we don't really need or use the mempool cache, so
> > changing the cache size to 0 fixes the problem and is what we're going
> > to move forward with.
> 
> If you are not making get/put requests smaller than the cache size, then yes,
> having no cache is the best solution.
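
For that pattern, passing 0 as the cache size when creating the pool
(illustrative only, same placeholders as above) makes every get/put go
straight to the backing ring:

    /* No per-core cache, so no objects can be stranded in caches. */
    mp = rte_mempool_create("test_pool", 2048, OBJ_SIZE, 0 /* cache_size */,
            0, NULL, NULL, NULL, NULL, SOCKET_ID_ANY, 0);
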
> 
> >
> > But the behavior did cause a regression so I thought I'd mention it here.
> 
> Thank you.
> 
> > If you have a mempool with 2048 objects, shouldn't 4 cores each be able to
> > do a 512-buffer bulk get, regardless of the configured cache size?
> 
> No, the scenario you described above is the expected behavior. I think it is
> documented somewhere that objects in the caches are unavailable for other
> cores, but now I cannot find where this is documented.
> 
> 
> Furthermore, since the effective per-core cache size is 1.5 * configured cache
> size, a configured cache size of 256 may leave up to 384 objects in each per-
> core cache.
> 
> With 4 cores, you can expect up to 3 * 384 = 1152 objects sitting in the
> caches of other cores. If you want to be able to pull 512 objects with each
> core, the pool size should be 4 * 512 + 1152 objects.
Maybe we should document this in the mempool library?
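
Something like the following rule of thumb, perhaps (my own wording, not text
that exists in the documentation today): to guarantee that each of N cores can
get n objects in a single bulk request, size the pool as at least

    pool_size >= N * n + (N - 1) * cache_size * 1.5

which for the example in this thread gives 4 * 512 + 3 * 384 = 3200 objects.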


Thread overview: 8+ messages
2023-02-24  3:02 Harris, James R
2023-02-24 12:13 ` Morten Brørup
2023-02-24 13:56   ` Honnappa Nagarahalli [this message]
2023-02-24 16:42     ` Harris, James R
2023-02-26 10:45       ` Morten Brørup
2023-02-27  9:09         ` Olivier Matz
2023-02-27  9:48           ` Morten Brørup
2023-02-27 10:39             ` Olivier Matz
