DPDK patches and discussions
From: "Harris, James R" <james.r.harris@intel.com>
To: Honnappa Nagarahalli <Honnappa.Nagarahalli@arm.com>
Cc: "Morten Brørup" <mb@smartsharesystems.com>,
	"dev@dpdk.org" <dev@dpdk.org>, nd <nd@arm.com>
Subject: Re: Bug in rte_mempool_do_generic_get?
Date: Fri, 24 Feb 2023 16:42:59 +0000	[thread overview]
Message-ID: <BC667505-F345-4284-A1AC-20D0A37DB2C9@intel.com> (raw)
In-Reply-To: <DBAPR08MB5814B266D6AD9E46B9D3925998A89@DBAPR08MB5814.eurprd08.prod.outlook.com>



> On Feb 24, 2023, at 6:56 AM, Honnappa Nagarahalli <Honnappa.Nagarahalli@arm.com> wrote:
> 
> 
> 
>> -----Original Message-----
>> From: Morten Brørup <mb@smartsharesystems.com>
>> Sent: Friday, February 24, 2023 6:13 AM
>> To: Harris, James R <james.r.harris@intel.com>; dev@dpdk.org
>> Subject: RE: Bug in rte_mempool_do_generic_get?
>> 
>>> 
>> 
>>> If you have a mempool with 2048 objects, shouldn't 4 cores each be able to do a 512 buffer bulk get, regardless of the configured cache size?
>> 
>> No, the scenario you described above is the expected behavior. I think it is
>> documented somewhere that objects in the caches are unavailable for other
>> cores, but now I cannot find where this is documented.
>> 
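For concreteness, here is a minimal sketch of the scenario in the quoted question above (the pool name, element size, and EAL core list are illustrative assumptions, not from this thread): a 2048-object pool created with cache_size = 256, where each of four lcores attempts a single 512-object bulk get and holds on to the objects.

  #include <stdio.h>
  #include <rte_eal.h>
  #include <rte_launch.h>
  #include <rte_lcore.h>
  #include <rte_mempool.h>

  #define POOL_SIZE 2048
  #define BURST      512

  /* Each lcore attempts one 512-object bulk get and keeps the objects,
   * so the four gets all contend for the same backing pool. */
  static int
  try_bulk_get(void *arg)
  {
          struct rte_mempool *mp = arg;
          void *objs[BURST];
          int ret = rte_mempool_get_bulk(mp, objs, BURST);

          printf("lcore %u: get_bulk(%d) %s\n",
                 rte_lcore_id(), BURST, ret == 0 ? "succeeded" : "failed");
          return 0;
  }

  int
  main(int argc, char **argv)
  {
          if (rte_eal_init(argc, argv) < 0)
                  return -1;

          /* 2048-object pool with a 256-object per-lcore cache; element
           * size and pool name are arbitrary for this demo. */
          struct rte_mempool *mp = rte_mempool_create("repro_pool",
                          POOL_SIZE, 64, 256, 0, NULL, NULL, NULL, NULL,
                          rte_socket_id(), 0);
          if (mp == NULL)
                  return -1;

          /* Run with e.g. "-l 0-3" so four lcores take part.  Because a
           * get may also prefill that lcore's cache from the common pool,
           * a later lcore's get can fail even though 4 * 512 == POOL_SIZE,
           * which is the behavior discussed in this thread. */
          rte_eal_mp_remote_launch(try_bulk_get, mp, CALL_MAIN);
          rte_eal_mp_wait_lcore();
          return 0;
  }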

Thanks Morten.

Yeah, I think it is documented somewhere, but I also couldn’t find it.  I was aware that one core cannot allocate from another core’s cache.  My surprise was that, with a pristine new mempool, 4 cores could not each do one initial 512-buffer bulk get.  But I also see that even before the a2833ecc5 patch, the cache would get populated on gets smaller than the cache size, in addition to the buffers requested by the user.  So if the cache size is 256 and the bulk get is for 128 buffers, it pulls 384 buffers from the backing pool: 128 for the caller and another 256 to prefill the cache.  Your patch makes this cache filling consistent between the less-than-cache-size and greater-than-or-equal-to-cache-size cases.
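As a rough illustration of that accounting (the function name here is hypothetical, and it assumes a pool created with cache_size = 256 as above), a single 128-object get should leave the calling lcore’s cache prefilled from the backing pool:

  #include <stdio.h>
  #include <rte_lcore.h>
  #include <rte_mempool.h>

  /* Illustrates the accounting described above for a pool created with
   * cache_size = 256: a 128-object get hands 128 objects to the caller
   * and also prefills the per-lcore cache from the backing pool. */
  static void
  show_cache_prefill(struct rte_mempool *mp)
  {
          void *objs[128];

          if (rte_mempool_get_bulk(mp, objs, 128) != 0)
                  return;

          struct rte_mempool_cache *cache =
                  rte_mempool_default_cache(mp, rte_lcore_id());

          /* Per the behavior described above, the backing pool has now
           * been drained by 384 objects: 128 held by the caller plus the
           * objects sitting in this lcore's cache. */
          if (cache != NULL)
                  printf("objects in this lcore's cache: %u\n", cache->len);

          rte_mempool_put_bulk(mp, objs, 128);
  }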

>> Furthermore, since the effective per-core cache size is 1.5 * configured cache
>> size, a configured cache size of 256 may leave up to 384 objects in each per-
>> core cache.
>> 
>> With 4 cores, you can expect up to 3 * 384 = 1152 objects sitting in the
>> caches of other cores. If you want to be able to pull 512 objects with each
>> core, the pool size should be 4 * 512 + 1152 objects.
> Maybe we should document this in the mempool library?
> 

Maybe.  But the case I described here is a bit wonky: SPDK should never have been specifying a non-zero cache in this case.  We only noticed the change in behavior because we were creating the mempool with a cache when we shouldn’t have.
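For reference, the sizing rule you quote above can be written out as a small helper (the name is hypothetical; the 1.5x factor is the effective per-core cache limit Morten mentions):

  /* Hypothetical helper spelling out the sizing rule quoted above: a pool
   * must cover one full burst per core plus whatever may be stranded in
   * the other cores' caches (up to 1.5 * cache_size objects each). */
  static unsigned int
  min_pool_size(unsigned int n_cores, unsigned int burst,
                unsigned int cache_size)
  {
          unsigned int per_core_cache_max = cache_size + cache_size / 2;

          return n_cores * burst + (n_cores - 1) * per_core_cache_max;
  }

  /* min_pool_size(4, 512, 256) == 4 * 512 + 3 * 384 == 3200 objects */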

-Jim




Thread overview: 8+ messages
2023-02-24  3:02 Harris, James R
2023-02-24 12:13 ` Morten Brørup
2023-02-24 13:56   ` Honnappa Nagarahalli
2023-02-24 16:42     ` Harris, James R [this message]
2023-02-26 10:45       ` Morten Brørup
2023-02-27  9:09         ` Olivier Matz
2023-02-27  9:48           ` Morten Brørup
2023-02-27 10:39             ` Olivier Matz
