From: Olivier MATZ <olivier.matz@6wind.com>
To: Vadim Suraev <vadim.suraev@gmail.com>
Cc: dev@dpdk.org
Subject: Re: [dpdk-dev] [PATCH] rte_mbuf: scattered pktmbufs freeing optimization
Date: Mon, 09 Mar 2015 09:38:40 +0100
Message-ID: <54FD5C10.7060701@6wind.com>
In-Reply-To: <CAJ0CJ8nhhESK45c0vY4gxxg1O=1Mv-c=ht=ONEeOKNW9Ao6sHA@mail.gmail.com>
Hi Vadim,
On 03/07/2015 12:24 AM, Vadim Suraev wrote:
> Hi, Olivier,
> I realized that if the mempool's local cache is enabled (cache size
> greater than 0), and if, say, the mempool size is X and the local
> cache currently holds Y entries (it is not empty, Y > 0), then an
> attempt to allocate a bulk that is larger than the local cache size
> (max) and larger than X-Y (the number of entries left in the ring)
> will fail.
> The reason is:
> __mempool_get_bulk checks whether the bulk to be allocated is
> greater than mp->cache_size and, if so, falls through to
> ring_dequeue. In this case the ring does not contain enough entries,
> while the sum of ring entries + cache length may be greater than or
> equal to the bulk's size, so theoretically the bulk could be
> allocated.
> Is this the expected behaviour? Am I missing something?
I think it's the expected behavior, as the code of mempool_get()
tries to minimize the number of tests. In this situation, even if
len(ring) + len(cache) is greater than the number of requested
objects, we are almost out of buffers, so returning an error
(-ENOENT) is not a problem.
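
For reference, here is an abridged sketch of the decision path being
discussed, modeled on the 2.0-era __mempool_get_bulk(). This is not
the verbatim source: the real function also checks the consumer mode
and handles cache refill and statistics, all omitted here.

  #include <rte_mempool.h>
  #include <rte_ring.h>
  #include <rte_branch_prediction.h>

  /* Abridged sketch, not the actual DPDK function. */
  static inline int
  sketch_get_bulk(struct rte_mempool *mp, void **obj_table, unsigned n)
  {
          /* Requests of at least cache_size objects (and pools created
           * with cache_size == 0) bypass the local cache and dequeue
           * from the ring only. This fails with -ENOENT when the ring
           * alone holds fewer than n entries, even if ring + caches
           * together could satisfy the request. */
          if (unlikely(mp->cache_size == 0 || n >= mp->cache_size))
                  return rte_ring_mc_dequeue_bulk(mp->ring, obj_table, n);

          /* Otherwise serve from this lcore's cache, refilling it
           * from the ring when it runs short (refill path omitted). */
          return 0;
  }
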
If the user wants to ensure that he can allocate at least X buffers,
he can create the pool with:
rte_mempool_create(X + cache_size * RTE_MAX_LCORE)
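
As a sketch of that sizing rule (the pool name, element size, and
cache size below are made up for illustration; the signature is the
2.0-era rte_mempool_create()):

  #include <rte_mempool.h>
  #include <rte_lcore.h>  /* RTE_MAX_LCORE, rte_socket_id() */

  #define NEEDED   1024  /* X: objects the app must always be able to get */
  #define CACHE_SZ 32    /* per-lcore cache size */

  static struct rte_mempool *
  create_guaranteed_pool(void)
  {
          /* Over-provision by one full cache per lcore so that even
           * with every local cache full, at least NEEDED objects
           * remain available from the ring. */
          return rte_mempool_create("guaranteed_pool",
                  NEEDED + CACHE_SZ * RTE_MAX_LCORE, /* pool size */
                  2048,           /* element size (illustrative) */
                  CACHE_SZ,       /* per-lcore cache size */
                  0,              /* private data size */
                  NULL, NULL,     /* mp_init, mp_init_arg */
                  NULL, NULL,     /* obj_init, obj_init_arg */
                  rte_socket_id(), 0 /* socket, flags */);
  }
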
> By the way, rte_mempool_count() returns the ring count plus the sum
> of all the local caches; IMHO it could mislead, even twice.
Right, today rte_mempool_count() cannot really be used for anything
other than debug or stats. Adding rte_mempool_common_count() and
rte_mempool_cache_len() may be useful to give the user better control
(and they would be faster, because they would not browse the cache
lengths of all lcores).
But we have to keep in mind that for multi-consumer pools, checking
the common count before retrieving objects is useless, because the
other lcores can retrieve objects at the same time.
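
A rough sketch of what such helpers could look like (hypothetical:
neither function exists in DPDK; the field names follow the 2.0-era
struct rte_mempool, where the per-lcore caches are only present when
RTE_MEMPOOL_CACHE_MAX_SIZE > 0):

  #include <rte_mempool.h>
  #include <rte_ring.h>
  #include <rte_lcore.h>

  /* Hypothetical: count only the objects in the common ring, without
   * browsing the per-lcore caches (cheaper than rte_mempool_count()). */
  static inline unsigned
  rte_mempool_common_count(const struct rte_mempool *mp)
  {
          return rte_ring_count(mp->ring);
  }

  /* Hypothetical: length of the calling lcore's local cache. */
  static inline unsigned
  rte_mempool_cache_len(const struct rte_mempool *mp)
  {
  #if RTE_MEMPOOL_CACHE_MAX_SIZE > 0
          return mp->local_cache[rte_lcore_id()].len;
  #else
          return 0;
  #endif
  }

As noted above, on a multi-consumer pool both values can change under
the caller's feet, so they are hints, not guarantees.
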
Regards,
Olivier
Thread overview:
2015-02-26 23:15 vadim.suraev
2015-02-27 0:49 ` Stephen Hemminger
2015-02-27 11:17 ` Ananyev, Konstantin
2015-02-27 12:18 ` Vadim Suraev
[not found] ` <2601191342CEEE43887BDE71AB977258213F2E3D@irsmsx105.ger.corp.intel.com>
2015-02-27 13:10 ` Ananyev, Konstantin
2015-02-27 13:20 ` Olivier MATZ
2015-02-27 17:09 ` Vadim Suraev
2015-03-04 8:54 ` Olivier MATZ
2015-03-06 23:24 ` Vadim Suraev
2015-03-09 8:38 ` Olivier MATZ [this message]