From: "Hunt, David" <david.hunt@intel.com>
To: Olivier Matz <olivier.matz@6wind.com>, dev@dpdk.org
Subject: Re: [dpdk-dev] [PATCH v2 1/3] mempool: add stack (lifo) mempool handler
Date: Wed, 15 Jun 2016 11:10:11 +0100
Message-ID: <57612983.7030709@intel.com>
In-Reply-To: <5742FDA6.5070108@6wind.com>
Hi Olivier,
On 23/5/2016 1:55 PM, Olivier Matz wrote:
> Hi David,
>
> Please find some comments below.
>
> On 05/19/2016 04:48 PM, David Hunt wrote:
>> This is a mempool handler that is useful for pipelining apps, where
>> the mempool cache doesn't really work - example, where we have one
>> core doing rx (and alloc), and another core doing Tx (and return). In
>> such a case, the mempool ring simply cycles through all the mbufs,
>> resulting in a LLC miss on every mbuf allocated when the number of
>> mbufs is large. A stack recycles buffers more effectively in this
>> case.
>>
>> v2: cleanup based on mailing list comments. Mainly removal of
>> unnecessary casts and comments.
>>
>> Signed-off-by: David Hunt <david.hunt@intel.com>
>> ---
>> lib/librte_mempool/Makefile | 1 +
>> lib/librte_mempool/rte_mempool_stack.c | 145 +++++++++++++++++++++++++++++++++
>> 2 files changed, 146 insertions(+)
>> create mode 100644 lib/librte_mempool/rte_mempool_stack.c
>>
>> diff --git a/lib/librte_mempool/Makefile b/lib/librte_mempool/Makefile
>> index f19366e..5aa9ef8 100644
>> --- a/lib/librte_mempool/Makefile
>> +++ b/lib/librte_mempool/Makefile
>> @@ -44,6 +44,7 @@ LIBABIVER := 2
>> SRCS-$(CONFIG_RTE_LIBRTE_MEMPOOL) += rte_mempool.c
>> SRCS-$(CONFIG_RTE_LIBRTE_MEMPOOL) += rte_mempool_handler.c
>> SRCS-$(CONFIG_RTE_LIBRTE_MEMPOOL) += rte_mempool_default.c
>> +SRCS-$(CONFIG_RTE_LIBRTE_MEMPOOL) += rte_mempool_stack.c
>> # install includes
>> SYMLINK-$(CONFIG_RTE_LIBRTE_MEMPOOL)-include := rte_mempool.h
>>
>> diff --git a/lib/librte_mempool/rte_mempool_stack.c b/lib/librte_mempool/rte_mempool_stack.c
>> new file mode 100644
>> index 0000000..6e25028
>> --- /dev/null
>> +++ b/lib/librte_mempool/rte_mempool_stack.c
>> @@ -0,0 +1,145 @@
>> +/*-
>> + * BSD LICENSE
>> + *
>> + * Copyright(c) 2010-2014 Intel Corporation. All rights reserved.
>> + * All rights reserved.
> Should be 2016?
Yup, will change.
>> ...
>> +
>> +static void *
>> +common_stack_alloc(struct rte_mempool *mp)
>> +{
>> + struct rte_mempool_common_stack *s;
>> + unsigned n = mp->size;
>> + int size = sizeof(*s) + (n+16)*sizeof(void *);
>> +
>> + /* Allocate our local memory structure */
>> + s = rte_zmalloc_socket("common-stack",
> "mempool-stack" ?
Yes. Also, I think the names of the functions should be changed from
common_stack_x to simply stack_x. The "common_" prefix does not add anything.
>> + size,
>> + RTE_CACHE_LINE_SIZE,
>> + mp->socket_id);
>> + if (s == NULL) {
>> + RTE_LOG(ERR, MEMPOOL, "Cannot allocate stack!\n");
>> + return NULL;
>> + }
>> +
>> + rte_spinlock_init(&s->sl);
>> +
>> + s->size = n;
>> + mp->pool = s;
>> + rte_mempool_set_handler(mp, "stack");
> rte_mempool_set_handler() is a user function, it should not be called here
Sure, removed.
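For clarity, here is a rough sketch of how the selection then looks on the
application side, using the naming from this patch series
(rte_mempool_set_handler) and the create-empty/populate flow from the mempool
rework; the pool name and sizes are placeholders, not taken from the patch:

#include <rte_mempool.h>
#include <rte_lcore.h>

#define NB_OBJS   8192
#define OBJ_SIZE  2048
#define CACHE_SZ  256     /* per-lcore cache, see the perf numbers below */

static struct rte_mempool *
create_stack_pool(void)
{
	struct rte_mempool *mp;

	/* Create an empty pool first; no memory is populated yet. */
	mp = rte_mempool_create_empty("stack_pool", NB_OBJS, OBJ_SIZE,
				      CACHE_SZ, 0, rte_socket_id(), 0);
	if (mp == NULL)
		return NULL;

	/* The application selects the "stack" handler before populating. */
	if (rte_mempool_set_handler(mp, "stack") < 0) {
		rte_mempool_free(mp);
		return NULL;
	}

	/* Now allocate and add the objects. */
	if (rte_mempool_populate_default(mp) < 0) {
		rte_mempool_free(mp);
		return NULL;
	}

	return mp;
}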
>> +
>> + return s;
>> +}
>> +
>> +static int common_stack_put(void *p, void * const *obj_table,
>> + unsigned n)
>> +{
>> + struct rte_mempool_common_stack *s = p;
>> + void **cache_objs;
>> + unsigned index;
>> +
>> + rte_spinlock_lock(&s->sl);
>> + cache_objs = &s->objs[s->len];
>> +
>> + /* Is there sufficient space in the stack ? */
>> + if ((s->len + n) > s->size) {
>> + rte_spinlock_unlock(&s->sl);
>> + return -ENOENT;
>> + }
> The usual return value for a failing put() is ENOBUFS (see in rte_ring).
Done.
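For reference, a minimal sketch of how the corrected put/get pair looks with
the ENOBUFS change applied, assuming the struct layout implied by the quoted
code (sl, size, len and a flexible objs[] array); this is an illustration of
the spinlock-protected LIFO, not the exact patch code:

#include <errno.h>
#include <rte_spinlock.h>

struct rte_mempool_common_stack {
	rte_spinlock_t sl;   /* protects the object array */
	unsigned size;       /* capacity of objs[] */
	unsigned len;        /* objects currently held */
	void *objs[];        /* LIFO storage */
};

static int
stack_put(void *p, void * const *obj_table, unsigned n)
{
	struct rte_mempool_common_stack *s = p;
	unsigned index;

	rte_spinlock_lock(&s->sl);
	/* Refuse the whole burst if it does not fit. */
	if (s->len + n > s->size) {
		rte_spinlock_unlock(&s->sl);
		return -ENOBUFS;
	}
	for (index = 0; index < n; index++)
		s->objs[s->len + index] = obj_table[index];
	s->len += n;
	rte_spinlock_unlock(&s->sl);
	return 0;
}

static int
stack_get(void *p, void **obj_table, unsigned n)
{
	struct rte_mempool_common_stack *s = p;
	unsigned index;

	rte_spinlock_lock(&s->sl);
	/* All-or-nothing: fail if fewer than n objects are available. */
	if (n > s->len) {
		rte_spinlock_unlock(&s->sl);
		return -ENOENT;
	}
	/* Pop from the top so recently freed (hot) objects come back first. */
	for (index = 0; index < n; index++)
		obj_table[index] = s->objs[--s->len];
	rte_spinlock_unlock(&s->sl);
	return 0;
}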
> After reading it, I realize that it's nearly exactly the same code as
> in "app/test: test external mempool handler".
> http://patchwork.dpdk.org/dev/patchwork/patch/12896/
>
> We should drop one of them. If this stack handler is really useful for
> a performance use-case, it could go in librte_mempool. At the first
> read, the code looks like a demo example : it uses a simple spinlock for
> concurrent accesses to the common pool. Maybe the mempool cache hides
> this cost, in this case we could also consider removing the use of the
> rte_ring.
Unlike the code in the test app, the stack handler does not use a ring. It is
aimed at the case where applications do a lot of core-to-core transfers of
mbufs. The test app was only meant to demonstrate a minimal malloc-based
mempool handler; this patch adds a new LIFO handler for general use.
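As a rough illustration of the pattern I mean -- one lcore allocating and
enqueuing, another dequeuing and freeing -- here is a sketch with the
inter-core ring and all names made up for the example:

#include <errno.h>
#include <rte_mbuf.h>
#include <rte_ring.h>

struct pipe_ctx {
	struct rte_mempool *mp;   /* the pool under test */
	struct rte_ring *r;       /* core-to-core transfer ring */
};

static volatile int keep_running = 1;

/* Core A: allocates mbufs (the "rx" side) and hands them to core B. */
static int
alloc_lcore(void *arg)
{
	struct pipe_ctx *ctx = arg;

	while (keep_running) {
		struct rte_mbuf *m = rte_pktmbuf_alloc(ctx->mp);
		if (m == NULL)
			continue;
		if (rte_ring_enqueue(ctx->r, m) == -ENOBUFS)
			rte_pktmbuf_free(m);   /* transfer ring full */
	}
	return 0;
}

/* Core B: consumes and frees (the "tx" side). Buffers are returned to the
 * pool here, never to core A's local cache, so the shared pool's ordering
 * (FIFO ring vs LIFO stack) decides how warm the buffers are when core A
 * allocates them again. */
static int
free_lcore(void *arg)
{
	struct pipe_ctx *ctx = arg;
	void *obj;

	while (keep_running) {
		if (rte_ring_dequeue(ctx->r, &obj) == 0)
			rte_pktmbuf_free(obj);
	}
	return 0;
}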
Using mempool_perf_autotest, I see a 30% increase in throughput when the
local cache is enabled. However, there is up to a 50% degradation when the
local cache is NOT used, so it is not suitable for all situations. With a 30%
gain for the cached use-case, though, I think it is worth having in there as
an option for people to try if it suits their use-case.
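(For anyone reproducing this: the "local cache" above is the per-lcore
mempool cache, i.e. the cache_size argument at creation time. In the earlier
sketch this would be, with illustrative sizes:

	/* per-lcore cache enabled */
	mp = rte_mempool_create_empty("stack_pool", 8192, 2048, 256, 0,
				      rte_socket_id(), 0);
	/* no per-lcore cache -- the up-to-50% degradation case */
	mp = rte_mempool_create_empty("stack_pool", 8192, 2048, 0, 0,
				      rte_socket_id(), 0);

mempool_perf_autotest exercises both configurations.)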
> Do you have some some performance numbers? Do you know if it scales
> with the number of cores?
A 30% gain when the local cache is used, and the numbers scale up with the
number of cores on my test machine. It may do even better for other use cases.
> If we can identify the conditions where this mempool handler
> overperforms the default handler, it would be valuable to have them
> in the documentation.
>
I could certainly add this to the docs, and mention the recommendation to
use local cache.
Regards,
Dave.