To: David Hunt, dev@dpdk.org
References: <1462472982-49782-1-git-send-email-david.hunt@intel.com>
 <1463669335-30378-1-git-send-email-david.hunt@intel.com>
 <1463669335-30378-2-git-send-email-david.hunt@intel.com>
From: Olivier Matz
Message-ID: <5742FDA6.5070108@6wind.com>
Date: Mon, 23 May 2016 14:55:02 +0200
In-Reply-To: <1463669335-30378-2-git-send-email-david.hunt@intel.com>
Subject: Re: [dpdk-dev] [PATCH v2 1/3] mempool: add stack (lifo) mempool handler
List-Id: patches and discussions about DPDK

Hi David,

Please find some comments below.

On 05/19/2016 04:48 PM, David Hunt wrote:
> This is a mempool handler that is useful for pipelining apps, where
> the mempool cache doesn't really work - example, where we have one
> core doing rx (and alloc), and another core doing Tx (and return). In
> such a case, the mempool ring simply cycles through all the mbufs,
> resulting in a LLC miss on every mbuf allocated when the number of
> mbufs is large. A stack recycles buffers more effectively in this
> case.
>
> v2: cleanup based on mailing list comments. Mainly removal of
> unnecessary casts and comments.
>
> Signed-off-by: David Hunt
> ---
>  lib/librte_mempool/Makefile            |   1 +
>  lib/librte_mempool/rte_mempool_stack.c | 145 +++++++++++++++++++++++++++++++++
>  2 files changed, 146 insertions(+)
>  create mode 100644 lib/librte_mempool/rte_mempool_stack.c
>
> diff --git a/lib/librte_mempool/Makefile b/lib/librte_mempool/Makefile
> index f19366e..5aa9ef8 100644
> --- a/lib/librte_mempool/Makefile
> +++ b/lib/librte_mempool/Makefile
> @@ -44,6 +44,7 @@ LIBABIVER := 2
>  SRCS-$(CONFIG_RTE_LIBRTE_MEMPOOL) += rte_mempool.c
>  SRCS-$(CONFIG_RTE_LIBRTE_MEMPOOL) += rte_mempool_handler.c
>  SRCS-$(CONFIG_RTE_LIBRTE_MEMPOOL) += rte_mempool_default.c
> +SRCS-$(CONFIG_RTE_LIBRTE_MEMPOOL) += rte_mempool_stack.c
>  # install includes
>  SYMLINK-$(CONFIG_RTE_LIBRTE_MEMPOOL)-include := rte_mempool.h
>
> diff --git a/lib/librte_mempool/rte_mempool_stack.c b/lib/librte_mempool/rte_mempool_stack.c
> new file mode 100644
> index 0000000..6e25028
> --- /dev/null
> +++ b/lib/librte_mempool/rte_mempool_stack.c
> @@ -0,0 +1,145 @@
> +/*-
> + *   BSD LICENSE
> + *
> + *   Copyright(c) 2010-2014 Intel Corporation. All rights reserved.
> + *   All rights reserved.

Should be 2016?

> ...
> +
> +static void *
> +common_stack_alloc(struct rte_mempool *mp)
> +{
> +	struct rte_mempool_common_stack *s;
> +	unsigned n = mp->size;
> +	int size = sizeof(*s) + (n+16)*sizeof(void *);
> +
> +	/* Allocate our local memory structure */
> +	s = rte_zmalloc_socket("common-stack",

"mempool-stack" ?

> +		size,
> +		RTE_CACHE_LINE_SIZE,
> +		mp->socket_id);
> +	if (s == NULL) {
> +		RTE_LOG(ERR, MEMPOOL, "Cannot allocate stack!\n");
> +		return NULL;
> +	}
> +
> +	rte_spinlock_init(&s->sl);
> +
> +	s->size = n;
> +	mp->pool = s;
> +	rte_mempool_set_handler(mp, "stack");

rte_mempool_set_handler() is a user function, it should not be called
here.
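To illustrate, a rough and untested sketch (assuming the
rte_mempool_set_handler() API from this series and the create_empty()
and populate_default() functions from the mempool rework;
create_stack_pool() is just a placeholder name): the application would
select the handler itself, between the creation of the empty pool and
its population, something like:

  static struct rte_mempool *
  create_stack_pool(unsigned int nb_obj, unsigned int obj_size,
          unsigned int cache_size)
  {
          struct rte_mempool *mp;

          /* create an empty pool: no objects allocated yet */
          mp = rte_mempool_create_empty("stack_pool", nb_obj, obj_size,
                  cache_size, 0, SOCKET_ID_ANY, 0);
          if (mp == NULL)
                  return NULL;

          /* the application picks the handler before population */
          if (rte_mempool_set_handler(mp, "stack") < 0 ||
                          rte_mempool_populate_default(mp) < 0) {
                  rte_mempool_free(mp);
                  return NULL;
          }

          return mp;
  }

This way the alloc callback of the handler does not need to know
anything about handler selection.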
> +
> +	return s;
> +}
> +
> +static int common_stack_put(void *p, void * const *obj_table,
> +		unsigned n)
> +{
> +	struct rte_mempool_common_stack *s = p;
> +	void **cache_objs;
> +	unsigned index;
> +
> +	rte_spinlock_lock(&s->sl);
> +	cache_objs = &s->objs[s->len];
> +
> +	/* Is there sufficient space in the stack ? */
> +	if ((s->len + n) > s->size) {
> +		rte_spinlock_unlock(&s->sl);
> +		return -ENOENT;
> +	}

The usual return value for a failing put() is ENOBUFS (see rte_ring).


After reading it, I realize that it's nearly exactly the same code as in
"app/test: test external mempool handler".
http://patchwork.dpdk.org/dev/patchwork/patch/12896/

We should drop one of them. If this stack handler is really useful for a
performance use-case, it could go in librte_mempool.

At first read, the code looks like a demo example: it uses a simple
spinlock for concurrent accesses to the common pool. Maybe the mempool
cache hides this cost; in that case, we could also consider removing the
use of the rte_ring.

Do you have some performance numbers? Do you know if it scales with the
number of cores?

If we can identify the conditions where this mempool handler outperforms
the default handler, it would be valuable to have them in the
documentation.

Regards,
Olivier