Date: Mon, 19 Mar 2018 18:04:20 +0100
From: Olivier Matz
To: Andrew Rybchenko
Cc: dev@dpdk.org
Message-ID: <20180319170420.usqetqzfquld2czi@platinum>
References: <1516713372-10572-1-git-send-email-arybchenko@solarflare.com>
 <1520696382-16400-1-git-send-email-arybchenko@solarflare.com>
 <1520696382-16400-3-git-send-email-arybchenko@solarflare.com>
In-Reply-To: <1520696382-16400-3-git-send-email-arybchenko@solarflare.com>
Subject: Re: [dpdk-dev] [PATCH v1 2/9] mempool: add op to populate objects
 using provided memory
List-Id: DPDK patches and discussions

On Sat, Mar 10, 2018 at 03:39:35PM +0000, Andrew Rybchenko wrote:
> The callback allows to customize how objects are stored in the
> memory chunk. Default implementation of the callback which simply
> puts objects one by one is available.
>
> Signed-off-by: Andrew Rybchenko

[...]
> --- a/lib/librte_mempool/rte_mempool.c
> +++ b/lib/librte_mempool/rte_mempool.c
> @@ -99,7 +99,8 @@ static unsigned optimize_object_size(unsigned obj_size)
>  }
>
>  static void
> -mempool_add_elem(struct rte_mempool *mp, void *obj, rte_iova_t iova)
> +mempool_add_elem(struct rte_mempool *mp, __rte_unused void *opaque,
> +		 void *obj, rte_iova_t iova)
>  {
>  	struct rte_mempool_objhdr *hdr;
>  	struct rte_mempool_objtlr *tlr __rte_unused;
> @@ -116,9 +117,6 @@ mempool_add_elem(struct rte_mempool *mp, void *obj, rte_iova_t iova)
>  	tlr = __mempool_get_trailer(obj);
>  	tlr->cookie = RTE_MEMPOOL_TRAILER_COOKIE;
>  #endif
> -
> -	/* enqueue in ring */
> -	rte_mempool_ops_enqueue_bulk(mp, &obj, 1);
>  }
>
>  /* call obj_cb() for each mempool element */

Before this patch, mempool_add_elem() did three things to add an
object into a mempool:

1- write the object header and trailer
2- chain it into the list of objects
3- add it into the ring/stack/whatever (= enqueue)

Now the enqueue is done in rte_mempool_op_populate_default(), or will
be done in the driver. I'm not sure it's a good idea to separate 3-
from 2-, because an object that is chained into the list is expected
to be in the ring/stack too. The risk of de-synchronization is
increased by the fact that ops->populate() can be provided by the
driver, with mempool_add_elem() passed to it as a callback pointer.

It's not clear to me why rte_mempool_ops_enqueue_bulk() is removed
from mempool_add_elem().
> @@ -396,16 +394,13 @@ rte_mempool_populate_iova(struct rte_mempool *mp, char *vaddr,
>  	else
>  		off = RTE_PTR_ALIGN_CEIL(vaddr, RTE_CACHE_LINE_SIZE) - vaddr;
>
> -	while (off + total_elt_sz <= len && mp->populated_size < mp->size) {
> -		off += mp->header_size;
> -		if (iova == RTE_BAD_IOVA)
> -			mempool_add_elem(mp, (char *)vaddr + off,
> -					 RTE_BAD_IOVA);
> -		else
> -			mempool_add_elem(mp, (char *)vaddr + off, iova + off);
> -		off += mp->elt_size + mp->trailer_size;
> -		i++;
> -	}
> +	if (off > len)
> +		return -EINVAL;

I think there is a memory leak here (memhdr), but it's my fault ;)
I introduced similar code in commit 84121f1971:

	if (i == 0)
		return -EINVAL;

I can send a patch for it if you want.