From: Andrew Rybchenko
To: Olivier Matz
CC: dev@dpdk.org
Date: Wed, 21 Mar 2018 10:05:54 +0300
Subject: Re: [dpdk-dev] [PATCH v1 2/9] mempool: add op to populate objects using provided memory
List-Id: DPDK patches and discussions

On 03/19/2018 08:04 PM, Olivier Matz wrote:
> On Sat, Mar 10, 2018 at 03:39:35PM +0000, 
Andrew Rybchenko wrote:
>> The callback allows customizing how objects are stored in the
>> memory chunk. A default implementation of the callback, which simply
>> puts objects one by one, is available.
>>
>> Signed-off-by: Andrew Rybchenko
> [...]
>
>> --- a/lib/librte_mempool/rte_mempool.c
>> +++ b/lib/librte_mempool/rte_mempool.c
>> @@ -99,7 +99,8 @@ static unsigned optimize_object_size(unsigned obj_size)
>>  }
>>
>>  static void
>> -mempool_add_elem(struct rte_mempool *mp, void *obj, rte_iova_t iova)
>> +mempool_add_elem(struct rte_mempool *mp, __rte_unused void *opaque,
>> +		 void *obj, rte_iova_t iova)
>>  {
>>  	struct rte_mempool_objhdr *hdr;
>>  	struct rte_mempool_objtlr *tlr __rte_unused;
>> @@ -116,9 +117,6 @@ mempool_add_elem(struct rte_mempool *mp, void *obj, rte_iova_t iova)
>>  	tlr = __mempool_get_trailer(obj);
>>  	tlr->cookie = RTE_MEMPOOL_TRAILER_COOKIE;
>>  #endif
>> -
>> -	/* enqueue in ring */
>> -	rte_mempool_ops_enqueue_bulk(mp, &obj, 1);
>>  }
>>
>>  /* call obj_cb() for each mempool element */
> Before this patch, the purpose of mempool_add_elem() was to add
> an object into a mempool:
>  1- write object header and trailers
>  2- chain it into the list of objects
>  3- add it into the ring/stack/whatever (=enqueue)
>
> Now, the enqueue is done in rte_mempool_op_populate_default() or will be
> done in the driver. I'm not sure it's a good idea to separate 3- from
> 2-, because an object that is chained into the list is expected to be
> in the ring/stack too.

When an object is dequeued, it is still chained into the list, but no
longer in the ring/stack. The separation exists because the callback is
used for generic mempool housekeeping, whereas enqueue is a
driver-specific operation.

> This risk of mis-synchronization is also enforced by the fact that
> ops->populate() can be provided by the driver and mempool_add_elem() is
> passed as a callback pointer.
>
> It's not clear to me why rte_mempool_ops_enqueue_bulk() is
> removed from mempool_add_elem(). 
The idea was that it could be more efficient (and probably the only way)
to do the first enqueue inside the driver. In theory, the bucket mempool
could initialize and enqueue full buckets instead of objects one by one.
However, in the end it turned out to be easier to reuse the default
populate callback and enqueue operation. So, now I have no strong opinion
and agree with your arguments; that's why I've tried to highlight it in
the rte_mempool_populate_t description. Even an explicit description does
not always help...

So, should I move the enqueue back into the callback, or leave it as is
in my patches?

>> @@ -396,16 +394,13 @@ rte_mempool_populate_iova(struct rte_mempool *mp, char *vaddr,
>>  	else
>>  		off = RTE_PTR_ALIGN_CEIL(vaddr, RTE_CACHE_LINE_SIZE) - vaddr;
>>
>> -	while (off + total_elt_sz <= len && mp->populated_size < mp->size) {
>> -		off += mp->header_size;
>> -		if (iova == RTE_BAD_IOVA)
>> -			mempool_add_elem(mp, (char *)vaddr + off,
>> -					 RTE_BAD_IOVA);
>> -		else
>> -			mempool_add_elem(mp, (char *)vaddr + off, iova + off);
>> -		off += mp->elt_size + mp->trailer_size;
>> -		i++;
>> -	}
>> +	if (off > len)
>> +		return -EINVAL;

> I think there is a memory leak here (memhdr), but it's my fault ;)
> I introduced a similar code in commit 84121f1971:
>
>  if (i == 0)
>  	return -EINVAL;
>
> I can send a patch for it if you want.

This one is yours, the one above is mine :)
Don't worry, I'll submit a separate pre-patch to fix it with the
appropriate Fixes tag and Cc.