From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Thu, 31 Oct 2019 09:29:50 +0100
From: Olivier Matz
To: Jerin Jacob
Cc: Vamsi Krishna Attunuru, dev@dpdk.org, Anatoly Burakov,
 Andrew Rybchenko, Ferruh Yigit, "Giridharan, Ganesan",
 Jerin Jacob Kollanukkaran, Kiran Kumar Kokkilagadda,
 Stephen Hemminger, Thomas Monjalon
Message-ID: <20191031082950.ewgesitxnsjufu4h@platinum>
References: <20190719133845.32432-1-olivier.matz@6wind.com>
 <20191030143619.4007-1-olivier.matz@6wind.com>
 <20191030143619.4007-6-olivier.matz@6wind.com>
Subject: Re: [dpdk-dev] [EXT] [PATCH v2 5/6] mempool: prevent objects from
 being across pages
List-Id: DPDK patches and discussions

Hi Jerin,

On Thu, Oct 31, 2019 at 01:49:10PM +0530, Jerin Jacob wrote:
> On Thu, Oct 31, 2019 at 12:25 PM Vamsi Krishna Attunuru wrote:
> >
> > Hi Olivier,
> >
> > Thanks for the reworked patches.
> > With v2, tests with 512MB and 2MB page sizes work fine with the
> > octeontx2 mempool PMD. One more concern: the octeontx fpa mempool
> > driver has similar requirements. How do we address that? Can you
> > suggest the best way to avoid code duplication in the PMDs?
>
> # Actually, both drivers don't call any HW-specific function in those ops.
> # Is it possible to move the code under "/* derived from
> rte_mempool_op_calc_mem_size_default() */" to a static function in
> the mempool lib?
> i.e. it would keep the generic rte_mempool_op_calc_mem_size_default()
> clean, and we could introduce another variant of
> rte_mempool_op_calc_mem_size_default() for this specific requirement,
> which includes the static generic function.
>
> I don't think it is a one-off requirement to have the object size
> aligned to the object start address; it applies to all HW-based
> mempool implementations.
>
> The reason the HW uses such a scheme is the following:
> # If the object size is aligned to the object block size, then when
> the HW wants to enqueue() a packet back to the pool, it can enqueue
> the mbuf address to the mempool irrespective of the free pointer
> offset.
>
> Example:
> X - mbuf start address
> Y - the object block size
> X + n <- the packet pointer used to send the packet
>
> When submitting the packet to the HW, SW only needs to provide X + n,
> and when the HW frees it, it can derive X (the mbuf pointer address)
> with the following arithmetic:
> X = (X + n) - ((X + n) MOD Y)
>
> Hi Olivier,
> It is not worth going back and forth on this code organization. You
> can decide on a scheme, we will follow that.

Thanks for the explanation. Our mails crossed each other. Please see my
answer to Vamsi.
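
As a side note, here is a minimal sketch of the derivation you describe,
just to make sure we are talking about the same thing (the values of Y
and n below are made up for the example, they are not taken from the
driver):

  #include <stdint.h>
  #include <stdio.h>

  int main(void)
  {
          /* Hypothetical values: Y is the object block size programmed
           * into the HW pool, n is any offset inside the object (e.g.
           * the packet data pointer inside the mbuf).
           */
          const uintptr_t Y = 2176;
          const uintptr_t X = 7 * Y;   /* mbuf start, multiple of Y */
          const uintptr_t n = 128;

          /* what the HW computes on free: X = (X + n) - ((X + n) MOD Y) */
          uintptr_t derived = (X + n) - ((X + n) % Y);

          printf("derived=%#lx expected=%#lx\n",
                 (unsigned long)derived, (unsigned long)X);
          return 0;
  }

The derivation only yields X when the object start address is a multiple
of Y, which is exactly why the object size has to be aligned to the block
size for these pools.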
> >
> >
> > Regards
> > A Vamsi
> >
> > > -----Original Message-----
> > > From: Olivier Matz
> > > Sent: Wednesday, October 30, 2019 8:06 PM
> > > To: dev@dpdk.org
> > > Cc: Anatoly Burakov; Andrew Rybchenko; Ferruh Yigit;
> > > Giridharan, Ganesan; Jerin Jacob Kollanukkaran;
> > > Kiran Kumar Kokkilagadda; Stephen Hemminger; Thomas Monjalon;
> > > Vamsi Krishna Attunuru
> > > Subject: [EXT] [PATCH v2 5/6] mempool: prevent objects from being
> > > across pages
> > >
> > > External Email
> > >
> > > ----------------------------------------------------------------------
> > > When populating a mempool, ensure that objects are not located across
> > > several pages, except if the user did not request IOVA-contiguous
> > > objects.
> > >
> > > Signed-off-by: Vamsi Krishna Attunuru
> > > Signed-off-by: Olivier Matz
> > > ---
> > >  drivers/mempool/octeontx2/Makefile           |   3 +
> > >  drivers/mempool/octeontx2/meson.build        |   3 +
> > >  drivers/mempool/octeontx2/otx2_mempool_ops.c | 119 ++++++++++++++++---
> > >  lib/librte_mempool/rte_mempool.c             |  23 ++--
> > >  lib/librte_mempool/rte_mempool_ops_default.c |  32 ++++-
> > >  5 files changed, 147 insertions(+), 33 deletions(-)
> > >
> > > diff --git a/drivers/mempool/octeontx2/Makefile
> > > b/drivers/mempool/octeontx2/Makefile
> > > index 87cce22c6..d781cbfc6 100644
> > > --- a/drivers/mempool/octeontx2/Makefile
> > > +++ b/drivers/mempool/octeontx2/Makefile
> > > @@ -27,6 +27,9 @@ EXPORT_MAP := rte_mempool_octeontx2_version.map
> > >
> > >  LIBABIVER := 1
> > >
> > > +# for rte_mempool_get_page_size
> > > +CFLAGS += -DALLOW_EXPERIMENTAL_API
> > > +
> > >  #
> > >  # all source are stored in SRCS-y
> > >  #
> > > diff --git a/drivers/mempool/octeontx2/meson.build
> > > b/drivers/mempool/octeontx2/meson.build
> > > index 9fde40f0e..28f9634da 100644
> > > --- a/drivers/mempool/octeontx2/meson.build
> > > +++ b/drivers/mempool/octeontx2/meson.build
> > > @@ -21,3 +21,6 @@ foreach flag: extra_flags
> > >  endforeach
> > >
> > >  deps += ['eal', 'mbuf', 'kvargs', 'bus_pci', 'common_octeontx2', 'mempool']
> > > +
> > > +# for rte_mempool_get_page_size
> > > +allow_experimental_apis = true
> > > diff --git a/drivers/mempool/octeontx2/otx2_mempool_ops.c
> > > b/drivers/mempool/octeontx2/otx2_mempool_ops.c
> > > index d769575f4..47117aec6 100644
> > > --- a/drivers/mempool/octeontx2/otx2_mempool_ops.c
> > > +++ b/drivers/mempool/octeontx2/otx2_mempool_ops.c
> > > @@ -713,12 +713,76 @@ static ssize_t
> > >  otx2_npa_calc_mem_size(const struct rte_mempool *mp, uint32_t obj_num,
> > >  		       uint32_t pg_shift, size_t *min_chunk_size, size_t *align)
> > >  {
> > > -	/*
> > > -	 * Simply need space for one more object to be able to
> > > -	 * fulfill alignment requirements.
> > > -	 */
> > > -	return rte_mempool_op_calc_mem_size_default(mp, obj_num + 1, pg_shift,
> > > -						    min_chunk_size, align);
> > > +	size_t total_elt_sz;
> > > +	size_t obj_per_page, pg_sz, objs_in_last_page;
> > > +	size_t mem_size;
> > > +
> > > +	/* derived from rte_mempool_op_calc_mem_size_default() */
> > > +
> > > +	total_elt_sz = mp->header_size + mp->elt_size + mp->trailer_size;
> > > +
> > > +	if (total_elt_sz == 0) {
> > > +		mem_size = 0;
> > > +	} else if (pg_shift == 0) {
> > > +		/* one object margin to fix alignment */
> > > +		mem_size = total_elt_sz * (obj_num + 1);
> > > +	} else {
> > > +		pg_sz = (size_t)1 << pg_shift;
> > > +		obj_per_page = pg_sz / total_elt_sz;
> > > +
> > > +		/* we need to keep one object to fix alignment */
> > > +		if (obj_per_page > 0)
> > > +			obj_per_page--;
> > > +
> > > +		if (obj_per_page == 0) {
> > > +			/*
> > > +			 * Note that if object size is bigger than page size,
> > > +			 * then it is assumed that pages are grouped in subsets
> > > +			 * of physically continuous pages big enough to store
> > > +			 * at least one object.
> > > +			 */
> > > +			mem_size = RTE_ALIGN_CEIL(2 * total_elt_sz,
> > > +						  pg_sz) * obj_num;
> > > +		} else {
> > > +			/* In the best case, the allocator will return a
> > > +			 * page-aligned address. For example, with 5 objs,
> > > +			 * the required space is as below:
> > > +			 *  |  page0  |  page1  |  page2 (last) |
> > > +			 *  |obj0 |obj1 |xxx|obj2 |obj3 |xxx|obj4|
> > > +			 *  <------------- mem_size ------------->
> > > +			 */
> > > +			objs_in_last_page = ((obj_num - 1) % obj_per_page) + 1;
> > > +			/* room required for the last page */
> > > +			mem_size = objs_in_last_page * total_elt_sz;
> > > +			/* room required for other pages */
> > > +			mem_size += ((obj_num - objs_in_last_page) /
> > > +				     obj_per_page) << pg_shift;
> > > +
> > > +			/* In the worst case, the allocator returns a
> > > +			 * non-aligned pointer, wasting up to
> > > +			 * total_elt_sz. Add a margin for that.
> > > +			 */
> > > +			mem_size += total_elt_sz - 1;
> > > +		}
> > > +	}
> > > +
> > > +	*min_chunk_size = total_elt_sz * 2;
> > > +	*align = RTE_CACHE_LINE_SIZE;
> > > +
> > > +	return mem_size;
> > > +}
> > > +
> > > +/* Returns -1 if object crosses a page boundary, else returns 0 */
> > > +static int check_obj_bounds(char *obj, size_t pg_sz, size_t elt_sz)
> > > +{
> > > +	if (pg_sz == 0)
> > > +		return 0;
> > > +	if (elt_sz > pg_sz)
> > > +		return 0;
> > > +	if (RTE_PTR_ALIGN(obj, pg_sz) != RTE_PTR_ALIGN(obj + elt_sz - 1, pg_sz))
> > > +		return -1;
> > > +	return 0;
> > >  }
> > >
> > >  static int
> > > @@ -726,8 +790,12 @@ otx2_npa_populate(struct rte_mempool *mp, unsigned int max_objs, void *vaddr,
> > >  		  rte_iova_t iova, size_t len,
> > >  		  rte_mempool_populate_obj_cb_t *obj_cb, void *obj_cb_arg)
> > >  {
> > > -	size_t total_elt_sz;
> > > +	char *va = vaddr;
> > > +	size_t total_elt_sz, pg_sz;
> > >  	size_t off;
> > > +	unsigned int i;
> > > +	void *obj;
> > > +	int ret;
> > >
> > >  	if (iova == RTE_BAD_IOVA)
> > >  		return -EINVAL;
> > > @@ -735,22 +803,45 @@ otx2_npa_populate(struct rte_mempool *mp, unsigned int max_objs, void *vaddr,
> > >  	total_elt_sz = mp->header_size + mp->elt_size + mp->trailer_size;
> > >
> > >  	/* Align object start address to a multiple of total_elt_sz */
> > > -	off = total_elt_sz - ((uintptr_t)vaddr % total_elt_sz);
> > > +	off = total_elt_sz - (((uintptr_t)(va - 1) % total_elt_sz) + 1);
> > >
> > >  	if (len < off)
> > >  		return -EINVAL;
> > >
> > > -	vaddr = (char *)vaddr + off;
> > > -	iova += off;
> > > -	len -= off;
> > >
> > > -	npa_lf_aura_op_range_set(mp->pool_id, iova, iova + len);
> > > +	npa_lf_aura_op_range_set(mp->pool_id, iova + off, iova + len - off);
> > >
> > >  	if (npa_lf_aura_range_update_check(mp->pool_id) < 0)
> > >  		return -EBUSY;
> > >
> > > -	return rte_mempool_op_populate_default(mp, max_objs, vaddr, iova, len,
> > > -					       obj_cb, obj_cb_arg);
> > > +	/* the following is derived from rte_mempool_op_populate_default() */
> > > +
> > > +	ret = rte_mempool_get_page_size(mp, &pg_sz);
> > > +	if (ret < 0)
> > > +		return ret;
> > > +
> > > +	for (i = 0; i < max_objs; i++) {
> > > +		/* avoid objects to cross page boundaries, and align
> > > +		 * offset to a multiple of total_elt_sz.
> > > +		 */
> > > +		if (check_obj_bounds(va + off, pg_sz, total_elt_sz) < 0) {
> > > +			off += RTE_PTR_ALIGN_CEIL(va + off, pg_sz) - (va + off);
> > > +			off += total_elt_sz - (((uintptr_t)(va + off - 1) %
> > > +					       total_elt_sz) + 1);
> > > +		}
> > > +
> > > +		if (off + total_elt_sz > len)
> > > +			break;
> > > +
> > > +		off += mp->header_size;
> > > +		obj = va + off;
> > > +		obj_cb(mp, obj_cb_arg, obj,
> > > +		       (iova == RTE_BAD_IOVA) ? RTE_BAD_IOVA : (iova + off));
> > > +		rte_mempool_ops_enqueue_bulk(mp, &obj, 1);
> > > +		off += mp->elt_size + mp->trailer_size;
> > > +	}
> > > +
> > > +	return i;
> > >  }
> > >
> > >  static struct rte_mempool_ops otx2_npa_ops = {
> > > diff --git a/lib/librte_mempool/rte_mempool.c b/lib/librte_mempool/rte_mempool.c
> > > index 758c5410b..d3db9273d 100644
> > > --- a/lib/librte_mempool/rte_mempool.c
> > > +++ b/lib/librte_mempool/rte_mempool.c
> > > @@ -431,8 +431,6 @@ rte_mempool_get_page_size(struct rte_mempool *mp, size_t *pg_sz)
> > >
> > >  	if (!need_iova_contig_obj)
> > >  		*pg_sz = 0;
> > > -	else if (!alloc_in_ext_mem && rte_eal_iova_mode() == RTE_IOVA_VA)
> > > -		*pg_sz = 0;
> > >  	else if (rte_eal_has_hugepages() || alloc_in_ext_mem)
> > >  		*pg_sz = get_min_page_size(mp->socket_id);
> > >  	else
> > > @@ -481,17 +479,15 @@ rte_mempool_populate_default(struct rte_mempool *mp)
> > >  	 * then just set page shift and page size to 0, because the user has
> > >  	 * indicated that there's no need to care about anything.
> > >  	 *
> > > -	 * if we do need contiguous objects, there is also an option to reserve
> > > -	 * the entire mempool memory as one contiguous block of memory, in
> > > -	 * which case the page shift and alignment wouldn't matter as well.
> > > +	 * if we do need contiguous objects (if a mempool driver has its
> > > +	 * own calc_size() method returning min_chunk_size = mem_size),
> > > +	 * there is also an option to reserve the entire mempool memory
> > > +	 * as one contiguous block of memory.
> > >  	 *
> > >  	 * if we require contiguous objects, but not necessarily the entire
> > > -	 * mempool reserved space to be contiguous, then there are two
> > > -	 * options.
> > > -	 *
> > > -	 * if our IO addresses are virtual, not actual physical (IOVA as VA
> > > -	 * case), then no page shift needed - our memory allocation will give us
> > > -	 * contiguous IO memory as far as the hardware is concerned, so
> > > -	 * act as if we're getting contiguous memory.
> > > +	 * mempool reserved space to be contiguous, pg_sz will be != 0,
> > > +	 * and the default ops->populate() will take care of not placing
> > > +	 * objects across pages.
> > >  	 *
> > >  	 * if our IO addresses are physical, we may get memory from bigger
> > >  	 * pages, or we might get memory from smaller pages, and how much of it
> > > @@ -504,11 +500,6 @@ rte_mempool_populate_default(struct rte_mempool *mp)
> > >  	 *
> > >  	 * If we fail to get enough contiguous memory, then we'll go and
> > >  	 * reserve space in smaller chunks.
> > > -	 *
> > > -	 * We also have to take into account the fact that memory that we're
> > > -	 * going to allocate from can belong to an externally allocated memory
> > > -	 * area, in which case the assumption of IOVA as VA mode being
> > > -	 * synonymous with IOVA contiguousness will not hold.
> > >  	 */
> > >
> > >  	need_iova_contig_obj = !(mp->flags & MEMPOOL_F_NO_IOVA_CONTIG);
> > > diff --git a/lib/librte_mempool/rte_mempool_ops_default.c
> > > b/lib/librte_mempool/rte_mempool_ops_default.c
> > > index f6aea7662..e5cd4600f 100644
> > > --- a/lib/librte_mempool/rte_mempool_ops_default.c
> > > +++ b/lib/librte_mempool/rte_mempool_ops_default.c
> > > @@ -61,21 +61,47 @@ rte_mempool_op_calc_mem_size_default(const struct rte_mempool *mp,
> > >  	return mem_size;
> > >  }
> > >
> > > +/* Returns -1 if object crosses a page boundary, else returns 0 */
> > > +static int check_obj_bounds(char *obj, size_t pg_sz, size_t elt_sz)
> > > +{
> > > +	if (pg_sz == 0)
> > > +		return 0;
> > > +	if (elt_sz > pg_sz)
> > > +		return 0;
> > > +	if (RTE_PTR_ALIGN(obj, pg_sz) != RTE_PTR_ALIGN(obj + elt_sz - 1, pg_sz))
> > > +		return -1;
> > > +	return 0;
> > > +}
> > > +
> > >  int
> > >  rte_mempool_op_populate_default(struct rte_mempool *mp, unsigned int max_objs,
> > >  		void *vaddr, rte_iova_t iova, size_t len,
> > >  		rte_mempool_populate_obj_cb_t *obj_cb, void *obj_cb_arg)
> > >  {
> > > -	size_t total_elt_sz;
> > > +	char *va = vaddr;
> > > +	size_t total_elt_sz, pg_sz;
> > >  	size_t off;
> > >  	unsigned int i;
> > >  	void *obj;
> > > +	int ret;
> > > +
> > > +	ret = rte_mempool_get_page_size(mp, &pg_sz);
> > > +	if (ret < 0)
> > > +		return ret;
> > >
> > >  	total_elt_sz = mp->header_size + mp->elt_size + mp->trailer_size;
> > >
> > > -	for (off = 0, i = 0; off + total_elt_sz <= len && i < max_objs; i++) {
> > > +	for (off = 0, i = 0; i < max_objs; i++) {
> > > +		/* avoid objects to cross page boundaries */
> > > +		if (check_obj_bounds(va + off, pg_sz, total_elt_sz) < 0)
> > > +			off += RTE_PTR_ALIGN_CEIL(va + off, pg_sz) - (va + off);
> > > +
> > > +		if (off + total_elt_sz > len)
> > > +			break;
> > > +
> > >  		off += mp->header_size;
> > > -		obj = (char *)vaddr + off;
> > > +		obj = va + off;
> > >  		obj_cb(mp, obj_cb_arg, obj,
> > >  		       (iova == RTE_BAD_IOVA) ? RTE_BAD_IOVA : (iova + off));
> > >  		rte_mempool_ops_enqueue_bulk(mp, &obj, 1);
> > > --
> > > 2.20.1
> >
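
For reference, the page-crossing check used twice in the patch can be
exercised standalone. Below is a small self-contained equivalent of
check_obj_bounds() (the RTE_PTR_ALIGN macros are replaced by plain mask
arithmetic, which assumes pg_sz is a power of two; the page and object
sizes are hypothetical values for the example):

  #include <stdint.h>
  #include <stdio.h>

  /* -1 if [obj, obj + elt_sz) crosses a pg_sz boundary, else 0 */
  static int crosses_page(uintptr_t obj, size_t pg_sz, size_t elt_sz)
  {
          if (pg_sz == 0)
                  return 0;
          if (elt_sz > pg_sz)
                  return 0;
          /* floor-align first and last byte to the page; must match */
          if ((obj & ~(uintptr_t)(pg_sz - 1)) !=
              ((obj + elt_sz - 1) & ~(uintptr_t)(pg_sz - 1)))
                  return -1;
          return 0;
  }

  int main(void)
  {
          const size_t pg_sz = 4096;   /* hypothetical page size */
          const size_t elt_sz = 2176;  /* hypothetical object size */
          const uintptr_t page = 0x100000;

          /* the first object of a page fits, the next one would cross */
          printf("%d\n", crosses_page(page, pg_sz, elt_sz));          /* 0 */
          printf("%d\n", crosses_page(page + elt_sz, pg_sz, elt_sz)); /* -1 */
          return 0;
  }

When the check fires, the default populate() skips ahead to the next page
boundary (and the otx2 variant additionally re-aligns the offset to a
multiple of total_elt_sz), so an object never straddles two pages.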