From: Vamsi Krishna Attunuru
To: Olivier Matz, dev@dpdk.org
Cc: Anatoly Burakov, Andrew Rybchenko, Ferruh Yigit, "Giridharan, Ganesan",
 Jerin Jacob Kollanukkaran, Kiran Kumar Kokkilagadda, Stephen Hemminger,
 Thomas Monjalon
Date: Thu, 31 Oct 2019 06:54:50 +0000
In-Reply-To: <20191030143619.4007-6-olivier.matz@6wind.com>
References: <20190719133845.32432-1-olivier.matz@6wind.com>
 <20191030143619.4007-1-olivier.matz@6wind.com>
 <20191030143619.4007-6-olivier.matz@6wind.com>
Subject: Re: [dpdk-dev] [EXT] [PATCH v2 5/6] mempool: prevent objects from
 being across pages

Hi Olivier,

Thanks for the reworked patches. With v2, tests with 512 MB and 2 MB page
sizes work fine with the octeontx2 mempool PMD.

One more concern: the octeontx FPA mempool driver has similar
requirements. How do we address that? Can you suggest the best way to
avoid code duplication across PMDs?

Regards,
A Vamsi

> -----Original Message-----
> From: Olivier Matz
> Sent: Wednesday, October 30, 2019 8:06 PM
> To: dev@dpdk.org
> Cc: Anatoly Burakov; Andrew Rybchenko; Ferruh Yigit; Giridharan,
> Ganesan; Jerin Jacob Kollanukkaran; Kiran Kumar Kokkilagadda; Stephen
> Hemminger; Thomas Monjalon; Vamsi Krishna Attunuru
> Subject: [EXT] [PATCH v2 5/6] mempool: prevent objects from being
> across pages
>
> External Email
>
> ----------------------------------------------------------------------
> When populating a mempool, ensure that objects are not located across
> several pages, except if the user did not request IOVA-contiguous
> objects.
>
> Signed-off-by: Vamsi Krishna Attunuru
> Signed-off-by: Olivier Matz
> ---
>  drivers/mempool/octeontx2/Makefile           |   3 +
>  drivers/mempool/octeontx2/meson.build        |   3 +
>  drivers/mempool/octeontx2/otx2_mempool_ops.c | 119 ++++++++++++++++---
>  lib/librte_mempool/rte_mempool.c             |  23 ++--
>  lib/librte_mempool/rte_mempool_ops_default.c |  32 ++++-
>  5 files changed, 147 insertions(+), 33 deletions(-)
>
> diff --git a/drivers/mempool/octeontx2/Makefile
> b/drivers/mempool/octeontx2/Makefile
> index 87cce22c6..d781cbfc6 100644
> --- a/drivers/mempool/octeontx2/Makefile
> +++ b/drivers/mempool/octeontx2/Makefile
> @@ -27,6 +27,9 @@ EXPORT_MAP := rte_mempool_octeontx2_version.map
>
>  LIBABIVER := 1
>
> +# for rte_mempool_get_page_size
> +CFLAGS += -DALLOW_EXPERIMENTAL_API
> +
>  #
>  # all source are stored in SRCS-y
>  #
> diff --git a/drivers/mempool/octeontx2/meson.build
> b/drivers/mempool/octeontx2/meson.build
> index 9fde40f0e..28f9634da 100644
> --- a/drivers/mempool/octeontx2/meson.build
> +++ b/drivers/mempool/octeontx2/meson.build
> @@ -21,3 +21,6 @@ foreach flag: extra_flags
>  endforeach
>
>  deps += ['eal', 'mbuf', 'kvargs', 'bus_pci', 'common_octeontx2', 'mempool']
> +
> +# for rte_mempool_get_page_size
> +allow_experimental_apis = true
> diff --git a/drivers/mempool/octeontx2/otx2_mempool_ops.c
> b/drivers/mempool/octeontx2/otx2_mempool_ops.c
> index d769575f4..47117aec6 100644
> --- a/drivers/mempool/octeontx2/otx2_mempool_ops.c
> +++ b/drivers/mempool/octeontx2/otx2_mempool_ops.c
> @@ -713,12 +713,76 @@ static ssize_t
>  otx2_npa_calc_mem_size(const struct rte_mempool *mp, uint32_t obj_num,
>  		       uint32_t pg_shift, size_t *min_chunk_size, size_t *align)
>  {
> -	/*
> -	 * Simply need space for one more object to be able to
> -	 * fulfill alignment requirements.
> -	 */
> -	return rte_mempool_op_calc_mem_size_default(mp, obj_num + 1, pg_shift,
> -						    min_chunk_size, align);
> +	size_t total_elt_sz;
> +	size_t obj_per_page, pg_sz, objs_in_last_page;
> +	size_t mem_size;
> +
> +	/* derived from rte_mempool_op_calc_mem_size_default() */
> +
> +	total_elt_sz = mp->header_size + mp->elt_size + mp->trailer_size;
> +
> +	if (total_elt_sz == 0) {
> +		mem_size = 0;
> +	} else if (pg_shift == 0) {
> +		/* one object margin to fix alignment */
> +		mem_size = total_elt_sz * (obj_num + 1);
> +	} else {
> +		pg_sz = (size_t)1 << pg_shift;
> +		obj_per_page = pg_sz / total_elt_sz;
> +
> +		/* we need to keep one object to fix alignment */
> +		if (obj_per_page > 0)
> +			obj_per_page--;
> +
> +		if (obj_per_page == 0) {
> +			/*
> +			 * Note that if object size is bigger than page size,
> +			 * then it is assumed that pages are grouped in subsets
> +			 * of physically continuous pages big enough to store
> +			 * at least one object.
> +			 */
> +			mem_size = RTE_ALIGN_CEIL(2 * total_elt_sz,
> +						  pg_sz) * obj_num;
> +		} else {
> +			/* In the best case, the allocator will return a
> +			 * page-aligned address. For example, with 5 objs,
> +			 * the required space is as below:
> +			 * |     page0     |     page1     | page2 (last) |
> +			 * |obj0 |obj1 |xxx|obj2 |obj3 |xxx|obj4|
> +			 * <------------- mem_size ------------->
> +			 */
> +			objs_in_last_page = ((obj_num - 1) % obj_per_page) + 1;
> +			/* room required for the last page */
> +			mem_size = objs_in_last_page * total_elt_sz;
> +			/* room required for other pages */
> +			mem_size += ((obj_num - objs_in_last_page) /
> +				     obj_per_page) << pg_shift;
> +
> +			/* In the worst case, the allocator returns a
> +			 * non-aligned pointer, wasting up to
> +			 * total_elt_sz. Add a margin for that.
> +			 */
> +			mem_size += total_elt_sz - 1;
> +		}
> +	}
> +
> +	*min_chunk_size = total_elt_sz * 2;
> +	*align = RTE_CACHE_LINE_SIZE;
> +
> +	return mem_size;
> +}
> +
> +/* Returns -1 if object crosses a page boundary, else returns 0 */
> +static int check_obj_bounds(char *obj, size_t pg_sz, size_t elt_sz)
> +{
> +	if (pg_sz == 0)
> +		return 0;
> +	if (elt_sz > pg_sz)
> +		return 0;
> +	if (RTE_PTR_ALIGN(obj, pg_sz) != RTE_PTR_ALIGN(obj + elt_sz - 1, pg_sz))
> +		return -1;
> +	return 0;
>  }
>
>  static int
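As a side note, the new size computation above is easy to sanity-check
numerically. The following standalone sketch mirrors the same arithmetic
with made-up sizes (1 KB objects, 4 KB pages, 5 objects) rather than
calling into the patch; ALIGN_CEIL is my stand-in for RTE_ALIGN_CEIL so
it builds outside DPDK:

#include <stdio.h>
#include <stdint.h>

#define ALIGN_CEIL(v, a) ((((v) + (a) - 1) / (a)) * (a))

int main(void)
{
	size_t total_elt_sz = 1024;	/* header + elt + trailer */
	uint32_t pg_shift = 12;		/* 4 KB pages */
	size_t pg_sz = (size_t)1 << pg_shift;
	size_t obj_num = 5;
	size_t obj_per_page = pg_sz / total_elt_sz;	/* 4 */
	size_t objs_in_last_page, mem_size;

	/* keep one object of margin to fix alignment: 4 -> 3 */
	if (obj_per_page > 0)
		obj_per_page--;

	if (obj_per_page == 0) {
		mem_size = ALIGN_CEIL(2 * total_elt_sz, pg_sz) * obj_num;
	} else {
		objs_in_last_page = ((obj_num - 1) % obj_per_page) + 1;	/* 2 */
		/* room for the last page */
		mem_size = objs_in_last_page * total_elt_sz;
		/* room for the other, full pages */
		mem_size += ((obj_num - objs_in_last_page) /
			     obj_per_page) << pg_shift;
		/* margin for a non-page-aligned allocation */
		mem_size += total_elt_sz - 1;
	}

	printf("mem_size = %zu\n", mem_size);	/* 2048 + 4096 + 1023 = 7167 */
	return 0;
}

With three usable objects per page, five objects need one full page, two
objects' worth of room in the last page, and one object of slack in case
the allocator returns a non-page-aligned address.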
> @@ -726,8 +790,12 @@ otx2_npa_populate(struct rte_mempool *mp, unsigned int max_objs, void *vaddr,
>  		  rte_iova_t iova, size_t len,
>  		  rte_mempool_populate_obj_cb_t *obj_cb, void *obj_cb_arg)
>  {
> -	size_t total_elt_sz;
> +	char *va = vaddr;
> +	size_t total_elt_sz, pg_sz;
>  	size_t off;
> +	unsigned int i;
> +	void *obj;
> +	int ret;
>
>  	if (iova == RTE_BAD_IOVA)
>  		return -EINVAL;
> @@ -735,22 +803,45 @@ otx2_npa_populate(struct rte_mempool *mp, unsigned int max_objs, void *vaddr,
>  	total_elt_sz = mp->header_size + mp->elt_size + mp->trailer_size;
>
>  	/* Align object start address to a multiple of total_elt_sz */
> -	off = total_elt_sz - ((uintptr_t)vaddr % total_elt_sz);
> +	off = total_elt_sz - (((uintptr_t)(va - 1) % total_elt_sz) + 1);
>
>  	if (len < off)
>  		return -EINVAL;
>
> -	vaddr = (char *)vaddr + off;
> -	iova += off;
> -	len -= off;
> -
> -	npa_lf_aura_op_range_set(mp->pool_id, iova, iova + len);
> +	npa_lf_aura_op_range_set(mp->pool_id, iova + off, iova + len - off);
>
>  	if (npa_lf_aura_range_update_check(mp->pool_id) < 0)
>  		return -EBUSY;
>
> -	return rte_mempool_op_populate_default(mp, max_objs, vaddr, iova, len,
> -					       obj_cb, obj_cb_arg);
> +	/* the following is derived from rte_mempool_op_populate_default() */
> +
> +	ret = rte_mempool_get_page_size(mp, &pg_sz);
> +	if (ret < 0)
> +		return ret;
> +
> +	for (i = 0; i < max_objs; i++) {
> +		/* avoid objects to cross page boundaries, and align
> +		 * offset to a multiple of total_elt_sz.
> +		 */
> +		if (check_obj_bounds(va + off, pg_sz, total_elt_sz) < 0) {
> +			off += RTE_PTR_ALIGN_CEIL(va + off, pg_sz) - (va + off);
> +			off += total_elt_sz - (((uintptr_t)(va + off - 1) %
> +						total_elt_sz) + 1);
> +		}
> +
> +		if (off + total_elt_sz > len)
> +			break;
> +
> +		off += mp->header_size;
> +		obj = va + off;
> +		obj_cb(mp, obj_cb_arg, obj,
> +		       (iova == RTE_BAD_IOVA) ? RTE_BAD_IOVA : (iova + off));
> +		rte_mempool_ops_enqueue_bulk(mp, &obj, 1);
> +		off += mp->elt_size + mp->trailer_size;
> +	}
> +
> +	return i;
>  }
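One detail that is easy to miss in the populate rework above: the initial
alignment offset is now computed from (va - 1), so a start address that is
already a multiple of total_elt_sz yields off = 0 instead of wasting a
full object slot. A quick standalone sketch with made-up addresses, not
part of the patch:

#include <stdio.h>
#include <stdint.h>

int main(void)
{
	const size_t total_elt_sz = 1024;
	const uintptr_t va[2] = { 4096, 4196 };	/* aligned, unaligned */
	unsigned int i;

	for (i = 0; i < 2; i++) {
		/* old formula: full object skipped when already aligned */
		size_t off_old = total_elt_sz - (va[i] % total_elt_sz);
		/* new formula: 0 when aligned, same value otherwise */
		size_t off_new = total_elt_sz -
				 (((va[i] - 1) % total_elt_sz) + 1);
		printf("va=%#lx old=%zu new=%zu\n",
		       (unsigned long)va[i], off_old, off_new);
	}
	/*
	 * prints: va=0x1000 old=1024 new=0
	 *         va=0x1064 old=924 new=924
	 */
	return 0;
}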
> */ >=20 > need_iova_contig_obj =3D !(mp->flags & > MEMPOOL_F_NO_IOVA_CONTIG); diff --git > a/lib/librte_mempool/rte_mempool_ops_default.c > b/lib/librte_mempool/rte_mempool_ops_default.c > index f6aea7662..e5cd4600f 100644 > --- a/lib/librte_mempool/rte_mempool_ops_default.c > +++ b/lib/librte_mempool/rte_mempool_ops_default.c > @@ -61,21 +61,47 @@ rte_mempool_op_calc_mem_size_default(const struct > rte_mempool *mp, > return mem_size; > } >=20 > +/* Returns -1 if object crosses a page boundary, else returns 0 */ > +static int check_obj_bounds(char *obj, size_t pg_sz, size_t elt_sz) { > + if (pg_sz =3D=3D 0) > + return 0; > + if (elt_sz > pg_sz) > + return 0; > + if (RTE_PTR_ALIGN(obj, pg_sz) !=3D RTE_PTR_ALIGN(obj + elt_sz - 1, > pg_sz)) > + return -1; > + return 0; > +} > + > int > rte_mempool_op_populate_default(struct rte_mempool *mp, unsigned int > max_objs, > void *vaddr, rte_iova_t iova, size_t len, > rte_mempool_populate_obj_cb_t *obj_cb, void *obj_cb_arg) { > - size_t total_elt_sz; > + char *va =3D vaddr; > + size_t total_elt_sz, pg_sz; > size_t off; > unsigned int i; > void *obj; > + int ret; > + > + ret =3D rte_mempool_get_page_size(mp, &pg_sz); > + if (ret < 0) > + return ret; >=20 > total_elt_sz =3D mp->header_size + mp->elt_size + mp->trailer_size; >=20 > - for (off =3D 0, i =3D 0; off + total_elt_sz <=3D len && i < max_objs; i= ++) { > + for (off =3D 0, i =3D 0; i < max_objs; i++) { > + /* avoid objects to cross page boundaries */ > + if (check_obj_bounds(va + off, pg_sz, total_elt_sz) < 0) > + off +=3D RTE_PTR_ALIGN_CEIL(va + off, pg_sz) - (va + > off); > + > + if (off + total_elt_sz > len) > + break; > + > off +=3D mp->header_size; > - obj =3D (char *)vaddr + off; > + obj =3D va + off; > obj_cb(mp, obj_cb_arg, obj, > (iova =3D=3D RTE_BAD_IOVA) ? RTE_BAD_IOVA : (iova + off)); > rte_mempool_ops_enqueue_bulk(mp, &obj, 1); > -- > 2.20.1