From: Vamsi Krishna Attunuru
To: Olivier Matz, dev@dpdk.org
Cc: Anatoly Burakov, Andrew Rybchenko, Ferruh Yigit, "Giridharan, Ganesan",
 Jerin Jacob Kollanukkaran, Kiran Kumar Kokkilagadda, Stephen Hemminger,
 Thomas Monjalon
Date: Wed, 30 Oct 2019 03:55:49 +0000
References: <20190719133845.32432-1-olivier.matz@6wind.com>
 <20191028140122.9592-1-olivier.matz@6wind.com>
 <20191028140122.9592-6-olivier.matz@6wind.com>
Subject: Re: [dpdk-dev] [EXT] [PATCH 5/5] mempool: prevent objects from being across pages

Hi Olivier,

> -----Original Message-----
> From: Vamsi Krishna Attunuru
> Sent: Tuesday, October 29, 2019 10:55 PM
> To: Olivier Matz; dev@dpdk.org
> Cc: Anatoly Burakov; Andrew Rybchenko; Ferruh Yigit; Giridharan, Ganesan;
> Jerin Jacob Kollanukkaran; Kiran Kumar Kokkilagadda; Stephen Hemminger;
> Thomas Monjalon
> Subject: RE: [EXT] [PATCH 5/5] mempool: prevent objects from being across
> pages
>
> Hi Olivier,
>
> > -----Original Message-----
> > From: Olivier Matz
> > Sent: Monday, October 28, 2019 7:31 PM
> > To: dev@dpdk.org
> > Cc: Anatoly Burakov; Andrew Rybchenko; Ferruh Yigit; Giridharan,
> > Ganesan; Jerin Jacob Kollanukkaran; Kiran Kumar Kokkilagadda;
> > Stephen Hemminger; Thomas Monjalon; Vamsi Krishna Attunuru
> > Subject: [EXT] [PATCH 5/5] mempool: prevent objects from being across
> > pages
> >
> > External Email
> >
> > ----------------------------------------------------------------------
> > When populating a mempool, ensure that objects are not located across
> > several pages, except if user did not request iova contiguous objects.
> >
> > Signed-off-by: Vamsi Krishna Attunuru
> > Signed-off-by: Olivier Matz
> > ---
> >  lib/librte_mempool/rte_mempool.c             | 23 +++++-----------
> >  lib/librte_mempool/rte_mempool_ops_default.c | 29 ++++++++++++++++++--
> >  2 files changed, 33 insertions(+), 19 deletions(-)
> >
> > diff --git a/lib/librte_mempool/rte_mempool.c
> > b/lib/librte_mempool/rte_mempool.c
> > index 7664764e5..b23fd1b06 100644
> > --- a/lib/librte_mempool/rte_mempool.c
> > +++ b/lib/librte_mempool/rte_mempool.c
> > @@ -428,8 +428,6 @@ rte_mempool_get_page_size(struct rte_mempool *mp, size_t *pg_sz)
> >
> >  	if (!need_iova_contig_obj)
> >  		*pg_sz = 0;
> > -	else if (!alloc_in_ext_mem && rte_eal_iova_mode() == RTE_IOVA_VA)
> > -		*pg_sz = 0;
> >  	else if (rte_eal_has_hugepages() || alloc_in_ext_mem)
> >  		*pg_sz = get_min_page_size(mp->socket_id);
> >  	else
> > @@ -478,17 +476,15 @@ rte_mempool_populate_default(struct rte_mempool *mp)
> >  	 * then just set page shift and page size to 0, because the user has
> >  	 * indicated that there's no need to care about anything.
> >  	 *
> > -	 * if we do need contiguous objects, there is also an option to reserve
> > -	 * the entire mempool memory as one contiguous block of memory, in
> > -	 * which case the page shift and alignment wouldn't matter as well.
> > +	 * if we do need contiguous objects (if a mempool driver has its
> > +	 * own calc_size() method returning min_chunk_size = mem_size),
> > +	 * there is also an option to reserve the entire mempool memory
> > +	 * as one contiguous block of memory.
> >  	 *
> >  	 * if we require contiguous objects, but not necessarily the entire
> > -	 * mempool reserved space to be contiguous, then there are two options.
> > -	 *
> > -	 * if our IO addresses are virtual, not actual physical (IOVA as VA
> > -	 * case), then no page shift needed - our memory allocation will give us
> > -	 * contiguous IO memory as far as the hardware is concerned, so
> > -	 * act as if we're getting contiguous memory.
> > +	 * mempool reserved space to be contiguous, pg_sz will be != 0,
> > +	 * and the default ops->populate() will take care of not placing
> > +	 * objects across pages.
> >  	 *
> >  	 * if our IO addresses are physical, we may get memory from bigger
> >  	 * pages, or we might get memory from smaller pages, and how much of it
> > @@ -501,11 +497,6 @@ rte_mempool_populate_default(struct rte_mempool *mp)
> >  	 *
> >  	 * If we fail to get enough contiguous memory, then we'll go and
> >  	 * reserve space in smaller chunks.
> > -	 *
> > -	 * We also have to take into account the fact that memory that we're
> > -	 * going to allocate from can belong to an externally allocated memory
> > -	 * area, in which case the assumption of IOVA as VA mode being
> > -	 * synonymous with IOVA contiguousness will not hold.
> >  	 */
> >
> >  	need_iova_contig_obj = !(mp->flags & MEMPOOL_F_NO_IOVA_CONTIG);
> > diff --git a/lib/librte_mempool/rte_mempool_ops_default.c
> > b/lib/librte_mempool/rte_mempool_ops_default.c
> > index f6aea7662..dd09a0a32 100644
> > --- a/lib/librte_mempool/rte_mempool_ops_default.c
> > +++ b/lib/librte_mempool/rte_mempool_ops_default.c
> > @@ -61,21 +61,44 @@ rte_mempool_op_calc_mem_size_default(const struct rte_mempool *mp,
> >  	return mem_size;
> >  }
> >
> > +/* Returns -1 if object crosses a page boundary, else returns 0 */
> > +static int check_obj_bounds(char *obj, size_t pg_sz, size_t elt_sz)
> > +{
> > +	if (pg_sz == 0)
> > +		return 0;
> > +	if (elt_sz > pg_sz)
> > +		return 0;
> > +	if (RTE_PTR_ALIGN(obj, pg_sz) != RTE_PTR_ALIGN(obj + elt_sz - 1, pg_sz))
> > +		return -1;
> > +	return 0;
> > +}
> > +
> >  int
> >  rte_mempool_op_populate_default(struct rte_mempool *mp, unsigned int max_objs,
> >  		void *vaddr, rte_iova_t iova, size_t len,
> >  		rte_mempool_populate_obj_cb_t *obj_cb, void *obj_cb_arg)
> >  {
> > -	size_t total_elt_sz;
> > +	char *va = vaddr;
> > +	size_t total_elt_sz, pg_sz;
> >  	size_t off;
> >  	unsigned int i;
> >  	void *obj;
> >
> > +	rte_mempool_get_page_size(mp, &pg_sz);
> > +
> >  	total_elt_sz = mp->header_size + mp->elt_size + mp->trailer_size;
> >
> > -	for (off = 0, i = 0; off + total_elt_sz <= len && i < max_objs; i++) {
> > +	for (off = 0, i = 0; i < max_objs; i++) {
> > +		/* align offset to next page start if required */
> > +		if (check_obj_bounds(va + off, pg_sz, total_elt_sz) < 0)
> > +			off += RTE_PTR_ALIGN_CEIL(va + off, pg_sz) - (va + off);
>
> Moving the offset to the start of the next page and then freeing
> (vaddr + off + header_size) to the pool does not satisfy the octeontx2
> mempool's buffer alignment requirement (the buffer address needs to be
> a multiple of the buffer size).

Earlier there was a flag to align the object to the total object size; it
was later deprecated (see the link below) after the calc min chunk size
callback was added. In this new patch (5/5), while moving to the next page,
the object address is aligned to pg_sz, but the octeontx2 mempool HW
requires that object addresses freed to it be aligned to the total object
size. We need this alignment requirement fulfilled to make it work with
the HW mempool.
https://git.dpdk.org/dpdk/commit/lib/librte_mempool/rte_mempool.c?id=ce1f2c61ed135e4133d0429e86e554bfd4d58cb0

>
> > +
> > +		if (off + total_elt_sz > len)
> > +			break;
> > +
> >  		off += mp->header_size;
> > -		obj = (char *)vaddr + off;
> > +		obj = va + off;
> >  		obj_cb(mp, obj_cb_arg, obj,
> >  			(iova == RTE_BAD_IOVA) ? RTE_BAD_IOVA : (iova + off));
> >  		rte_mempool_ops_enqueue_bulk(mp, &obj, 1);
> > --
> > 2.20.1
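
For readers following the patch above, here is a minimal standalone sketch
of the page-boundary behaviour that check_obj_bounds() gives the default
populate loop. It is plain C with no DPDK dependency: RTE_PTR_ALIGN() is
replaced by equivalent power-of-two mask arithmetic, the header/trailer
handling is folded into total_elt_sz, and the chunk address and sizes are
made-up example values, not anything taken from the patch.

#include <stdio.h>
#include <stdint.h>
#include <stddef.h>

/* Same test as the patch's check_obj_bounds(): an object crosses a page
 * boundary when its first and last bytes fall in different pages. */
static int crosses_page(uintptr_t obj, size_t pg_sz, size_t elt_sz)
{
	uintptr_t mask;

	if (pg_sz == 0)     /* no page-contiguity constraint requested */
		return 0;
	if (elt_sz > pg_sz) /* larger than a page: crossing is unavoidable */
		return 0;
	mask = ~((uintptr_t)pg_sz - 1); /* pg_sz is a power of two */
	return (obj & mask) != ((obj + elt_sz - 1) & mask);
}

int main(void)
{
	const size_t pg_sz = 4096, total_elt_sz = 1000;
	uintptr_t va = 0x100000; /* hypothetical page-aligned chunk start */
	size_t off = 0;
	int i;

	/* Simplified populate loop: objects 0-3 fit in the first page;
	 * object 4 (off 4000) would straddle the 4096 boundary, so the
	 * offset jumps to 4096 and bytes 4000-4095 are left as padding. */
	for (i = 0; i < 6; i++) {
		if (crosses_page(va + off, pg_sz, total_elt_sz))
			off += pg_sz - ((va + off) & ((uintptr_t)pg_sz - 1));
		printf("obj %d placed at offset %zu\n", i, off);
		off += total_elt_sz;
	}
	return 0;
}

This prints offsets 0, 1000, 2000, 3000, 4096 and 5096, which is the
placement the patched rte_mempool_op_populate_default() would produce for
these example sizes.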
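To make the objection concrete, here is a hypothetical sketch of the
alignment rule described above, assuming, as stated in this thread, that
the octeontx2 HW mempool only accepts freed object addresses that are
multiples of the total object size. The helper name realign_to_obj_size()
is invented for illustration; this is not the fix that was eventually
merged.

#include <stdio.h>
#include <stdint.h>
#include <stddef.h>

/* After a jump to a page start, round the offset up again so that
 * (va + off) is once more a multiple of total_elt_sz. */
static size_t realign_to_obj_size(uintptr_t va, size_t off, size_t total_elt_sz)
{
	uintptr_t rem = (va + off) % total_elt_sz;

	return (rem == 0) ? off : off + (total_elt_sz - rem);
}

int main(void)
{
	/* made-up chunk start, chosen to be a multiple of both
	 * pg_sz (4096) and total_elt_sz (1000) */
	uintptr_t va = 512000;

	/* The patch above leaves off at the raw page start (4096), which
	 * is not a multiple of 1000; this rounds it up to 5000. */
	printf("off = %zu\n", realign_to_obj_size(va, 4096, 1000));
	return 0;
}

Note the tension this exposes: rounding up past the page start can itself
push the object across the following page boundary once total_elt_sz
exceeds half a page, so the two constraints are not reconciled by this
loop tweak alone, which is the crux of the concern raised above.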