From mboxrd@z Thu Jan 1 00:00:00 1970
From: Vamsi Krishna Attunuru
To: Olivier Matz, dev@dpdk.org
Cc: Anatoly Burakov, Andrew Rybchenko, Ferruh Yigit, "Giridharan, Ganesan",
 Jerin Jacob Kollanukkaran, Kiran Kumar Kokkilagadda, Stephen Hemminger,
 Thomas Monjalon
Date: Tue, 29 Oct 2019 17:25:16 +0000
Subject: Re: [dpdk-dev] [EXT] [PATCH 5/5] mempool: prevent objects from
 being across pages
In-Reply-To: <20191028140122.9592-6-olivier.matz@6wind.com>
References: <20190719133845.32432-1-olivier.matz@6wind.com>
 <20191028140122.9592-1-olivier.matz@6wind.com>
 <20191028140122.9592-6-olivier.matz@6wind.com>
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
List-Id: DPDK patches and discussions
Sender: "dev" <dev-bounces@dpdk.org>

Hi Olivier,

> -----Original Message-----
> From: Olivier Matz
> Sent: Monday, October 28, 2019 7:31 PM
> To: dev@dpdk.org
> Cc: Anatoly Burakov; Andrew Rybchenko; Ferruh Yigit; Giridharan, Ganesan;
> Jerin Jacob Kollanukkaran; Kiran Kumar Kokkilagadda; Stephen Hemminger;
> Thomas Monjalon; Vamsi Krishna Attunuru
> Subject: [EXT] [PATCH 5/5] mempool: prevent objects from being
> across pages
>
> External Email
>
> ----------------------------------------------------------------------
> When populating a mempool, ensure that objects are not located across
> several pages, except if user did not request iova contiguous objects.
>
> Signed-off-by: Vamsi Krishna Attunuru
> Signed-off-by: Olivier Matz
> ---
>  lib/librte_mempool/rte_mempool.c             | 23 +++++-----------
>  lib/librte_mempool/rte_mempool_ops_default.c | 29 ++++++++++++++++++--
>  2 files changed, 33 insertions(+), 19 deletions(-)
>
> diff --git a/lib/librte_mempool/rte_mempool.c
> b/lib/librte_mempool/rte_mempool.c
> index 7664764e5..b23fd1b06 100644
> --- a/lib/librte_mempool/rte_mempool.c
> +++ b/lib/librte_mempool/rte_mempool.c
> @@ -428,8 +428,6 @@ rte_mempool_get_page_size(struct rte_mempool *mp, size_t *pg_sz)
>
>  	if (!need_iova_contig_obj)
>  		*pg_sz = 0;
> -	else if (!alloc_in_ext_mem && rte_eal_iova_mode() == RTE_IOVA_VA)
> -		*pg_sz = 0;
>  	else if (rte_eal_has_hugepages() || alloc_in_ext_mem)
>  		*pg_sz = get_min_page_size(mp->socket_id);
>  	else
> @@ -478,17 +476,15 @@ rte_mempool_populate_default(struct rte_mempool *mp)
>  	 * then just set page shift and page size to 0, because the user has
>  	 * indicated that there's no need to care about anything.
>  	 *
> -	 * if we do need contiguous objects, there is also an option to reserve
> -	 * the entire mempool memory as one contiguous block of memory, in
> -	 * which case the page shift and alignment wouldn't matter as well.
> +	 * if we do need contiguous objects (if a mempool driver has its
> +	 * own calc_size() method returning min_chunk_size = mem_size),
> +	 * there is also an option to reserve the entire mempool memory
> +	 * as one contiguous block of memory.
>  	 *
>  	 * if we require contiguous objects, but not necessarily the entire
> -	 * mempool reserved space to be contiguous, then there are two
> -	 * options.
> -	 *
> -	 * if our IO addresses are virtual, not actual physical (IOVA as VA
> -	 * case), then no page shift needed - our memory allocation will give us
> -	 * contiguous IO memory as far as the hardware is concerned, so
> -	 * act as if we're getting contiguous memory.
> +	 * mempool reserved space to be contiguous, pg_sz will be != 0,
> +	 * and the default ops->populate() will take care of not placing
> +	 * objects across pages.
>  	 *
>  	 * if our IO addresses are physical, we may get memory from bigger
>  	 * pages, or we might get memory from smaller pages, and how much of it
> @@ -501,11 +497,6 @@ rte_mempool_populate_default(struct rte_mempool *mp)
>  	 *
>  	 * If we fail to get enough contiguous memory, then we'll go and
>  	 * reserve space in smaller chunks.
> -	 *
> -	 * We also have to take into account the fact that memory that we're
> -	 * going to allocate from can belong to an externally allocated memory
> -	 * area, in which case the assumption of IOVA as VA mode being
> -	 * synonymous with IOVA contiguousness will not hold.
>  	 */
>
>  	need_iova_contig_obj = !(mp->flags & MEMPOOL_F_NO_IOVA_CONTIG);
> diff --git a/lib/librte_mempool/rte_mempool_ops_default.c
> b/lib/librte_mempool/rte_mempool_ops_default.c
> index f6aea7662..dd09a0a32 100644
> --- a/lib/librte_mempool/rte_mempool_ops_default.c
> +++ b/lib/librte_mempool/rte_mempool_ops_default.c
> @@ -61,21 +61,44 @@ rte_mempool_op_calc_mem_size_default(const struct rte_mempool *mp,
>  	return mem_size;
>  }
>
> +/* Returns -1 if object crosses a page boundary, else returns 0 */
> +static int
> +check_obj_bounds(char *obj, size_t pg_sz, size_t elt_sz)
> +{
> +	if (pg_sz == 0)
> +		return 0;
> +	if (elt_sz > pg_sz)
> +		return 0;
> +	if (RTE_PTR_ALIGN(obj, pg_sz) != RTE_PTR_ALIGN(obj + elt_sz - 1, pg_sz))
> +		return -1;
> +	return 0;
> +}
> +
>  int
>  rte_mempool_op_populate_default(struct rte_mempool *mp, unsigned int max_objs,
>  		void *vaddr, rte_iova_t iova, size_t len,
>  		rte_mempool_populate_obj_cb_t *obj_cb, void *obj_cb_arg)
>  {
> -	size_t total_elt_sz;
> +	char *va = vaddr;
> +	size_t total_elt_sz, pg_sz;
>  	size_t off;
>  	unsigned int i;
>  	void *obj;
>
> +	rte_mempool_get_page_size(mp, &pg_sz);
> +
>  	total_elt_sz = mp->header_size + mp->elt_size + mp->trailer_size;
>
> -	for (off = 0, i = 0; off + total_elt_sz <= len && i < max_objs; i++) {
> +	for (off = 0, i = 0; i < max_objs; i++) {
> +		/* align offset to next page start if required */
> +		if (check_obj_bounds(va + off, pg_sz, total_elt_sz) < 0)
> +			off += RTE_PTR_ALIGN_CEIL(va + off, pg_sz) - (va + off);

Moving the offset to the start of the next page and then freeing (vaddr + off + header_size) to the pool does not satisfy the octeontx2 mempool's buffer alignment requirement: the buffer address needs to be a multiple of the buffer size.

> +
> +		if (off + total_elt_sz > len)
> +			break;
> +
>  		off += mp->header_size;
> -		obj = (char *)vaddr + off;
> +		obj = va + off;
>  		obj_cb(mp, obj_cb_arg, obj,
>  			(iova == RTE_BAD_IOVA) ? RTE_BAD_IOVA : (iova + off));
>  		rte_mempool_ops_enqueue_bulk(mp, &obj, 1);
> --
> 2.20.1