From: "Li, WenjieX A"
To: "Burakov, Anatoly", dev@dpdk.org
Cc: Olivier Matz, Andrew Rybchenko, david.marchand@redhat.com, stable@dpdk.org, "Chen, BoX C"
Date: Tue, 19 Nov 2019 01:45:29 +0000
Message-ID: <8688172CD5C0B74590FAE19D9579F94B537E95EC@SHSMSX103.ccr.corp.intel.com>
In-Reply-To: <825d02ef7f7b6ab65a36d9fa4719847228537384.1573739893.git.anatoly.burakov@intel.com>
Subject: Re: [dpdk-stable] [PATCH 1/2] mempool: use actual IOVA addresses when populating

Tested-by: Chen, BoX C

> -----Original Message-----
> From: stable [mailto:stable-bounces@dpdk.org] On Behalf Of Anatoly Burakov
> Sent: Thursday, November 14, 2019 9:58 PM
> To: dev@dpdk.org
> Cc: Olivier Matz; Andrew Rybchenko; david.marchand@redhat.com; stable@dpdk.org
> Subject: [dpdk-stable] [PATCH 1/2] mempool: use actual IOVA addresses when
> populating
>
> Currently, when mempool is being populated, we get IOVA address of every
> segment using rte_mem_virt2iova(). This works for internal memory, but does
> not really work for external memory, and does not work on platforms which
> return RTE_BAD_IOVA as a result of this call (such as FreeBSD). Moreover,
> even when it works, the function in question will do unnecessary pagewalks
> in IOVA as PA mode, as it falls back to rte_mem_virt2phy() instead of just
> doing a lookup in internal memseg table.
>
> To fix it, replace the call to first attempt to look through the internal
> memseg table (this takes care of internal and external memory), and fall
> back to rte_mem_virt2iova() when unable to perform VA->IOVA translation via
> memseg table.
>
> Fixes: 66cc45e293ed ("mem: replace memseg with memseg lists")
> Cc: stable@dpdk.org
>
> Signed-off-by: Anatoly Burakov
> ---
>  lib/librte_mempool/rte_mempool.c | 17 +++++++++++++++--
>  1 file changed, 15 insertions(+), 2 deletions(-)
>
> diff --git a/lib/librte_mempool/rte_mempool.c
> b/lib/librte_mempool/rte_mempool.c
> index 40cae3eb67..8da2e471c7 100644
> --- a/lib/librte_mempool/rte_mempool.c
> +++ b/lib/librte_mempool/rte_mempool.c
> @@ -356,6 +356,19 @@ rte_mempool_populate_iova(struct rte_mempool *mp, char *vaddr,
>  	return ret;
>  }
>
> +static rte_iova_t
> +get_iova(void *addr)
> +{
> +	struct rte_memseg *ms;
> +
> +	/* try registered memory first */
> +	ms = rte_mem_virt2memseg(addr, NULL);
> +	if (ms == NULL || ms->iova == RTE_BAD_IOVA)
> +		/* fall back to actual physical address */
> +		return rte_mem_virt2iova(addr);
> +	return ms->iova + RTE_PTR_DIFF(addr, ms->addr);
> +}
> +
>  /* Populate the mempool with a virtual area. Return the number of
>   * objects added, or a negative value on error.
>   */
> @@ -375,7 +388,7 @@ rte_mempool_populate_virt(struct rte_mempool *mp, char *addr,
>  	for (off = 0; off < len &&
>  			mp->populated_size < mp->size; off += phys_len) {
>
> -		iova = rte_mem_virt2iova(addr + off);
> +		iova = get_iova(addr + off);
>
>  		if (iova == RTE_BAD_IOVA && rte_eal_has_hugepages()) {
>  			ret = -EINVAL;
> @@ -391,7 +404,7 @@ rte_mempool_populate_virt(struct rte_mempool *mp, char *addr,
>  		     phys_len = RTE_MIN(phys_len + pg_sz, len - off)) {
>  			rte_iova_t iova_tmp;
>
> -			iova_tmp = rte_mem_virt2iova(addr + off + phys_len);
> +			iova_tmp = get_iova(addr + off + phys_len);
>
>  			if (iova_tmp == RTE_BAD_IOVA ||
>  					iova_tmp != iova + phys_len)
> --
> 2.17.1
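
For reference, below is a small standalone sketch (not part of the patch) that exercises the same translation logic outside the mempool code: translate() simply mirrors the new get_iova() helper, trying the memseg table first and falling back to rte_mem_virt2iova() only when the lookup fails. The program structure, the translate() name and the 4 KB rte_malloc() buffer are my own illustrative assumptions, and rte_mem_virt2memseg() may still need experimental APIs enabled depending on the DPDK release you build against.

/* Illustrative check only -- not part of the patch under review. */
#include <inttypes.h>
#include <stdio.h>

#include <rte_common.h>
#include <rte_eal.h>
#include <rte_malloc.h>
#include <rte_memory.h>

/* Mirrors the patch's get_iova(): memseg-table lookup first,
 * rte_mem_virt2iova() pagewalk only as a fallback. */
static rte_iova_t
translate(void *addr)
{
	struct rte_memseg *ms;

	ms = rte_mem_virt2memseg(addr, NULL);
	if (ms == NULL || ms->iova == RTE_BAD_IOVA)
		return rte_mem_virt2iova(addr);
	return ms->iova + RTE_PTR_DIFF(addr, ms->addr);
}

int
main(int argc, char **argv)
{
	void *buf;

	if (rte_eal_init(argc, argv) < 0)
		return -1;

	/* buffer from the DPDK heap, registered in the memseg table */
	buf = rte_malloc(NULL, 4096, 0);
	if (buf == NULL)
		return -1;

	printf("memseg lookup IOVA:  0x%" PRIx64 "\n",
		(uint64_t)translate(buf));
	printf("rte_mem_virt2iova(): 0x%" PRIx64 "\n",
		(uint64_t)rte_mem_virt2iova(buf));

	rte_free(buf);
	return 0;
}

In IOVA-as-PA mode the two printed values should match; the difference is only in how the translation is obtained (memseg table lookup versus a pagemap walk), which is exactly the overhead the patch avoids.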