Date: Mon, 28 Oct 2019 15:07:05 +0100
From: Olivier Matz
To: Andrew Rybchenko
Cc: Vamsi Krishna Attunuru, dev@dpdk.org, Thomas Monjalon,
 Anatoly Burakov, Jerin Jacob Kollanukkaran, Kokkilagadda,
 Ferruh Yigit
Message-ID: <20191028140705.d7t6b7s4k5av5lpr@platinum>
References: <20190719133845.32432-1-olivier.matz@6wind.com>
 <20190719133845.32432-5-olivier.matz@6wind.com>
 <002c206e-b963-a932-1f57-6e7edb17c74b@solarflare.com>
In-Reply-To: <002c206e-b963-a932-1f57-6e7edb17c74b@solarflare.com>
Subject: Re: [dpdk-dev] [RFC 4/4] mempool: prevent objects from being across pages

On Wed, Aug 07, 2019 at 06:21:58PM +0300, Andrew Rybchenko wrote:
> On 7/19/19 4:38 PM, Olivier Matz wrote:
> > When using IOVA-contiguous memory and objects smaller than the page
> > size, ensure that objects are not located across several pages.
>
> It looks like an attempt to make an exception a generic rule, and I
> think it is not a good idea.
>
> A mempool has a notion of IOVA-contiguous/non-contiguous objects,
> depending on whether PA or VA is used. rte_mempool_op_populate_default()
> gets a memory chunk which is contiguous in VA and, if the IOVA is valid
> (not RTE_BAD_IOVA), IOVA-contiguous. The patch always enforces page
> boundaries even when it is not required. For example, if the memory
> chunk is IOVA_PA contiguous, the patch could result in holes and extra
> memory usage.

Yes, it may increase memory usage, but the amount should be limited. On
the other hand, the new patchset provides enhancements that will reduce
memory consumption. More importantly, it will fix the KNI + IOVA=VA
issue.

I also wonder whether this problem could happen in the IOVA=PA case too.
Are there any guarantees that, on all architectures, a PA-contiguous
region is always VA-contiguous in the kernel?
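To give an order of magnitude, here is a quick standalone sketch of the
worst-case per-page waste introduced by the page-boundary alignment
(the numbers are illustrative, not from the patch: it assumes 2 MiB
hugepages and an mbuf-like element of 2176 bytes, i.e. header + data
room + trailer):

#include <inttypes.h>
#include <stdio.h>

int main(void)
{
	uint64_t pg_sz = 2 * 1024 * 1024; /* 2 MiB hugepage */
	uint64_t total_elt_sz = 2176;     /* illustrative mbuf-like element */
	/* when objects must not cross page boundaries, at most
	 * pg_sz % total_elt_sz bytes are lost at the end of each page */
	uint64_t objs = pg_sz / total_elt_sz;
	uint64_t waste = pg_sz % total_elt_sz;

	printf("%" PRIu64 " objs/page, %" PRIu64 " bytes lost (%.3f%%)\n",
	       objs, waste, 100.0 * waste / (double)pg_sz);
	return 0;
}

This prints "963 objs/page, 1664 bytes lost (0.079%)", so with hugepages
the overhead stays well below 0.1%. With 4 KiB pages the loss can
approach half a page for ~2 KiB elements, which is why the trade-off
mostly matters when hugepages are not available.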
>
> > Signed-off-by: Vamsi Krishna Attunuru
> > Signed-off-by: Olivier Matz
> > ---
> >  lib/librte_mempool/rte_mempool_ops_default.c | 39 ++++++++++++++++++++++++++--
> >  1 file changed, 37 insertions(+), 2 deletions(-)
> >
> > diff --git a/lib/librte_mempool/rte_mempool_ops_default.c b/lib/librte_mempool/rte_mempool_ops_default.c
> > index 4e2bfc82d..2bbd67367 100644
> > --- a/lib/librte_mempool/rte_mempool_ops_default.c
> > +++ b/lib/librte_mempool/rte_mempool_ops_default.c
> > @@ -45,19 +45,54 @@ rte_mempool_op_calc_mem_size_default(const struct rte_mempool *mp,
> >  	return mem_size;
> >  }
> >
> > +/* Returns -1 if object falls on a page boundary, else returns 0 */
> > +static inline int
> > +mempool_check_obj_bounds(void *obj, uint64_t pg_sz, size_t elt_sz)
> > +{
> > +	uintptr_t page_end, elt_addr = (uintptr_t)obj;
> > +	uint32_t pg_shift;
> > +	uint64_t page_mask;
> > +
> > +	if (pg_sz == 0)
> > +		return 0;
> > +	if (elt_sz > pg_sz)
> > +		return 0;
> > +
> > +	pg_shift = rte_bsf32(pg_sz);
> > +	page_mask = ~((1ull << pg_shift) - 1);
> > +	page_end = (elt_addr & page_mask) + pg_sz;
> > +
> > +	if (elt_addr + elt_sz > page_end)
> > +		return -1;
> > +
> > +	return 0;
> > +}
> > +
> >  int
> >  rte_mempool_op_populate_default(struct rte_mempool *mp, unsigned int max_objs,
> >  		void *vaddr, rte_iova_t iova, size_t len,
> >  		rte_mempool_populate_obj_cb_t *obj_cb, void *obj_cb_arg)
> >  {
> > -	size_t total_elt_sz;
> > +	size_t total_elt_sz, pg_sz;
> >  	size_t off;
> >  	unsigned int i;
> >  	void *obj;
> >
> > +	rte_mempool_get_page_size(mp, &pg_sz);
> > +
> >  	total_elt_sz = mp->header_size + mp->elt_size + mp->trailer_size;
> >
> > -	for (off = 0, i = 0; off + total_elt_sz <= len && i < max_objs; i++) {
> > +	for (off = 0, i = 0; i < max_objs; i++) {
> > +		/* align offset to next page start if required */
> > +		if (mempool_check_obj_bounds((char *)vaddr + off,
> > +				pg_sz, total_elt_sz) < 0) {
> > +			off += RTE_PTR_ALIGN_CEIL((char *)vaddr + off, pg_sz) -
> > +				((char *)vaddr + off);
> > +		}
> > +
> > +		if (off + total_elt_sz > len)
> > +			break;
> > +
> >  		off += mp->header_size;
> >  		obj = (char *)vaddr + off;
> >  		obj_cb(mp, obj_cb_arg, obj,
>
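For anyone who wants to play with the check outside of DPDK, here is a
minimal standalone sketch of the same logic (hypothetical code: the
mempool types are replaced by plain integers, and ~(pg_sz - 1) stands
in for the rte_bsf32()-based page mask, assuming pg_sz is a power of
two):

#include <stdio.h>
#include <stdint.h>

/* Return -1 if an object of elt_sz bytes starting at address obj would
 * cross a pg_sz-aligned page boundary, else 0. */
static int
check_obj_bounds(uint64_t obj, uint64_t pg_sz, uint64_t elt_sz)
{
	uint64_t page_end;

	if (pg_sz == 0)     /* no page information: nothing to enforce */
		return 0;
	if (elt_sz > pg_sz) /* object larger than a page: allowed to cross */
		return 0;

	page_end = (obj & ~(pg_sz - 1)) + pg_sz;
	return (obj + elt_sz > page_end) ? -1 : 0;
}

int main(void)
{
	/* a 2500-byte object at offset 2000 ends at 4500, crossing the
	 * 4 KiB boundary, so populate would first skip to offset 4096 */
	printf("%d\n", check_obj_bounds(2000, 4096, 2500)); /* -1 */
	printf("%d\n", check_obj_bounds(4096, 4096, 2500)); /*  0 */
	return 0;
}

In the patch itself, a -1 return is what triggers the
RTE_PTR_ALIGN_CEIL() jump to the start of the next page.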