Date: Mon, 28 Oct 2019 15:07:11 +0100
From: Olivier Matz
To: "Burakov, Anatoly"
Cc: Vamsi Krishna Attunuru, dev@dpdk.org, Andrew Rybchenko, Thomas Monjalon, Jerin Jacob Kollanukkaran, Kokkilagadda, Ferruh Yigit
Message-ID: <20191028140711.ppvai3oeribgh434@platinum>
References: <20190719133845.32432-1-olivier.matz@6wind.com> <20190719133845.32432-5-olivier.matz@6wind.com>
Subject: Re: [dpdk-dev] [RFC 4/4] mempool: prevent objects from being across pages
List-Id: DPDK patches and discussions

Hi Anatoly,

On Fri, Jul 19, 2019 at 03:03:29PM +0100, Burakov, Anatoly wrote:
> On 19-Jul-19 2:38 PM, Olivier Matz wrote:
> > When using iova contiguous memory and objects smaller than page size,
> > ensure that objects are not located across several pages.
> >
> > Signed-off-by: Vamsi Krishna Attunuru
> > Signed-off-by: Olivier Matz
> > ---
> >  lib/librte_mempool/rte_mempool_ops_default.c | 39 ++++++++++++++++++++++++++--
> >  1 file changed, 37 insertions(+), 2 deletions(-)
> >
> > diff --git a/lib/librte_mempool/rte_mempool_ops_default.c b/lib/librte_mempool/rte_mempool_ops_default.c
> > index 4e2bfc82d..2bbd67367 100644
> > --- a/lib/librte_mempool/rte_mempool_ops_default.c
> > +++ b/lib/librte_mempool/rte_mempool_ops_default.c
> > @@ -45,19 +45,54 @@ rte_mempool_op_calc_mem_size_default(const struct rte_mempool *mp,
> >  	return mem_size;
> >  }
> > +/* Returns -1 if object falls on a page boundary, else returns 0 */
> > +static inline int
> > +mempool_check_obj_bounds(void *obj, uint64_t pg_sz, size_t elt_sz)
> > +{
> > +	uintptr_t page_end, elt_addr = (uintptr_t)obj;
> > +	uint32_t pg_shift;
> > +	uint64_t page_mask;
> > +
> > +	if (pg_sz == 0)
> > +		return 0;
> > +	if (elt_sz > pg_sz)
> > +		return 0;
> > +
> > +	pg_shift = rte_bsf32(pg_sz);
> > +	page_mask = ~((1ull << pg_shift) - 1);
> > +	page_end = (elt_addr & page_mask) + pg_sz;
>
> This looks like RTE_PTR_ALIGN should do this without the magic? E.g.
>
> page_end = RTE_PTR_ALIGN(elt_addr, pg_sz)
>
> would that not be equivalent?

Yes, I simplified this part in the new version, thanks.

Olivier
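For reference, the two ways of computing the page end discussed above can be compared outside of DPDK. The sketch below uses local stand-ins for the DPDK helpers (these are assumptions for illustration, not the real rte_bsf32()/RTE_PTR_ALIGN implementations) and shows why the shift/mask arithmetic in the patch and an align-to-next-boundary computation yield the same crossing check for a power-of-two page size:

```c
#include <stddef.h>
#include <stdint.h>

/* Stand-in for rte_bsf32(): index of the lowest set bit (GCC/Clang builtin). */
static inline uint32_t bsf32(uint32_t v) { return (uint32_t)__builtin_ctz(v); }

/* Stand-in for an align-up macro: round addr up to the next multiple of the
 * power-of-two 'align'. */
#define ALIGN_CEIL(addr, align) \
	(((uintptr_t)(addr) + (uintptr_t)(align) - 1) & ~((uintptr_t)(align) - 1))

/* Mask-based check, following the logic of the RFC patch:
 * returns -1 if the object crosses a page boundary, else 0. */
static int
obj_crosses_page_mask(uintptr_t elt_addr, uint64_t pg_sz, size_t elt_sz)
{
	if (pg_sz == 0 || elt_sz > pg_sz)
		return 0;

	uint32_t pg_shift = bsf32((uint32_t)pg_sz);
	uint64_t page_mask = ~((1ull << pg_shift) - 1);
	uintptr_t page_end = (elt_addr & page_mask) + pg_sz;

	return (elt_addr + elt_sz > page_end) ? -1 : 0;
}

/* Alignment-based check, in the spirit of the RTE_PTR_ALIGN suggestion:
 * the end of the current page is (elt_addr + 1) rounded up to pg_sz. */
static int
obj_crosses_page_align(uintptr_t elt_addr, uint64_t pg_sz, size_t elt_sz)
{
	if (pg_sz == 0 || elt_sz > pg_sz)
		return 0;

	uintptr_t page_end = ALIGN_CEIL(elt_addr + 1, (uintptr_t)pg_sz);

	return (elt_addr + elt_sz > page_end) ? -1 : 0;
}
```

Both variants compute the same page_end: masking rounds the address down to the page start and adds pg_sz, while aligning (addr + 1) up lands on that same next boundary, so the crossing test is identical.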