From mboxrd@z Thu Jan  1 00:00:00 1970
Return-Path: <dev-bounces@dpdk.org>
Received: from dpdk.org (dpdk.org [92.243.14.124])
	by inbox.dpdk.org (Postfix) with ESMTP id 56CBBA00BE;
	Mon, 28 Oct 2019 15:06:45 +0100 (CET)
Received: from [92.243.14.124] (localhost [127.0.0.1])
	by dpdk.org (Postfix) with ESMTP id E9C701BEF4;
	Mon, 28 Oct 2019 15:06:44 +0100 (CET)
Received: from mail-wm1-f67.google.com (mail-wm1-f67.google.com
 [209.85.128.67]) by dpdk.org (Postfix) with ESMTP id 422A81BEE0
 for <dev@dpdk.org>; Mon, 28 Oct 2019 15:06:43 +0100 (CET)
Received: by mail-wm1-f67.google.com with SMTP id q130so9284867wme.2
 for <dev@dpdk.org>; Mon, 28 Oct 2019 07:06:43 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=6wind.com; s=google;
 h=date:from:to:cc:subject:message-id:references:mime-version
 :content-disposition:in-reply-to:user-agent;
 bh=DHjZtBEaQk26/RjdIbd9fbM/WX2VgUG9N/ChwkEJh6o=;
 b=A+SNTMdA4OP8H8A5LQGAcXCNlLj4atSFIc1hCizzD4bV+RdyxC6nlIumgdXIj/lXe2
 X+nHMhun6YFwgRbwU0PUTR+j05yGXSeX5axj9VnYyqz3vM7N7FPx/AyLzGiy1BBAzKb2
 HtyMLfI4H7k1Xou3U0lEXCZHQWvpdT/CW8LvkGX8GytVV/Ydi/fcfExD8mJaD7uLc1Jo
 YaVyQgs3J4pRDZJDuzZB+oz+1oYAQi6o/61h0yTjtN1tV0B5YmsQIRCXzjJIfjkmhA9s
 FekDKD9G4Aywqv4NrNwRPHSJfjYYYqZXOpwz8nsoYDflnftoJi9KBwWMiQrPDjaPfuwC
 rgfw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:date:from:to:cc:subject:message-id:references
 :mime-version:content-disposition:in-reply-to:user-agent;
 bh=DHjZtBEaQk26/RjdIbd9fbM/WX2VgUG9N/ChwkEJh6o=;
 b=iLWmGAAHoiwS/xlnPVLFBtY+tiRmAGhETq/ygo1+nKLBo5k4H+0TlNkEPrSubFSvSF
 Z/41eJDK0/I4ntbgNeseYxxRG/25sbaPna++llM97fRL2A0p6xmFDTuhcjCHx2zUEh1T
 pDRDTOa9AD/rNlqOi7aKAytW5y8j2Lz2xlOywqk72fC5gojD8EZCd2Bg4a3lr16Shsfj
 b08rNJfjXC3pKrL3zW3urb4wBSBBYSR7aFQ0YN52MZNNRN14gqFxuld6WslUPAwGQIkB
 YeElB7ab3TQWIBHELVoOidDwE/RGYjtXtSnOxjeLzKoUk1WZ5OM3SFCO0bLvQ50QELyC
 nm4g==
X-Gm-Message-State: APjAAAVDQ620zgKGG0C2ek/QNm1lowmBMZlWjrTrhhYKhvH4KLNEJ9YC
 n5togNLnZx5QG2sYm2uBhglwAg==
X-Google-Smtp-Source: APXvYqzSZdqotqu4Jcjvf7aE5s6eDNnfhDdaMWBEMPRUJ3vQdCQpBOszEU0KCLuuYQg2V0OhhEQS4w==
X-Received: by 2002:a05:600c:22cf:: with SMTP id
 15mr136794wmg.148.1572271602922; 
 Mon, 28 Oct 2019 07:06:42 -0700 (PDT)
Received: from 6wind.com (2a01cb0c0005a6000226b0fffeed02fc.ipv6.abo.wanadoo.fr.
 [2a01:cb0c:5:a600:226:b0ff:feed:2fc])
 by smtp.gmail.com with ESMTPSA id x127sm7481723wmx.18.2019.10.28.07.06.42
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 28 Oct 2019 07:06:42 -0700 (PDT)
Date: Mon, 28 Oct 2019 15:06:41 +0100
From: Olivier Matz <olivier.matz@6wind.com>
To: Andrew Rybchenko <arybchenko@solarflare.com>
Cc: Vamsi Krishna Attunuru <vattunuru@marvell.com>, dev@dpdk.org,
 Thomas Monjalon <thomas@monjalon.net>,
 Anatoly Burakov <anatoly.burakov@intel.com>,
 Jerin Jacob Kollanukkaran <jerinj@marvell.com>,
 Kokkilagadda <kirankumark@marvell.com>,
 Ferruh Yigit <ferruh.yigit@intel.com>
Message-ID: <20191028140641.zmpurzs4pksdyib6@platinum>
References: <CH2PR18MB338160CD8EF16EEB45EED387A6C80@CH2PR18MB3381.namprd18.prod.outlook.com>
 <20190719133845.32432-1-olivier.matz@6wind.com>
 <c081187a-b1dc-6d1b-bd26-4c304bcf6308@solarflare.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <c081187a-b1dc-6d1b-bd26-4c304bcf6308@solarflare.com>
User-Agent: NeoMutt/20180716
Subject: Re: [dpdk-dev] [RFC 0/4] mempool: avoid objects
 allocations across pages
X-BeenThere: dev@dpdk.org
X-Mailman-Version: 2.1.15
Precedence: list
List-Id: DPDK patches and discussions <dev.dpdk.org>
List-Unsubscribe: <https://mails.dpdk.org/options/dev>,
 <mailto:dev-request@dpdk.org?subject=unsubscribe>
List-Archive: <http://mails.dpdk.org/archives/dev/>
List-Post: <mailto:dev@dpdk.org>
List-Help: <mailto:dev-request@dpdk.org?subject=help>
List-Subscribe: <https://mails.dpdk.org/listinfo/dev>,
 <mailto:dev-request@dpdk.org?subject=subscribe>
Errors-To: dev-bounces@dpdk.org
Sender: "dev" <dev-bounces@dpdk.org>

Hi Andrew,

Better late than never: here are my answers to your comments.
I'm sending a new version of the patchset that addresses
them.

On Wed, Aug 07, 2019 at 06:21:01PM +0300, Andrew Rybchenko wrote:
> On 7/19/19 4:38 PM, Olivier Matz wrote:
> > When IOVA mode is VA, a mempool can be created with objects that
> > are not physically contiguous, which breaks KNI.
> > 
> > To solve this, this patchset changes the default behavior of mempool
> > populate function, to prevent objects from being located across pages.
> 
> I'll provide top-level review notes on the individual patches, but what
> I don't understand in general is why we add a rule to respect
> page boundaries in all cases, even when it is not absolutely required.
> It may add holes. Can it have a negative impact on performance?

In terms of memory consumption, the amount of wasted space is not
significant as long as hugepages are used: for instance, with 2 MB pages
and ~2 KB objects, less than one object's worth of space (about 0.1% of
the page) is lost per page. In terms of performance, I don't foresee any
change.
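
To make that concrete, the core of the new behavior is a boundary check
of roughly this shape (a minimal sketch for illustration only, not the
exact patch code; it assumes pg_sz is a power of two and objects no
larger than a page):

#include <stdint.h>
#include <stddef.h>

/* If an object of obj_size placed at vaddr + off would cross a page
 * boundary, advance off to the start of the next page; otherwise keep
 * it as is. Assumes pg_sz is a power of two and obj_size <= pg_sz. */
static size_t
align_obj_to_page(char *vaddr, size_t off, size_t obj_size, size_t pg_sz)
{
	uintptr_t start = (uintptr_t)vaddr + off;
	uintptr_t end = start + obj_size - 1;

	if ((start & ~(pg_sz - 1)) != (end & ~(pg_sz - 1)))
		off += pg_sz - (start & (pg_sz - 1));
	return off;
}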

> I think that KNI VA-mode requirements are very specific.
> It is VA-mode, but page boundaries should be respected even
> if VA is contiguous.

Yes, but on the other hand, changing the behavior does not hurt the
other use-cases, in my opinion. I mean, having the ability to allocate an
IOVA-contiguous area at once does not bring much added value,
especially given that there is no guarantee that it will succeed, and we
can fall back to a chunk-based allocation.
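
For reference, the pattern I mean looks roughly like this (a simplified
sketch of the optimistic path with fallback; the function name and
parameters are illustrative, not the actual rte_mempool code):

#include <rte_memzone.h>

/* Sketch: try a single IOVA-contiguous reservation first, then fall
 * back to a plain (possibly non-contiguous) one, to be populated
 * chunk by chunk. */
static const struct rte_memzone *
reserve_with_fallback(const char *name, size_t mem_size, int socket_id,
		      unsigned int align)
{
	const struct rte_memzone *mz;

	mz = rte_memzone_reserve_aligned(name, mem_size, socket_id,
					 RTE_MEMZONE_IOVA_CONTIG, align);
	if (mz == NULL)
		mz = rte_memzone_reserve_aligned(name, mem_size, socket_id,
						 0, align);
	return mz;
}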


Olivier


> 
> > Olivier Matz (4):
> >    mempool: clarify default populate function
> >    mempool: unalign size when calculating required mem amount
> >    mempool: introduce function to get mempool page size
> >    mempool: prevent objects from being across pages
> > 
> >   lib/librte_mempool/rte_mempool.c             | 106 +++++++++++----------------
> >   lib/librte_mempool/rte_mempool.h             |   8 +-
> >   lib/librte_mempool/rte_mempool_ops.c         |   4 +-
> >   lib/librte_mempool/rte_mempool_ops_default.c |  39 +++++++++-
> >   4 files changed, 90 insertions(+), 67 deletions(-)
> > 
> > ---
> > 
> > Hi,
> > 
> > > @Olivier,
> > > Any suggestions..?
> > I took some time to go a bit deeper. I still think we can change the
> > default behavior to avoid objects being located across pages. But
> > it is more complex than I expected.
> > 
> > I made a draft patchset, that, in short:
> > - cleans/renames variables
> > - removes the optimistic full iova contiguous allocation
> > - changes return value of calc_mem_size to return the unaligned size,
> >    therefore the allocation is smaller in case of big hugepages
> > - changes rte_mempool_op_populate_default() to prevent allocation
> >    of objects across multiple pages
> > 
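
To illustrate the size-calculation change in the list above (a rough
sketch only, not the actual rte_mempool code; it assumes
total_elt_sz <= pg_sz and n_objs > 0):

#include <stddef.h>

/* With page-boundary-aware placement, each page holds
 * pg_sz / total_elt_sz objects. Returning the exact byte count for
 * the last, partial page (instead of rounding up to a page multiple)
 * is what makes the allocation smaller with big hugepages. */
static size_t
mem_size_unaligned(size_t n_objs, size_t total_elt_sz, size_t pg_sz)
{
	size_t objs_per_page = pg_sz / total_elt_sz;
	size_t full_pages = n_objs / objs_per_page;
	size_t remainder = n_objs % objs_per_page;

	if (remainder == 0)
		return (full_pages - 1) * pg_sz
			+ objs_per_page * total_elt_sz;
	return full_pages * pg_sz + remainder * total_elt_sz;
}
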
> > Andrew, Anatoly, did I miss something?
> > Vamsi, can you check if it solves your issue?
> > 
> > Anyway, even if we validate the patchset and make it work, I'm afraid
> > this is not something that could go into 19.08.
> > 
> > The only alternative I see is a specific mempool allocation function
> > for the case where iova=va mode is used together with KNI, as you
> > proposed previously.
> > 
> > It can probably be implemented without adding a flag, starting from
> > rte_mempool_create(), and replacing rte_mempool_populate_default(mp) by
> > something else: allocate pages one by one, and call
> > rte_mempool_populate_iova() for each of them.
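
To spell out that last idea (a rough, untested sketch; the naming
scheme, pg_sz, socket_id and the error handling are illustrative only):

#include <errno.h>
#include <stdio.h>

#include <rte_memzone.h>
#include <rte_mempool.h>

/* Sketch: populate mp one page at a time, so that every chunk is
 * IOVA-contiguous by construction. */
static int
populate_page_by_page(struct rte_mempool *mp, size_t pg_sz, int socket_id)
{
	char mz_name[RTE_MEMZONE_NAMESIZE];
	const struct rte_memzone *mz;
	unsigned int n = 0;
	int ret;

	while (mp->populated_size < mp->size) {
		snprintf(mz_name, sizeof(mz_name), "%s_%u", mp->name, n++);
		mz = rte_memzone_reserve_aligned(mz_name, pg_sz, socket_id,
						 RTE_MEMZONE_IOVA_CONTIG,
						 pg_sz);
		if (mz == NULL)
			return -ENOMEM;
		ret = rte_mempool_populate_iova(mp, mz->addr, mz->iova,
						mz->len, NULL, NULL);
		if (ret < 0)
			return ret;
	}
	return 0;
}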
> > 
> > Hope it helps. Unfortunately, I may not have too much time to spend on
> > it in the coming days.
> > 
> > Regards,
> > Olivier
>