From: Olivier Matz
To: Andrew Rybchenko
Cc: dev@dpdk.org, Anatoly Burakov
Date: Mon, 16 Apr 2018 17:33:41 +0200
Subject: Re: [dpdk-dev] [PATCH v4 04/11] mempool: add op to calculate memory size to be allocated
Message-ID: <20180416153341.pko6s2xxzb6nv6m6@platinum>
In-Reply-To: <1523885080-17168-5-git-send-email-arybchenko@solarflare.com>
References: <1511539591-20966-1-git-send-email-arybchenko@solarflare.com> <1523885080-17168-1-git-send-email-arybchenko@solarflare.com> <1523885080-17168-5-git-send-email-arybchenko@solarflare.com>

On Mon, Apr 16, 2018 at 02:24:33PM +0100, Andrew Rybchenko wrote:
> Size of memory chunk required to populate mempool objects depends
> on how objects are stored in the memory. Different mempool drivers
> may have different requirements and a new operation allows to
> calculate memory size in accordance with driver requirements and
> advertise requirements on minimum memory chunk size and alignment
> in a generic way.
>
> Bump ABI version since the patch breaks it.
>
> Suggested-by: Olivier Matz
> Signed-off-by: Andrew Rybchenko

[...]

> @@ -643,39 +633,35 @@ rte_mempool_populate_default(struct rte_mempool *mp)
>  	 * 1G page on a 10MB memzone). If we fail to get enough contiguous
>  	 * memory, then we'll go and reserve space page-by-page.
>  	 */
> -	no_pageshift = no_contig || force_contig ||
> -		rte_eal_iova_mode() == RTE_IOVA_VA;
> +	no_pageshift = no_contig || rte_eal_iova_mode() == RTE_IOVA_VA;
>  	try_contig = !no_contig && !no_pageshift && rte_eal_has_hugepages();

In case there is a v5 for another reason, I think the last line is
equivalent to:

	try_contig = !no_pageshift && rte_eal_has_hugepages();

Otherwise:

Acked-by: Olivier Matz
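
[Editor's note: the simplification above holds because no_pageshift is now
computed as no_contig || rte_eal_iova_mode() == RTE_IOVA_VA, so !no_pageshift
already implies !no_contig, making the extra !no_contig term redundant. The
following standalone sketch is not part of the patch; iova_va and hugepages
are stand-ins for rte_eal_iova_mode() == RTE_IOVA_VA and
rte_eal_has_hugepages(). It simply exhausts the truth table to check the
equivalence:]

	#include <assert.h>
	#include <stdbool.h>
	#include <stdio.h>

	int main(void)
	{
		int no_contig, iova_va, hugepages;

		/* Exhaust all combinations of the three boolean inputs. */
		for (no_contig = 0; no_contig <= 1; no_contig++) {
			for (iova_va = 0; iova_va <= 1; iova_va++) {
				for (hugepages = 0; hugepages <= 1; hugepages++) {
					bool no_pageshift = no_contig || iova_va;

					/* expression currently in the patch */
					bool cur = !no_contig && !no_pageshift &&
						   hugepages;
					/* simplified form suggested above */
					bool simpl = !no_pageshift && hugepages;

					assert(cur == simpl);
				}
			}
		}
		printf("expressions agree for all inputs\n");
		return 0;
	}

[Both expressions agree for all eight input combinations, so dropping the
!no_contig term does not change behavior.]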