Date: Tue, 31 Jan 2017 11:31:51 +0100
From: Olivier Matz
Subject: Re: [dpdk-dev] [PATCH v2] mempool: Introduce _populate_mz_range api
Message-ID: <20170131113151.3f8e07a0@platinum>
In-Reply-To: <1484925221-18431-1-git-send-email-santosh.shukla@caviumnetworks.com>
References: <1484922017-26030-1-git-send-email-santosh.shukla@caviumnetworks.com>
 <1484925221-18431-1-git-send-email-santosh.shukla@caviumnetworks.com>
List-Id: DPDK patches and discussions

Hi Santosh,

I guess this patch is targeted for 17.05, right?
Please see some other comments below.

On Fri, 20 Jan 2017 20:43:41 +0530, Santosh Shukla wrote:
> From: Santosh Shukla
>
> HW pool manager e.g. Cavium SoC need s/w to program start and
> end address of pool. Currently there is no such api in ext-mempool.

Today, the mempool objects are not necessarily contiguous in virtual
or physical memory. The only assumption that can be made is that each
object is contiguous (virtually and physically). If the flag
MEMPOOL_F_NO_PHYS_CONTIG is passed, each object is only guaranteed to
be contiguous virtually.
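For instance, here is a quick sketch (relying on the existing
rte_mempool_mem_iter() / struct rte_mempool_memhdr API; the function
names below are only illustrative) that walks every memory chunk
backing a mempool. A HW pool manager would have to be told about each
of these chunks, not about a single start/end pair:

#include <inttypes.h>
#include <stdio.h>
#include <rte_mempool.h>

/* callback invoked once per memory chunk backing the mempool */
static void
dump_memchunk(struct rte_mempool *mp, void *opaque,
	      struct rte_mempool_memhdr *memhdr, unsigned mem_idx)
{
	(void)mp;
	(void)opaque;
	printf("chunk %u: vaddr=%p paddr=0x%" PRIx64 " len=%zu\n",
	       mem_idx, memhdr->addr,
	       (uint64_t)memhdr->phys_addr, memhdr->len);
}

/* dump all (possibly discontiguous) chunks of an existing mempool */
static void
dump_mempool_chunks(struct rte_mempool *mp)
{
	rte_mempool_mem_iter(mp, dump_memchunk, NULL);
}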
> So introducing _populate_mz_range API which will let HW(pool manager)
> know about hugepage mapped virtual start and end address.

rte_mempool_ops_populate_mz_range() looks a bit long. What about
rte_mempool_ops_populate() instead?


> diff --git a/lib/librte_mempool/rte_mempool.c b/lib/librte_mempool/rte_mempool.c
> index 1c2aed8..9a39f5c 100644
> --- a/lib/librte_mempool/rte_mempool.c
> +++ b/lib/librte_mempool/rte_mempool.c
> @@ -568,6 +568,10 @@ static unsigned optimize_object_size(unsigned obj_size)
>  	else
>  		paddr = mz->phys_addr;
>
> +	/* Populate mz range */
> +	if ((mp->flags & MEMPOOL_F_POOL_CREATED) == 0)
> +		rte_mempool_ops_populate_mz_range(mp, mz);
> +
>  	if (rte_eal_has_hugepages()

Given what I've said above, I think the populate() callback should be
in rte_mempool_populate_phys() instead of rte_mempool_populate_default().
It would be called for each contiguous zone.


> --- a/lib/librte_mempool/rte_mempool.h
> +++ b/lib/librte_mempool/rte_mempool.h
> @@ -387,6 +387,12 @@ typedef int (*rte_mempool_dequeue_t)(struct rte_mempool *mp,
>   */
>  typedef unsigned (*rte_mempool_get_count)(const struct rte_mempool *mp);
>
> +/**
> + * Set the memzone va/pa addr range and len in the external pool.
> + */
> +typedef void (*rte_mempool_populate_mz_range_t)(struct rte_mempool *mp,
> +		const struct rte_memzone *mz);
> +

And this API would be:

typedef void (*rte_mempool_populate_t)(struct rte_mempool *mp,
	char *vaddr, phys_addr_t paddr, size_t len);

If your hw absolutely needs contiguous memory, a solution could be:

- add a new flag MEMPOOL_F_CONTIG (maybe a better name could be found),
  saying that all the mempool objects must be contiguous
- add the ops_populate() callback in rte_mempool_populate_phys(), as
  suggested above

Then:

/* create an empty mempool */
rte_mempool_create_empty(...);

/* set the handler:
 * in the ext handler, the mempool flags are updated with
 * MEMPOOL_F_CONTIG
 */
rte_mempool_set_ops_byname(..., "my_hardware");

/* if MEMPOOL_F_CONTIG is set, all populate() functions should ensure
 * that there is only one contiguous zone
 */
rte_mempool_populate_default(...);


Regards,
Olivier
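For what it's worth, here is a minimal handler-side sketch of the
scheme above. The my_hw_* names are hypothetical stand-ins for the
SoC-specific programming, and the sketch assumes the proposed
populate() callback and MEMPOOL_F_CONTIG flag have been added to
rte_mempool.h (so it does not build against the current tree):

#include <rte_mempool.h>

/* hypothetical SoC helper: program the pool address range into HW */
extern void my_hw_set_pool_range(void *hw_pool, phys_addr_t start,
				 phys_addr_t end);

static int
my_hardware_alloc(struct rte_mempool *mp)
{
	/* request a single contiguous zone (proposed MEMPOOL_F_CONTIG) */
	mp->flags |= MEMPOOL_F_CONTIG;
	mp->pool_data = NULL;	/* would point to a HW pool context */
	return 0;
}

/* proposed ops populate() callback: called once per contiguous zone;
 * with MEMPOOL_F_CONTIG set it should be called exactly once */
static void
my_hardware_populate(struct rte_mempool *mp, char *vaddr,
		     phys_addr_t paddr, size_t len)
{
	(void)vaddr;
	my_hw_set_pool_range(mp->pool_data, paddr, paddr + len);
}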