Date: Tue, 19 Dec 2017 12:02:06 +0100
From: Olivier MATZ
To: Hemant Agrawal
Cc: dev@dpdk.org
Message-ID: <20171219110204.uqxw4xy66o65pnjz@glumotte.dev.6wind.com>
References: <1512563473-19969-1-git-send-email-hemant.agrawal@nxp.com>
 <20171219102456.ghipiyb2ig43d4nk@glumotte.dev.6wind.com>
 <830d3e36-921b-ef7a-51a4-6d135b15e973@nxp.com>
In-Reply-To: <830d3e36-921b-ef7a-51a4-6d135b15e973@nxp.com>
Subject: Re: [dpdk-dev] [PATCH 1/2] mempool: indicate the usages of multi memzones

On Tue, Dec 19, 2017 at 04:16:33PM +0530, Hemant Agrawal wrote:
> Hi Olivier,
>
> On 12/19/2017 3:54 PM, Olivier MATZ wrote:
> > Hi Hemant,
> >
> > On Wed, Dec 06, 2017 at 06:01:12PM +0530, Hemant Agrawal wrote:
> > > This is required for optimizations w.r.t. hw mempools.
> > > They will use different kinds of optimizations if the buffers
> > > are from a single contiguous memzone.
> > >
> > > Signed-off-by: Hemant Agrawal
> > > ---
> > >  lib/librte_mempool/rte_mempool.c | 7 +++++--
> > >  lib/librte_mempool/rte_mempool.h | 5 +++++
> > >  2 files changed, 10 insertions(+), 2 deletions(-)
> > >
> > > diff --git a/lib/librte_mempool/rte_mempool.c b/lib/librte_mempool/rte_mempool.c
> > > index d50dba4..9d3737c 100644
> > > --- a/lib/librte_mempool/rte_mempool.c
> > > +++ b/lib/librte_mempool/rte_mempool.c
> > > @@ -387,13 +387,16 @@ rte_mempool_populate_iova(struct rte_mempool *mp, char *vaddr,
> > >  	total_elt_sz = mp->header_size + mp->elt_size + mp->trailer_size;
> > >
> > >  	/* Detect pool area has sufficient space for elements */
> > > -	if (mp->flags & MEMPOOL_F_CAPA_PHYS_CONTIG) {
> > > -		if (len < total_elt_sz * mp->size) {
> > > +	if (len < total_elt_sz * mp->size) {
> > > +		if (mp->flags & MEMPOOL_F_CAPA_PHYS_CONTIG) {
> > >  			RTE_LOG(ERR, MEMPOOL,
> > >  				"pool area %" PRIx64 " not enough\n",
> > >  				(uint64_t)len);
> > >  			return -ENOSPC;
> > >  		}
> > > +	} else {
> > > +		/* Memory will be allocated from multiple memzones */
> > > +		mp->flags |= MEMPOOL_F_MULTI_MEMZONE;
> > >  	}
> > >
> > >  	memhdr = rte_zmalloc("MEMPOOL_MEMHDR", sizeof(*memhdr), 0);
> > > diff --git a/lib/librte_mempool/rte_mempool.h b/lib/librte_mempool/rte_mempool.h
> > > index 721227f..394a4fe 100644
> > > --- a/lib/librte_mempool/rte_mempool.h
> > > +++ b/lib/librte_mempool/rte_mempool.h
> > > @@ -292,6 +292,11 @@ struct rte_mempool {
> > >   */
> > >  #define MEMPOOL_F_CAPA_BLK_ALIGNED_OBJECTS 0x0080
> > >
> > > +/* Indicates that the mempool buffers are allocated from multiple memzones;
> > > + * the buffers may or may not be physically contiguous.
> > > + */
> > > +#define MEMPOOL_F_MULTI_MEMZONE 0x0100
> > > +
> > >  /**
> > >   * @internal When debug is enabled, store some statistics.
> > >   *
> > > --
> > > 2.7.4
> > >
> >
> > I'm not comfortable with adding more and more flags, as I explained
> > here: http://dpdk.org/ml/archives/dev/2017-December/083909.html
>
> This particular flag is not about how to populate the mempool. It just
> indicates how the mempool was populated - a status flag. This
> information is just helpful for the PMDs.
>
> At least I am not able to see that this particular flag is very driver
> specific.

That's true, I commented too fast :)

And what about using mp->nb_mem_chunks instead? Would it do the job in
your use-case?

> > It makes the generic code very complex, and probably buggy (many
> > flags are incompatible with other flags).
> >
> > I'm thinking about moving the populate_* functions in the drivers
> > (this is described a bit more in the link above). What do you think
> > about this approach?
>
> The idea is good and it will give fine control to the individual
> mempools to populate the memory the way they want. However, on the
> downside, it will also lead to a lot of duplicated or similar code. It
> may also lead to a maintenance issue for the mempool PMD owner.

Yes, that will be the drawback. If we do this, we should try to keep
some common helpers in the mempool lib.
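As a rough sketch of the mp->nb_mem_chunks alternative suggested above,
assuming a hardware mempool PMD only performs the check after the pool
has been fully populated by rte_mempool_populate_*(); the helper name
is illustrative only, not an existing DPDK API:

	#include <rte_mempool.h>

	/*
	 * nb_mem_chunks counts the memory chunks registered by the
	 * rte_mempool_populate_*() calls, so a fully populated pool whose
	 * objects all come from one contiguous memzone reports exactly
	 * one chunk.
	 */
	static inline int
	hw_pool_is_single_chunk(const struct rte_mempool *mp)
	{
		return mp->nb_mem_chunks == 1;
	}

The MEMPOOL_F_MULTI_MEMZONE status flag from the patch is the other way
to expose the same information to the PMD; which of the two is
preferable is exactly what this thread is debating.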