From: Olivier MATZ
To: Hemant Agrawal
Cc: Thomas Monjalon
Subject: Re: [dpdk-dev] [PATCH] mempool: introduce flag to indicate hw mempool
Date: Tue, 4 Apr 2017 11:13:37 +0200
Message-ID: <20170404111337.7ecf8502@neon>

Hi Hemant,

On Tue, 4 Apr 2017 12:59:08 +0530
Hemant Agrawal wrote:

> Hi Thomas/Olivier,
>
> On 4/4/2017 12:28 PM, Thomas Monjalon wrote:
> > 2017-04-04 11:05, Hemant Agrawal:
> >> Hi Olivier,
> >>
> >> On 4/3/2017 8:49 PM, Olivier Matz wrote:
> >>> Hi Hemant,
> >>>
> >>> On Mon, 3 Apr 2017 14:42:09 +0530, Hemant Agrawal wrote:
> >>>> Hardware needs to distinguish between buffers allocated from
> >>>> software backed pools and those from hardware backed pools.
> >>>>
> >>>> Some HW NICs may choose to autonomously free the packets during
> >>>> transmit if the packet is from a HW pool, while they should not
> >>>> do so for software backed pools.
> >>>>
> >>>> Such a flag would also help when multiple pools are being
> >>>> handled by a PMD, saving costly compare operations against any
> >>>> internal marker.
> >>>>
> >>>> Signed-off-by: Hemant Agrawal
> >>>> ---
> >>>>  lib/librte_mempool/rte_mempool.h | 5 +++++
> >>>>  1 file changed, 5 insertions(+)
> >>>>
> >>>> diff --git a/lib/librte_mempool/rte_mempool.h b/lib/librte_mempool/rte_mempool.h
> >>>> index 991feaa..91dbd21 100644
> >>>> --- a/lib/librte_mempool/rte_mempool.h
> >>>> +++ b/lib/librte_mempool/rte_mempool.h
> >>>> @@ -263,6 +263,11 @@ struct rte_mempool {
> >>>>  #define MEMPOOL_F_SC_GET         0x0008 /**< Default get is "single-consumer".*/
> >>>>  #define MEMPOOL_F_POOL_CREATED   0x0010 /**< Internal: pool is created. */
> >>>>  #define MEMPOOL_F_NO_PHYS_CONTIG 0x0020 /**< Don't need physically contiguous objs. */
> >>>> +#define MEMPOOL_F_HW_POOL (1 << ((sizeof(int) * 8) - 1)) /**< Internal:
> >>>> + * Hardware offloaded pool. This information may be used by the
> >>>> + * NIC or other hw. Some NICs autonomously free the HW backed pool packets. */
> >>>> +
> >>>> +/**< Don't need physically contiguous objs. */
> >>>>
> >>>>  /**
> >>>>   * @internal When debug is enabled, store some statistics.
> >>>
> >>>
> >>> One thing is still not clear to me: in your driver, you check
> >>> this flag:
> >>> - if it is unset, you reallocate a packet from your hw pool,
> >>>   copy some metadata, and send it to the hw.
> >>> - if it is set, you assume that you can call mempool_to_bpid(mp)
> >>>   and directly send it to the hw.
> >>>
> >>> I think this is not correct. The test you want to do in your
> >>> driver is "is it the pool that I registered for my hardware?",
> >>> not "is it a hardware managed pool?".
> >>> I think what you are doing here prevents the use of two hardware
> >>> mempools at the same time, because they would all have this flag,
> >>> and mempool_to_bpid() would probably crash.
> >>>
> >>
> >> No, I am only trying to differentiate between hw and software pool
> >> packets. I don't see a possibility of having two different
> >> orthogonal hw mempool types working in the system. At any point in
> >> time, when you are running DPDK on a particular type of hardware,
> >> you will only have *one* type of hardware backed pool in your
> >> implementation. The number of mempool instances may be many, but
> >> all will be able to work with mempool_to_bpid().
> >
> > No, you could have different HW mempools on one system.
> > Please imagine PCI NICs which provide a mempool.
> > (other argument: never say never ;)
>
> Thanks. Good advice :)
>
> >> The application may send packets allocated from a *ring* pool
> >> instead of using the "hw" pool.
> >>
> >> So, it is sufficient to just check whether the pool is offloaded
> >> or not. HW can take care of all the supported pools.
> >>
> >>> Instead, can't you just compare the mempool pointer to a value
> >>> stored internally in the driver?
> >>
> >> There can be more than one mempool instance; the driver is capable
> >> of supporting multiple hw offloaded mempools. Each dpaa2 PMD port
> >> may have a different mempool instance registered.
> >>
> >> So, pointer comparison is not practical unless I start storing the
> >> mempool driver pointer.
> >
> > Is it difficult to store this pointer?
>
> Yes! Something is workable here, though.
> The PMD stores the "rte_mempool_ops_table" ops_index for dpaa2 (the
> default buffer pool). The mbuf contains the pool pointer, which will
> also have the pool->ops_index, so it can be compared on a per-packet
> basis.
>
> Olivier, do you see any issue with the above approach?
>

Sorry I missed this mail. Yes, I think this approach should work.
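Just to be sure we are talking about the same test, here is a rough
sketch of what I understand (untested; the helper names are made up
for the example, and I assume the dpaa2 hw mempool ops are registered
under the name "dpaa2"):

#include <string.h>

#include <rte_mbuf.h>
#include <rte_mempool.h>

/* resolved once, e.g. at device configure time */
static int32_t hw_pool_ops_index = -1;

static void
resolve_hw_pool_ops_index(void)
{
	uint32_t i;

	/* walk the global ops table to find the index of the hw
	 * mempool ops registered by the dpaa2 driver */
	for (i = 0; i < rte_mempool_ops_table.num_ops; i++) {
		if (strcmp(rte_mempool_ops_table.ops[i].name, "dpaa2") == 0) {
			hw_pool_ops_index = (int32_t)i;
			break;
		}
	}
}

/* per-packet test in the Tx path: is this mbuf's pool managed by
 * the hw mempool ops? */
static inline int
mbuf_from_hw_pool(const struct rte_mbuf *m)
{
	return m->pool->ops_index == hw_pool_ops_index;
}

This keeps working with any number of dpaa2 mempool instances, since
they all share the same ops_index.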
Olivier
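PS: on the Tx side, the per-packet branch would then look roughly like
below. copy_to_hw_pool() is only a placeholder for the
reallocate-and-copy path you described for buffers coming from
software pools:

	uint16_t bpid;

	if (mbuf_from_hw_pool(m)) {
		/* hw backed pool: the hw may free the buffer after Tx */
		bpid = mempool_to_bpid(m->pool);
	} else {
		/* sw pool: move the data into a hw backed buffer first */
		m = copy_to_hw_pool(m);
		bpid = mempool_to_bpid(m->pool);
	}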