From mboxrd@z Thu Jan  1 00:00:00 1970
Return-Path: <dev-bounces@dpdk.org>
Received: from dpdk.org (dpdk.org [92.243.14.124])
	by inbox.dpdk.org (Postfix) with ESMTP id 464E3A0529;
	Thu,  9 Jul 2020 18:18:33 +0200 (CEST)
Received: from [92.243.14.124] (localhost [127.0.0.1])
	by dpdk.org (Postfix) with ESMTP id 95E251E900;
	Thu,  9 Jul 2020 18:18:32 +0200 (CEST)
Received: from mail-wm1-f68.google.com (mail-wm1-f68.google.com
 [209.85.128.68]) by dpdk.org (Postfix) with ESMTP id 0FEDC1E8EF
 for <dev@dpdk.org>; Thu,  9 Jul 2020 18:18:31 +0200 (CEST)
Received: by mail-wm1-f68.google.com with SMTP id f139so2447889wmf.5
 for <dev@dpdk.org>; Thu, 09 Jul 2020 09:18:31 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=6wind.com; s=google;
 h=date:from:to:cc:subject:message-id:references:mime-version
 :content-disposition:in-reply-to:user-agent;
 bh=B5BjrXdZoIjXfRAALAhf3VJAbzj74lSSY6x8+Mh9yQc=;
 b=IpU/hyBZf8U4RSf94ifE3EpXgqkNYAAoWN7eVJGaB9U0jzCw5ei6QkE+TbgmGVPzSm
 HW0NE+aN8fRiTkjaBo7393FenSWn8+Tk7L6ezrrtwNEbQvPKmXTsGNl5XhJA0/+DorZv
 nGLoZh8zRx1kflGKJwYIuDTJ02M4prDxTqq+rm9Xz5N+MJWD3eagJVCXhUqY4ODztTlB
 1fsWLHGRH/SbiQsHkod3eu3XXosKdC7FBxqEpaBYHqpqLoSjt9duw+HciGgqpBMRYbNJ
 wUKeQ7EZcWSo0Mv51s4qm9484/FAhQYG8ktZS3psgJQCjciP8xIP9gA1aUipD0PjJj4k
 61gA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:date:from:to:cc:subject:message-id:references
 :mime-version:content-disposition:in-reply-to:user-agent;
 bh=B5BjrXdZoIjXfRAALAhf3VJAbzj74lSSY6x8+Mh9yQc=;
 b=Fz4XoOIummwGpekqCtXh0xk997wAlH3gWrmCERB+SNOx+sP0RVNHNB7Ziw88JFEQR0
 bK2uonaR03gGIdIIH/SlJjLK/V0r1TWaKIGj/SUD8ewcJYpTpl0EAHmkfIb/8UJ5td9k
 Gq5O+dz1kqbfDpyhElRniKELG6byCwspeoW/ufCUD7ZIDnP6oR8YUZSA9AphT0dLkGec
 m1G0TxI6pzkubHVjDoZ3Bw5LUhYjQaPsU9+m+SzWpWUutI6UBbdlzoSlWvKssAcxGVF7
 BOXbhHCVkPa2kjbsISQDmxCT7JpRarQR2H+3RJuVzaTcDxPBGvoYCHWsC8qIjGnRVHBZ
 0/MA==
X-Gm-Message-State: AOAM53141nFIBO1pvh4Y3V+9ZAZI1eSzzfjRX2mEPKzlkv4onzLswLJo
 kCyDPxQ0/DribaxZqhDwybNrog==
X-Google-Smtp-Source: ABdhPJyEaaHoaAnBigTG5dN/B7qrxJqXNhAE+/7b1an6b/qhPKlbHOlYbtwhQjKEinMvRVy//ZTRgA==
X-Received: by 2002:a1c:7e44:: with SMTP id z65mr793185wmc.52.1594311510660;
 Thu, 09 Jul 2020 09:18:30 -0700 (PDT)
Received: from 6wind.com (2a01cb0c0005a600345636f7e65ed1a0.ipv6.abo.wanadoo.fr.
 [2a01:cb0c:5:a600:3456:36f7:e65e:d1a0])
 by smtp.gmail.com with ESMTPSA id v66sm5612459wme.13.2020.07.09.09.18.29
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Thu, 09 Jul 2020 09:18:29 -0700 (PDT)
Date: Thu, 9 Jul 2020 18:18:29 +0200
From: Olivier Matz <olivier.matz@6wind.com>
To: Konstantin Ananyev <konstantin.ananyev@intel.com>
Cc: dev@dpdk.org, arybchenko@solarflare.com, jielong.zjl@antfin.com,
 gage.eads@intel.com
Message-ID: <20200709161829.GV5869@platinum>
References: <20200521132027.28219-1-konstantin.ananyev@intel.com>
 <20200629161024.29059-1-konstantin.ananyev@intel.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20200629161024.29059-1-konstantin.ananyev@intel.com>
User-Agent: Mutt/1.10.1 (2018-07-13)
Subject: Re: [dpdk-dev] [PATCH v2] mempool/ring: add support for new ring
	sync modes
X-BeenThere: dev@dpdk.org
X-Mailman-Version: 2.1.15
Precedence: list
List-Id: DPDK patches and discussions <dev.dpdk.org>
List-Unsubscribe: <https://mails.dpdk.org/options/dev>,
 <mailto:dev-request@dpdk.org?subject=unsubscribe>
List-Archive: <http://mails.dpdk.org/archives/dev/>
List-Post: <mailto:dev@dpdk.org>
List-Help: <mailto:dev-request@dpdk.org?subject=help>
List-Subscribe: <https://mails.dpdk.org/listinfo/dev>,
 <mailto:dev-request@dpdk.org?subject=subscribe>
Errors-To: dev-bounces@dpdk.org
Sender: "dev" <dev-bounces@dpdk.org>

Hi Konstantin,

On Mon, Jun 29, 2020 at 05:10:24PM +0100, Konstantin Ananyev wrote:
> v2:
>  - update Release Notes (as per comments)
> 
> Two new sync modes were introduced into rte_ring:
> relaxed tail sync (RTS) and head/tail sync (HTS).
> This change provides user with ability to select these
> modes for ring based mempool via mempool ops API.
> 
> Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
> Acked-by: Gage Eads <gage.eads@intel.com>
> ---
>  doc/guides/rel_notes/release_20_08.rst  |  6 ++
>  drivers/mempool/ring/rte_mempool_ring.c | 97 ++++++++++++++++++++++---
>  2 files changed, 94 insertions(+), 9 deletions(-)
> 
> diff --git a/doc/guides/rel_notes/release_20_08.rst b/doc/guides/rel_notes/release_20_08.rst
> index eaaf11c37..7bdcf3aac 100644
> --- a/doc/guides/rel_notes/release_20_08.rst
> +++ b/doc/guides/rel_notes/release_20_08.rst
> @@ -84,6 +84,12 @@ New Features
>    * Dump ``rte_flow`` memory consumption.
>    * Measure packet per second forwarding.
>  
> +* **Added support for new sync modes into mempool ring driver.**
> +
> +  Added ability to select new ring synchronisation modes:
> +  ``relaxed tail sync (ring_mt_rts)`` and ``head/tail sync (ring_mt_hts)``
> +  via mempool ops API.
> +
>  
>  Removed Items
>  -------------
> diff --git a/drivers/mempool/ring/rte_mempool_ring.c b/drivers/mempool/ring/rte_mempool_ring.c
> index bc123fc52..15ec7dee7 100644
> --- a/drivers/mempool/ring/rte_mempool_ring.c
> +++ b/drivers/mempool/ring/rte_mempool_ring.c
> @@ -25,6 +25,22 @@ common_ring_sp_enqueue(struct rte_mempool *mp, void * const *obj_table,
>  			obj_table, n, NULL) == 0 ? -ENOBUFS : 0;
>  }
>  
> +static int
> +rts_ring_mp_enqueue(struct rte_mempool *mp, void * const *obj_table,
> +	unsigned int n)
> +{
> +	return rte_ring_mp_rts_enqueue_bulk(mp->pool_data,
> +			obj_table, n, NULL) == 0 ? -ENOBUFS : 0;
> +}
> +
> +static int
> +hts_ring_mp_enqueue(struct rte_mempool *mp, void * const *obj_table,
> +	unsigned int n)
> +{
> +	return rte_ring_mp_hts_enqueue_bulk(mp->pool_data,
> +			obj_table, n, NULL) == 0 ? -ENOBUFS : 0;
> +}
> +
>  static int
>  common_ring_mc_dequeue(struct rte_mempool *mp, void **obj_table, unsigned n)
>  {
> @@ -39,17 +55,30 @@ common_ring_sc_dequeue(struct rte_mempool *mp, void **obj_table, unsigned n)
>  			obj_table, n, NULL) == 0 ? -ENOBUFS : 0;
>  }
>  
> +static int
> +rts_ring_mc_dequeue(struct rte_mempool *mp, void **obj_table, unsigned int n)
> +{
> +	return rte_ring_mc_rts_dequeue_bulk(mp->pool_data,
> +			obj_table, n, NULL) == 0 ? -ENOBUFS : 0;
> +}
> +
> +static int
> +hts_ring_mc_dequeue(struct rte_mempool *mp, void **obj_table, unsigned int n)
> +{
> +	return rte_ring_mc_hts_dequeue_bulk(mp->pool_data,
> +			obj_table, n, NULL) == 0 ? -ENOBUFS : 0;
> +}
> +
>  static unsigned
>  common_ring_get_count(const struct rte_mempool *mp)
>  {
>  	return rte_ring_count(mp->pool_data);
>  }
>  
> -
>  static int
> -common_ring_alloc(struct rte_mempool *mp)
> +ring_alloc(struct rte_mempool *mp, uint32_t rg_flags)
>  {
> -	int rg_flags = 0, ret;
> +	int ret;
>  	char rg_name[RTE_RING_NAMESIZE];
>  	struct rte_ring *r;
>  
> @@ -60,12 +89,6 @@ common_ring_alloc(struct rte_mempool *mp)
>  		return -rte_errno;
>  	}
>  
> -	/* ring flags */
> -	if (mp->flags & MEMPOOL_F_SP_PUT)
> -		rg_flags |= RING_F_SP_ENQ;
> -	if (mp->flags & MEMPOOL_F_SC_GET)
> -		rg_flags |= RING_F_SC_DEQ;
> -
>  	/*
>  	 * Allocate the ring that will be used to store objects.
>  	 * Ring functions will return appropriate errors if we are
> @@ -82,6 +105,40 @@ common_ring_alloc(struct rte_mempool *mp)
>  	return 0;
>  }
>  
> +static int
> +common_ring_alloc(struct rte_mempool *mp)
> +{
> +	uint32_t rg_flags;
> +
> +	rg_flags = 0;

Maybe the initialization could go on the same line as the declaration,
i.e. uint32_t rg_flags = 0;

> +
> +	/* ring flags */

Not sure we need to keep this comment

> +	if (mp->flags & MEMPOOL_F_SP_PUT)
> +		rg_flags |= RING_F_SP_ENQ;
> +	if (mp->flags & MEMPOOL_F_SC_GET)
> +		rg_flags |= RING_F_SC_DEQ;
> +
> +	return ring_alloc(mp, rg_flags);
> +}
> +
> +static int
> +rts_ring_alloc(struct rte_mempool *mp)
> +{
> +	if ((mp->flags & (MEMPOOL_F_SP_PUT | MEMPOOL_F_SC_GET)) != 0)
> +		return -EINVAL;

Why do we need this? Is it a problem to allow sp/sc in this mode (even
if it's not optimal)?
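To make sure we are talking about the same thing: as I read it, the
check only refuses the mempool-level SP/SC hints. A tiny standalone
model of that guard (flag values copied from rte_mempool.h, but this is
a sketch, not the driver code itself):

```c
#include <stdint.h>

/* Stand-ins for the mempool flags; the real definitions live in
 * rte_mempool.h. */
#define MEMPOOL_F_SP_PUT 0x0004
#define MEMPOOL_F_SC_GET 0x0008

/* Model of the guard in rts_ring_alloc()/hts_ring_alloc():
 * 0 when neither SP/SC hint is set, -22 (-EINVAL) otherwise. */
static int
check_mt_only_flags(uint32_t mp_flags)
{
	if ((mp_flags & (MEMPOOL_F_SP_PUT | MEMPOOL_F_SC_GET)) != 0)
		return -22; /* -EINVAL */
	return 0;
}
```

So a mempool created with either hint is rejected outright, even though
the RTS/HTS enqueue/dequeue paths would still be functionally correct
for a single producer or consumer.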

> +
> +	return ring_alloc(mp, RING_F_MP_RTS_ENQ | RING_F_MC_RTS_DEQ);
> +}
> +
> +static int
> +hts_ring_alloc(struct rte_mempool *mp)
> +{
> +	if ((mp->flags & (MEMPOOL_F_SP_PUT | MEMPOOL_F_SC_GET)) != 0)
> +		return -EINVAL;
> +
> +	return ring_alloc(mp, RING_F_MP_HTS_ENQ | RING_F_MC_HTS_DEQ);
> +}
> +
>  static void
>  common_ring_free(struct rte_mempool *mp)
>  {
> @@ -130,7 +187,29 @@ static const struct rte_mempool_ops ops_sp_mc = {
>  	.get_count = common_ring_get_count,
>  };
>  
> +/* ops for mempool with ring in MT_RTS sync mode */
> +static const struct rte_mempool_ops ops_mt_rts = {
> +	.name = "ring_mt_rts",
> +	.alloc = rts_ring_alloc,
> +	.free = common_ring_free,
> +	.enqueue = rts_ring_mp_enqueue,
> +	.dequeue = rts_ring_mc_dequeue,
> +	.get_count = common_ring_get_count,
> +};
> +
> +/* ops for mempool with ring in MT_HTS sync mode */
> +static const struct rte_mempool_ops ops_mt_hts = {
> +	.name = "ring_mt_hts",
> +	.alloc = hts_ring_alloc,
> +	.free = common_ring_free,
> +	.enqueue = hts_ring_mp_enqueue,
> +	.dequeue = hts_ring_mc_dequeue,
> +	.get_count = common_ring_get_count,
> +};
> +
>  MEMPOOL_REGISTER_OPS(ops_mp_mc);
>  MEMPOOL_REGISTER_OPS(ops_sp_sc);
>  MEMPOOL_REGISTER_OPS(ops_mp_sc);
>  MEMPOOL_REGISTER_OPS(ops_sp_mc);
> +MEMPOOL_REGISTER_OPS(ops_mt_rts);
> +MEMPOOL_REGISTER_OPS(ops_mt_hts);

Not really related to your patch, but I think we need a function to
dump the names of the available mempool ops. We could even add a
description for each. The problem we have is that a user does not know
by which criteria they should choose one driver over another (except
for platform drivers).
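As a starting point, something along these lines could work. This is
only a standalone sketch of the registry-plus-dump idea: the struct
layout, names and description field here are invented for illustration,
not DPDK's actual rte_mempool_ops_table.

```c
#include <stdio.h>

#define MAX_OPS 16

/* Sketch of an ops registry entry carrying a human-readable
 * description alongside the name. */
struct ops_entry {
	const char *name;
	const char *desc;
};

static struct ops_entry ops_table[MAX_OPS];
static unsigned int num_ops;

static void
register_ops(const char *name, const char *desc)
{
	if (num_ops < MAX_OPS) {
		ops_table[num_ops].name = name;
		ops_table[num_ops].desc = desc;
		num_ops++;
	}
}

/* The kind of dump helper suggested above: list every registered
 * ops name with its description. */
static void
dump_ops(FILE *f)
{
	unsigned int i;

	for (i = 0; i < num_ops; i++)
		fprintf(f, "%s: %s\n", ops_table[i].name, ops_table[i].desc);
}
```

With a one-line description per driver, the dump output alone would
already give users a basis for choosing between e.g. ring_mp_mc and
ring_mt_rts.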


Olivier