Date: Mon, 13 Jul 2020 15:30:54 +0200
From: Olivier Matz
To: "Ananyev, Konstantin"
Cc: "dev@dpdk.org", "arybchenko@solarflare.com", "jielong.zjl@antfin.com",
 "Eads, Gage"
Message-ID: <20200713133054.GN5869@platinum>
References: <20200521132027.28219-1-konstantin.ananyev@intel.com>
 <20200629161024.29059-1-konstantin.ananyev@intel.com>
 <20200709161829.GV5869@platinum>
 <20200710125249.GZ5869@platinum>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
User-Agent: Mutt/1.10.1 (2018-07-13)
Subject: Re: [dpdk-dev] [PATCH v2] mempool/ring: add support for new ring sync modes
List-Id: DPDK patches and discussions

Hi Konstantin,

On Fri, Jul 10, 2020 at 03:20:12PM +0000, Ananyev, Konstantin wrote:
> 
> 
> > 
> > Hi Olivier,
> > 
> > > Hi Konstantin,
> > > 
> > > On Thu, Jul 09, 2020 at 05:55:30PM +0000, Ananyev, Konstantin wrote:
> > > > Hi Olivier,
> > > > 
> > > > > Hi Konstantin,
> > > > > 
> > > > > On Mon, Jun 29, 2020 at 05:10:24PM +0100, Konstantin Ananyev wrote:
> > > > > > v2:
> > > > > > - update Release Notes (as per comments)
> > > > > > 
> > > > > > Two new sync modes were introduced into rte_ring:
> > > > > > relaxed tail sync (RTS) and head/tail sync (HTS).
> > > > > > This change provides user with ability to select these
> > > > > > modes for ring based mempool via mempool ops API.
> > > > > > 
> > > > > > Signed-off-by: Konstantin Ananyev
> > > > > > Acked-by: Gage Eads
> > > > > > ---
> > > > > >  doc/guides/rel_notes/release_20_08.rst  |  6 ++
> > > > > >  drivers/mempool/ring/rte_mempool_ring.c | 97 ++++++++++++++++++++++---
> > > > > >  2 files changed, 94 insertions(+), 9 deletions(-)
> > > > > > 
> > > > > > diff --git a/doc/guides/rel_notes/release_20_08.rst b/doc/guides/rel_notes/release_20_08.rst
> > > > > > index eaaf11c37..7bdcf3aac 100644
> > > > > > --- a/doc/guides/rel_notes/release_20_08.rst
> > > > > > +++ b/doc/guides/rel_notes/release_20_08.rst
> > > > > > @@ -84,6 +84,12 @@ New Features
> > > > > >    * Dump ``rte_flow`` memory consumption.
> > > > > >    * Measure packet per second forwarding.
> > > > > > 
> > > > > > +* **Added support for new sync modes into mempool ring driver.**
> > > > > > +
> > > > > > +  Added ability to select new ring synchronisation modes:
> > > > > > +  ``relaxed tail sync (ring_mt_rts)`` and ``head/tail sync (ring_mt_hts)``
> > > > > > +  via mempool ops API.
> > > > > > +
> > > > > > 
> > > > > >  Removed Items
> > > > > >  -------------
> > > > > > diff --git a/drivers/mempool/ring/rte_mempool_ring.c b/drivers/mempool/ring/rte_mempool_ring.c
> > > > > > index bc123fc52..15ec7dee7 100644
> > > > > > --- a/drivers/mempool/ring/rte_mempool_ring.c
> > > > > > +++ b/drivers/mempool/ring/rte_mempool_ring.c
> > > > > > @@ -25,6 +25,22 @@ common_ring_sp_enqueue(struct rte_mempool *mp, void * const *obj_table,
> > > > > >  		obj_table, n, NULL) == 0 ? -ENOBUFS : 0;
> > > > > >  }
> > > > > > 
> > > > > > +static int
> > > > > > +rts_ring_mp_enqueue(struct rte_mempool *mp, void * const *obj_table,
> > > > > > +	unsigned int n)
> > > > > > +{
> > > > > > +	return rte_ring_mp_rts_enqueue_bulk(mp->pool_data,
> > > > > > +		obj_table, n, NULL) == 0 ? -ENOBUFS : 0;
> > > > > > +}
> > > > > > +
> > > > > > +static int
> > > > > > +hts_ring_mp_enqueue(struct rte_mempool *mp, void * const *obj_table,
> > > > > > +	unsigned int n)
> > > > > > +{
> > > > > > +	return rte_ring_mp_hts_enqueue_bulk(mp->pool_data,
> > > > > > +		obj_table, n, NULL) == 0 ? -ENOBUFS : 0;
> > > > > > +}
> > > > > > +
> > > > > >  static int
> > > > > >  common_ring_mc_dequeue(struct rte_mempool *mp, void **obj_table, unsigned n)
> > > > > >  {
> > > > > > @@ -39,17 +55,30 @@ common_ring_sc_dequeue(struct rte_mempool *mp, void **obj_table, unsigned n)
> > > > > >  		obj_table, n, NULL) == 0 ? -ENOBUFS : 0;
> > > > > >  }
> > > > > > 
> > > > > > +static int
> > > > > > +rts_ring_mc_dequeue(struct rte_mempool *mp, void **obj_table, unsigned int n)
> > > > > > +{
> > > > > > +	return rte_ring_mc_rts_dequeue_bulk(mp->pool_data,
> > > > > > +		obj_table, n, NULL) == 0 ? -ENOBUFS : 0;
> > > > > > +}
> > > > > > +
> > > > > > +static int
> > > > > > +hts_ring_mc_dequeue(struct rte_mempool *mp, void **obj_table, unsigned int n)
> > > > > > +{
> > > > > > +	return rte_ring_mc_hts_dequeue_bulk(mp->pool_data,
> > > > > > +		obj_table, n, NULL) == 0 ? -ENOBUFS : 0;
> > > > > > +}
> > > > > > +
> > > > > >  static unsigned
> > > > > >  common_ring_get_count(const struct rte_mempool *mp)
> > > > > >  {
> > > > > >  	return rte_ring_count(mp->pool_data);
> > > > > >  }
> > > > > > 
> > > > > > -
> > > > > >  static int
> > > > > > -common_ring_alloc(struct rte_mempool *mp)
> > > > > > +ring_alloc(struct rte_mempool *mp, uint32_t rg_flags)
> > > > > >  {
> > > > > > -	int rg_flags = 0, ret;
> > > > > > +	int ret;
> > > > > >  	char rg_name[RTE_RING_NAMESIZE];
> > > > > >  	struct rte_ring *r;
> > > > > > 
> > > > > > @@ -60,12 +89,6 @@ common_ring_alloc(struct rte_mempool *mp)
> > > > > >  		return -rte_errno;
> > > > > >  	}
> > > > > > 
> > > > > > -	/* ring flags */
> > > > > > -	if (mp->flags & MEMPOOL_F_SP_PUT)
> > > > > > -		rg_flags |= RING_F_SP_ENQ;
> > > > > > -	if (mp->flags & MEMPOOL_F_SC_GET)
> > > > > > -		rg_flags |= RING_F_SC_DEQ;
> > > > > > -
> > > > > >  	/*
> > > > > >  	 * Allocate the ring that will be used to store objects.
> > > > > >  	 * Ring functions will return appropriate errors if we are
> > > > > > @@ -82,6 +105,40 @@ common_ring_alloc(struct rte_mempool *mp)
> > > > > >  	return 0;
> > > > > >  }
> > > > > > 
> > > > > > +static int
> > > > > > +common_ring_alloc(struct rte_mempool *mp)
> > > > > > +{
> > > > > > +	uint32_t rg_flags;
> > > > > > +
> > > > > > +	rg_flags = 0;
> > > > > Maybe it could go on the same line
> > > > > > +
> > > > > > +	/* ring flags */
> > > > > Not sure we need to keep this comment
> > > > > > +	if (mp->flags & MEMPOOL_F_SP_PUT)
> > > > > > +		rg_flags |= RING_F_SP_ENQ;
> > > > > > +	if (mp->flags & MEMPOOL_F_SC_GET)
> > > > > > +		rg_flags |= RING_F_SC_DEQ;
> > > > > > +
> > > > > > +	return ring_alloc(mp, rg_flags);
> > > > > > +}
> > > > > > +
> > > > > > +static int
> > > > > > +rts_ring_alloc(struct rte_mempool *mp)
> > > > > > +{
> > > > > > +	if ((mp->flags & (MEMPOOL_F_SP_PUT | MEMPOOL_F_SC_GET)) != 0)
> > > > > > +		return -EINVAL;
> > > > > Why do we need this? It is a problem to allow sc/sp in this mode (even
> > > > > if it's not optimal)?
> > > > 
> > > > These new sync modes (RTS, HTS) are for MT.
> > > > For SP/SC - there is simply no point to use MT sync modes.
> > > > I suppose there are a few choices:
> > > > 1. Make F_SP_PUT/F_SC_GET flags silently override expected ops behaviour
> > > >    and create actual ring with ST sync mode for prod/cons.
> > > > 2. Report an error.
> > > > 3. Silently ignore these flags.
> > > > 
> > > > As I can see, for "ring_mp_mc" ops we are doing #1,
> > > > while for "stack" we are doing #3.
> > > > For RTS/HTS I chose #2, as it seems cleaner to me.
> > > > Any thoughts from your side on what the preferable behaviour should be?
> > > > 
> > > The F_SP_PUT/F_SC_GET are only used in rte_mempool_create() to select
> > > the default ops among (ring_sp_sc, ring_mp_sc, ring_sp_mc,
> > > ring_mp_mc).
> > 
> > As I understand, nothing prevents the user from doing:
> > 
> > mp = rte_mempool_create_empty(name, n, elt_size, cache_size,
> > 	sizeof(struct rte_pktmbuf_pool_private), socket_id, 0);
> 
> Apologies, hit send accidentally.
> I meant the user can do:
> 
> mp = rte_mempool_create_empty(..., F_SP_PUT | F_SC_GET);
> rte_mempool_set_ops_byname(mp, "ring_mp_mc", NULL);
> 
> And in that case, he'll get an SP/SC ring underneath.

It looks like it's not the case. Since commit 449c49b93a6b ("mempool:
support handler operations"), the SP_PUT/SC_GET flags are converted into
a call to rte_mempool_set_ops_byname() in rte_mempool_create() only.

In rte_mempool_create_empty(), these flags are ignored. It is expected
that the user calls rte_mempool_set_ops_byname() by themselves. I don't
think this is a good behavior:

1/ The documentation of rte_mempool_create_empty() does not say that the
   flags are ignored, and a user can expect that F_SP_PUT | F_SC_GET sets
   the default ops like rte_mempool_create().

2/ If rte_mempool_set_ops_byname() is not called after
   rte_mempool_create_empty() (and it looks like this happens in dpdk's
   code), the default ops are the ones registered at index 0. This
   depends on the link order.

So I propose to move the following code into rte_mempool_create_empty():

	if ((flags & MEMPOOL_F_SP_PUT) && (flags & MEMPOOL_F_SC_GET))
		ret = rte_mempool_set_ops_byname(mp, "ring_sp_sc", NULL);
	else if (flags & MEMPOOL_F_SP_PUT)
		ret = rte_mempool_set_ops_byname(mp, "ring_sp_mc", NULL);
	else if (flags & MEMPOOL_F_SC_GET)
		ret = rte_mempool_set_ops_byname(mp, "ring_mp_sc", NULL);
	else
		ret = rte_mempool_set_ops_byname(mp, "ring_mp_mc", NULL);

What do you think?

> > 
> > > 
> > > I don't think we should look at it when using specific ops.
> > > So I'll tend to say 3. is the correct thing to do.
> > Ok, will resend v3 then.
> > 
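For completeness, a minimal sketch of how an application could opt in to
one of the new sync modes once this patch is applied. It assumes the ops
are registered under the names "ring_mt_rts" / "ring_mt_hts" quoted in
the release note hunk above, and it only uses the existing
rte_mempool_create_empty(), rte_mempool_set_ops_byname() and
rte_mempool_populate_default() calls; the helper name below is made up
for illustration only:

	#include <rte_mempool.h>

	/* Sketch: create an empty mempool and explicitly select the RTS
	 * ring ops instead of the default "ring_mp_mc". */
	static struct rte_mempool *
	create_rts_pool(const char *name, unsigned int n,
			unsigned int elt_size, int socket_id)
	{
		struct rte_mempool *mp;

		/* No MEMPOOL_F_SP_PUT / MEMPOOL_F_SC_GET here: with
		 * option #2 above, the rts/hts alloc callback rejects
		 * those flags with -EINVAL. */
		mp = rte_mempool_create_empty(name, n, elt_size, 0, 0,
				socket_id, 0);
		if (mp == NULL)
			return NULL;

		if (rte_mempool_set_ops_byname(mp, "ring_mt_rts", NULL) != 0 ||
				rte_mempool_populate_default(mp) < 0) {
			rte_mempool_free(mp);
			return NULL;
		}
		return mp;
	}

Selecting "ring_mt_hts" instead would work the same way; the point is
simply that the sync mode is chosen through rte_mempool_set_ops_byname()
rather than through mempool flags.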