From: Olivier Matz <olivier.matz@6wind.com>
To: "Ananyev, Konstantin" <konstantin.ananyev@intel.com>
Cc: "dev@dpdk.org" <dev@dpdk.org>,
"arybchenko@solarflare.com" <arybchenko@solarflare.com>,
"jielong.zjl@antfin.com" <jielong.zjl@antfin.com>,
"Eads, Gage" <gage.eads@intel.com>
Subject: Re: [dpdk-dev] [PATCH v2] mempool/ring: add support for new ring sync modes
Date: Mon, 13 Jul 2020 15:30:54 +0200 [thread overview]
Message-ID: <20200713133054.GN5869@platinum> (raw)
In-Reply-To: <BYAPR11MB33013FCC3A71FF635F7CD2419A650@BYAPR11MB3301.namprd11.prod.outlook.com>
Hi Konstantin,
On Fri, Jul 10, 2020 at 03:20:12PM +0000, Ananyev, Konstantin wrote:
>
>
> >
> > Hi Olivier,
> >
> > > Hi Konstantin,
> > >
> > > On Thu, Jul 09, 2020 at 05:55:30PM +0000, Ananyev, Konstantin wrote:
> > > > Hi Olivier,
> > > >
> > > > > Hi Konstantin,
> > > > >
> > > > > On Mon, Jun 29, 2020 at 05:10:24PM +0100, Konstantin Ananyev wrote:
> > > > > > v2:
> > > > > > - update Release Notes (as per comments)
> > > > > >
> > > > > > Two new sync modes were introduced into rte_ring:
> > > > > > relaxed tail sync (RTS) and head/tail sync (HTS).
> > > > > > This change provides user with ability to select these
> > > > > > modes for ring based mempool via mempool ops API.
> > > > > >
> > > > > > Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
> > > > > > Acked-by: Gage Eads <gage.eads@intel.com>
> > > > > > ---
> > > > > > doc/guides/rel_notes/release_20_08.rst | 6 ++
> > > > > > drivers/mempool/ring/rte_mempool_ring.c | 97 ++++++++++++++++++++++---
> > > > > > 2 files changed, 94 insertions(+), 9 deletions(-)
> > > > > >
> > > > > > diff --git a/doc/guides/rel_notes/release_20_08.rst b/doc/guides/rel_notes/release_20_08.rst
> > > > > > index eaaf11c37..7bdcf3aac 100644
> > > > > > --- a/doc/guides/rel_notes/release_20_08.rst
> > > > > > +++ b/doc/guides/rel_notes/release_20_08.rst
> > > > > > @@ -84,6 +84,12 @@ New Features
> > > > > > * Dump ``rte_flow`` memory consumption.
> > > > > > * Measure packet per second forwarding.
> > > > > >
> > > > > > +* **Added support for new sync modes into mempool ring driver.**
> > > > > > +
> > > > > > + Added ability to select new ring synchronisation modes:
> > > > > > + ``relaxed tail sync (ring_mt_rts)`` and ``head/tail sync (ring_mt_hts)``
> > > > > > + via mempool ops API.
> > > > > > +
> > > > > >
> > > > > > Removed Items
> > > > > > -------------
> > > > > > diff --git a/drivers/mempool/ring/rte_mempool_ring.c b/drivers/mempool/ring/rte_mempool_ring.c
> > > > > > index bc123fc52..15ec7dee7 100644
> > > > > > --- a/drivers/mempool/ring/rte_mempool_ring.c
> > > > > > +++ b/drivers/mempool/ring/rte_mempool_ring.c
> > > > > > @@ -25,6 +25,22 @@ common_ring_sp_enqueue(struct rte_mempool *mp, void * const *obj_table,
> > > > > > obj_table, n, NULL) == 0 ? -ENOBUFS : 0;
> > > > > > }
> > > > > >
> > > > > > +static int
> > > > > > +rts_ring_mp_enqueue(struct rte_mempool *mp, void * const *obj_table,
> > > > > > + unsigned int n)
> > > > > > +{
> > > > > > + return rte_ring_mp_rts_enqueue_bulk(mp->pool_data,
> > > > > > + obj_table, n, NULL) == 0 ? -ENOBUFS : 0;
> > > > > > +}
> > > > > > +
> > > > > > +static int
> > > > > > +hts_ring_mp_enqueue(struct rte_mempool *mp, void * const *obj_table,
> > > > > > + unsigned int n)
> > > > > > +{
> > > > > > + return rte_ring_mp_hts_enqueue_bulk(mp->pool_data,
> > > > > > + obj_table, n, NULL) == 0 ? -ENOBUFS : 0;
> > > > > > +}
> > > > > > +
> > > > > > static int
> > > > > > common_ring_mc_dequeue(struct rte_mempool *mp, void **obj_table, unsigned n)
> > > > > > {
> > > > > > @@ -39,17 +55,30 @@ common_ring_sc_dequeue(struct rte_mempool *mp, void **obj_table, unsigned n)
> > > > > > obj_table, n, NULL) == 0 ? -ENOBUFS : 0;
> > > > > > }
> > > > > >
> > > > > > +static int
> > > > > > +rts_ring_mc_dequeue(struct rte_mempool *mp, void **obj_table, unsigned int n)
> > > > > > +{
> > > > > > + return rte_ring_mc_rts_dequeue_bulk(mp->pool_data,
> > > > > > + obj_table, n, NULL) == 0 ? -ENOBUFS : 0;
> > > > > > +}
> > > > > > +
> > > > > > +static int
> > > > > > +hts_ring_mc_dequeue(struct rte_mempool *mp, void **obj_table, unsigned int n)
> > > > > > +{
> > > > > > + return rte_ring_mc_hts_dequeue_bulk(mp->pool_data,
> > > > > > + obj_table, n, NULL) == 0 ? -ENOBUFS : 0;
> > > > > > +}
> > > > > > +
> > > > > > static unsigned
> > > > > > common_ring_get_count(const struct rte_mempool *mp)
> > > > > > {
> > > > > > return rte_ring_count(mp->pool_data);
> > > > > > }
> > > > > >
> > > > > > -
> > > > > > static int
> > > > > > -common_ring_alloc(struct rte_mempool *mp)
> > > > > > +ring_alloc(struct rte_mempool *mp, uint32_t rg_flags)
> > > > > > {
> > > > > > - int rg_flags = 0, ret;
> > > > > > + int ret;
> > > > > > char rg_name[RTE_RING_NAMESIZE];
> > > > > > struct rte_ring *r;
> > > > > >
> > > > > > @@ -60,12 +89,6 @@ common_ring_alloc(struct rte_mempool *mp)
> > > > > > return -rte_errno;
> > > > > > }
> > > > > >
> > > > > > - /* ring flags */
> > > > > > - if (mp->flags & MEMPOOL_F_SP_PUT)
> > > > > > - rg_flags |= RING_F_SP_ENQ;
> > > > > > - if (mp->flags & MEMPOOL_F_SC_GET)
> > > > > > - rg_flags |= RING_F_SC_DEQ;
> > > > > > -
> > > > > > /*
> > > > > > * Allocate the ring that will be used to store objects.
> > > > > > * Ring functions will return appropriate errors if we are
> > > > > > @@ -82,6 +105,40 @@ common_ring_alloc(struct rte_mempool *mp)
> > > > > > return 0;
> > > > > > }
> > > > > >
> > > > > > +static int
> > > > > > +common_ring_alloc(struct rte_mempool *mp)
> > > > > > +{
> > > > > > + uint32_t rg_flags;
> > > > > > +
> > > > > > + rg_flags = 0;
> > > > >
> > > > > Maybe it could go on the same line
> > > > >
> > > > > > +
> > > > > > + /* ring flags */
> > > > >
> > > > > Not sure we need to keep this comment
> > > > >
> > > > > > + if (mp->flags & MEMPOOL_F_SP_PUT)
> > > > > > + rg_flags |= RING_F_SP_ENQ;
> > > > > > + if (mp->flags & MEMPOOL_F_SC_GET)
> > > > > > + rg_flags |= RING_F_SC_DEQ;
> > > > > > +
> > > > > > + return ring_alloc(mp, rg_flags);
> > > > > > +}
> > > > > > +
> > > > > > +static int
> > > > > > +rts_ring_alloc(struct rte_mempool *mp)
> > > > > > +{
> > > > > > + if ((mp->flags & (MEMPOOL_F_SP_PUT | MEMPOOL_F_SC_GET)) != 0)
> > > > > > + return -EINVAL;
> > > > >
> > > > > Why do we need this? Is it a problem to allow sc/sp in this mode (even
> > > > > if it's not optimal)?
> > > >
> > > > These new sync modes (RTS, HTS) are for MT.
> > > > For SP/SC - there is simply no point in using MT sync modes.
> > > > I suppose there are a few choices:
> > > > 1. Make F_SP_PUT/F_SC_GET flags silently override expected ops behaviour
> > > > and create actual ring with ST sync mode for prod/cons.
> > > > 2. Report an error.
> > > > 3. Silently ignore these flags.
> > > >
> > > > As I can see for "ring_mp_mc" ops, we are doing #1,
> > > > while for "stack" we are doing #3.
> > > > For RTS/HTS I chose #2, as it seems cleaner to me.
> > > > Any thoughts from your side on what the preferable behaviour should be?
> > >
> > > The F_SP_PUT/F_SC_GET are only used in rte_mempool_create() to select
> > > the default ops among (ring_sp_sc, ring_mp_sc, ring_sp_mc,
> > > ring_mp_mc).
> >
> > As I understand, nothing prevents the user from doing:
> >
> > mp = rte_mempool_create_empty(name, n, elt_size, cache_size,
> > sizeof(struct rte_pktmbuf_pool_private), socket_id, 0);
>
> Apologies, hit send accidentally.
> I meant user can do:
>
> mp = rte_mempool_create_empty(..., F_SP_PUT | F_SC_GET);
> rte_mempool_set_ops_byname(mp, "ring_mp_mc", NULL);
>
> And in that case, he'll get an SP/SC ring underneath.
It looks like it's not the case. Since commit 449c49b93a6b ("mempool:
support handler operations"), the flags SP_PUT/SC_GET are only converted
into a call to rte_mempool_set_ops_byname() in rte_mempool_create().
In rte_mempool_create_empty(), these flags are ignored: it is expected
that the user calls rte_mempool_set_ops_byname() themselves.
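For illustration, here is roughly what a caller has to do today (a minimal
sketch: the helper name, the element/cache sizes and the error handling are
mine, only the mempool API calls are the real ones):

#include <rte_mbuf.h>
#include <rte_mempool.h>

/* Sketch of the current behavior: the SP/SC flags passed to
 * rte_mempool_create_empty() do not select the ops, so the caller
 * has to set them explicitly afterwards. */
static struct rte_mempool *
create_sp_sc_pool(const char *name, unsigned int n, int socket_id)
{
        struct rte_mempool *mp;

        mp = rte_mempool_create_empty(name, n, RTE_MBUF_DEFAULT_BUF_SIZE, 0,
                sizeof(struct rte_pktmbuf_pool_private), socket_id,
                MEMPOOL_F_SP_PUT | MEMPOOL_F_SC_GET);
        if (mp == NULL)
                return NULL;

        /* Without this call, mp keeps whatever ops handler happens to be
         * registered at index 0, regardless of the flags above. */
        if (rte_mempool_set_ops_byname(mp, "ring_sp_sc", NULL) != 0) {
                rte_mempool_free(mp);
                return NULL;
        }

        return mp;
}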
I don't think this is a good behavior:
1/ The documentation of rte_mempool_create_empty() does not say that the
   flags are ignored, and a user can expect that F_SP_PUT | F_SC_GET
   select the default ops, as rte_mempool_create() does.
2/ If rte_mempool_set_ops_byname() is not called after
   rte_mempool_create_empty() (and it looks like this happens in dpdk's
   code), the default ops are the ones registered at index 0, which
   depends on the link order.
So I propose to move the following code into
rte_mempool_create_empty():
if ((flags & MEMPOOL_F_SP_PUT) && (flags & MEMPOOL_F_SC_GET))
        ret = rte_mempool_set_ops_byname(mp, "ring_sp_sc", NULL);
else if (flags & MEMPOOL_F_SP_PUT)
        ret = rte_mempool_set_ops_byname(mp, "ring_sp_mc", NULL);
else if (flags & MEMPOOL_F_SC_GET)
        ret = rte_mempool_set_ops_byname(mp, "ring_mp_sc", NULL);
else
        ret = rte_mempool_set_ops_byname(mp, "ring_mp_mc", NULL);
What do you think?
>
> >
> >
> > > I don't think we should look at it when using specific ops.
> > >
> > > So I'll tend to say 3. is the correct thing to do.
> >
> > Ok, will resend v3 then.
> >
Thread overview: 25+ messages
2020-05-21 13:20 [dpdk-dev] [PATCH 20.08] " Konstantin Ananyev
2020-06-29 16:10 ` [dpdk-dev] [PATCH v2] " Konstantin Ananyev
2020-07-09 16:18 ` Olivier Matz
2020-07-09 17:55 ` Ananyev, Konstantin
2020-07-10 12:52 ` Olivier Matz
2020-07-10 15:15 ` Ananyev, Konstantin
2020-07-10 15:20 ` Ananyev, Konstantin
2020-07-13 13:30 ` Olivier Matz [this message]
2020-07-13 14:46 ` Ananyev, Konstantin
2020-07-13 15:00 ` Olivier Matz
2020-07-13 16:29 ` Ananyev, Konstantin
2020-07-10 16:21 ` [dpdk-dev] [PATCH v3] " Konstantin Ananyev
2020-07-10 22:44 ` Thomas Monjalon
2020-07-13 12:58 ` Ananyev, Konstantin
2020-07-13 13:57 ` Thomas Monjalon
2020-07-13 15:50 ` [dpdk-dev] [PATCH v4 0/2] " Konstantin Ananyev
2020-07-13 15:50 ` [dpdk-dev] [PATCH v4 1/2] doc: add ring based mempool guide Konstantin Ananyev
2020-07-13 15:50 ` [dpdk-dev] [PATCH v4 2/2] mempool/ring: add support for new ring sync modes Konstantin Ananyev
2020-07-13 17:37 ` Thomas Monjalon
2020-07-14 9:16 ` Ananyev, Konstantin
2020-07-15 9:59 ` Thomas Monjalon
2020-07-15 14:58 ` [dpdk-dev] [PATCH v5 0/2] " Konstantin Ananyev
2020-07-15 14:58 ` [dpdk-dev] [PATCH v5 1/2] doc: add ring based mempool guide Konstantin Ananyev
2020-07-15 14:58 ` [dpdk-dev] [PATCH v5 2/2] mempool/ring: add support for new ring sync modes Konstantin Ananyev
2020-07-21 17:25 ` David Marchand