From: "Ananyev, Konstantin" <konstantin.ananyev@intel.com>
To: Olivier Matz <olivier.matz@6wind.com>
CC: "dev@dpdk.org" <dev@dpdk.org>, "arybchenko@solarflare.com"
 <arybchenko@solarflare.com>, "jielong.zjl@antfin.com"
 <jielong.zjl@antfin.com>, "Eads, Gage" <gage.eads@intel.com>
Thread-Topic: [PATCH v2] mempool/ring: add support for new ring sync modes
Date: Mon, 13 Jul 2020 14:46:35 +0000
Message-ID: <BYAPR11MB3301164FB30EE83156962EDD9A600@BYAPR11MB3301.namprd11.prod.outlook.com>
References: <20200521132027.28219-1-konstantin.ananyev@intel.com>
 <20200629161024.29059-1-konstantin.ananyev@intel.com>
 <20200709161829.GV5869@platinum>
 <BYAPR11MB3301B58168D608B7B97B57489A640@BYAPR11MB3301.namprd11.prod.outlook.com>
 <20200710125249.GZ5869@platinum>
 <BYAPR11MB330162F6719AA9747FDD4B5C9A650@BYAPR11MB3301.namprd11.prod.outlook.com>
 <BYAPR11MB33013FCC3A71FF635F7CD2419A650@BYAPR11MB3301.namprd11.prod.outlook.com>
 <20200713133054.GN5869@platinum>
In-Reply-To: <20200713133054.GN5869@platinum>
Subject: Re: [dpdk-dev] [PATCH v2] mempool/ring: add support for new ring
	sync modes

Hi Olivier,

> Hi Konstantin,
>
> On Fri, Jul 10, 2020 at 03:20:12PM +0000, Ananyev, Konstantin wrote:
> >
> >
> > >
> > > Hi Olivier,
> > >
> > > > Hi Konstantin,
> > > >
> > > > On Thu, Jul 09, 2020 at 05:55:30PM +0000, Ananyev, Konstantin wrote:
> > > > > Hi Olivier,
> > > > >
> > > > > > Hi Konstantin,
> > > > > >
> > > > > > On Mon, Jun 29, 2020 at 05:10:24PM +0100, Konstantin Ananyev wrote:
> > > > > > > v2:
> > > > > > >  - update Release Notes (as per comments)
> > > > > > >
> > > > > > > Two new sync modes were introduced into rte_ring:
> > > > > > > relaxed tail sync (RTS) and head/tail sync (HTS).
> > > > > > > This change provides user with ability to select these
> > > > > > > modes for ring based mempool via mempool ops API.
> > > > > > >
> > > > > > > Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
> > > > > > > Acked-by: Gage Eads <gage.eads@intel.com>
> > > > > > > ---
> > > > > > >  doc/guides/rel_notes/release_20_08.rst  |  6 ++
> > > > > > >  drivers/mempool/ring/rte_mempool_ring.c | 97 ++++++++++++++++++++++++---
> > > > > > >  2 files changed, 94 insertions(+), 9 deletions(-)
> > > > > > >
> > > > > > > diff --git a/doc/guides/rel_notes/release_20_08.rst b/doc/guides/rel_notes/release_20_08.rst
> > > > > > > index eaaf11c37..7bdcf3aac 100644
> > > > > > > --- a/doc/guides/rel_notes/release_20_08.rst
> > > > > > > +++ b/doc/guides/rel_notes/release_20_08.rst
> > > > > > > @@ -84,6 +84,12 @@ New Features
> > > > > > >    * Dump ``rte_flow`` memory consumption.
> > > > > > >    * Measure packet per second forwarding.
> > > > > > >
> > > > > > > +* **Added support for new sync modes into mempool ring driver.**
> > > > > > > +
> > > > > > > +  Added ability to select new ring synchronisation modes:
> > > > > > > +  ``relaxed tail sync (ring_mt_rts)`` and ``head/tail sync (ring_mt_hts)``
> > > > > > > +  via mempool ops API.
> > > > > > > +
> > > > > > >
> > > > > > >  Removed Items
> > > > > > >  -------------
> > > > > > > diff --git a/drivers/mempool/ring/rte_mempool_ring.c b/drivers/mempool/ring/rte_mempool_ring.c
> > > > > > > index bc123fc52..15ec7dee7 100644
> > > > > > > --- a/drivers/mempool/ring/rte_mempool_ring.c
> > > > > > > +++ b/drivers/mempool/ring/rte_mempool_ring.c
> > > > > > > @@ -25,6 +25,22 @@ common_ring_sp_enqueue(struct rte_mempool *mp, void * const *obj_table,
> > > > > > >  			obj_table, n, NULL) == 0 ? -ENOBUFS : 0;
> > > > > > >  }
> > > > > > >
> > > > > > > +static int
> > > > > > > +rts_ring_mp_enqueue(struct rte_mempool *mp, void * const *obj_table,
> > > > > > > +	unsigned int n)
> > > > > > > +{
> > > > > > > +	return rte_ring_mp_rts_enqueue_bulk(mp->pool_data,
> > > > > > > +			obj_table, n, NULL) == 0 ? -ENOBUFS : 0;
> > > > > > > +}
> > > > > > > +
> > > > > > > +static int
> > > > > > > +hts_ring_mp_enqueue(struct rte_mempool *mp, void * const *obj_table,
> > > > > > > +	unsigned int n)
> > > > > > > +{
> > > > > > > +	return rte_ring_mp_hts_enqueue_bulk(mp->pool_data,
> > > > > > > +			obj_table, n, NULL) == 0 ? -ENOBUFS : 0;
> > > > > > > +}
> > > > > > > +
> > > > > > >  static int
> > > > > > >  common_ring_mc_dequeue(struct rte_mempool *mp, void **obj_table, unsigned n)
> > > > > > >  {
> > > > > > > @@ -39,17 +55,30 @@ common_ring_sc_dequeue(struct rte_mempool *mp, void **obj_table, unsigned n)
> > > > > > >  			obj_table, n, NULL) == 0 ? -ENOBUFS : 0;
> > > > > > >  }
> > > > > > >
> > > > > > > +static int
> > > > > > > +rts_ring_mc_dequeue(struct rte_mempool *mp, void **obj_table, unsigned int n)
> > > > > > > +{
> > > > > > > +	return rte_ring_mc_rts_dequeue_bulk(mp->pool_data,
> > > > > > > +			obj_table, n, NULL) == 0 ? -ENOBUFS : 0;
> > > > > > > +}
> > > > > > > +
> > > > > > > +static int
> > > > > > > +hts_ring_mc_dequeue(struct rte_mempool *mp, void **obj_table, unsigned int n)
> > > > > > > +{
> > > > > > > +	return rte_ring_mc_hts_dequeue_bulk(mp->pool_data,
> > > > > > > +			obj_table, n, NULL) == 0 ? -ENOBUFS : 0;
> > > > > > > +}
> > > > > > > +
> > > > > > >  static unsigned
> > > > > > >  common_ring_get_count(const struct rte_mempool *mp)
> > > > > > >  {
> > > > > > >  	return rte_ring_count(mp->pool_data);
> > > > > > >  }
> > > > > > >
> > > > > > > -
> > > > > > >  static int
> > > > > > > -common_ring_alloc(struct rte_mempool *mp)
> > > > > > > +ring_alloc(struct rte_mempool *mp, uint32_t rg_flags)
> > > > > > >  {
> > > > > > > -	int rg_flags = 0, ret;
> > > > > > > +	int ret;
> > > > > > >  	char rg_name[RTE_RING_NAMESIZE];
> > > > > > >  	struct rte_ring *r;
> > > > > > >
> > > > > > > @@ -60,12 +89,6 @@ common_ring_alloc(struct rte_mempool *mp)
> > > > > > >  		return -rte_errno;
> > > > > > >  	}
> > > > > > >
> > > > > > > -	/* ring flags */
> > > > > > > -	if (mp->flags & MEMPOOL_F_SP_PUT)
> > > > > > > -		rg_flags |= RING_F_SP_ENQ;
> > > > > > > -	if (mp->flags & MEMPOOL_F_SC_GET)
> > > > > > > -		rg_flags |= RING_F_SC_DEQ;
> > > > > > > -
> > > > > > >  	/*
> > > > > > >  	 * Allocate the ring that will be used to store objects.
> > > > > > >  	 * Ring functions will return appropriate errors if we are
> > > > > > > @@ -82,6 +105,40 @@ common_ring_alloc(struct rte_mempool *mp)
> > > > > > >  	return 0;
> > > > > > >  }
> > > > > > >
> > > > > > > +static int
> > > > > > > +common_ring_alloc(struct rte_mempool *mp)
> > > > > > > +{
> > > > > > > +	uint32_t rg_flags;
> > > > > > > +
> > > > > > > +	rg_flags = 0;
> > > > > >
> > > > > > Maybe it could go on the same line
> > > > > >
> > > > > > > +
> > > > > > > +	/* ring flags */
> > > > > >
> > > > > > Not sure we need to keep this comment
> > > > > >
> > > > > > > +	if (mp->flags & MEMPOOL_F_SP_PUT)
> > > > > > > +		rg_flags |= RING_F_SP_ENQ;
> > > > > > > +	if (mp->flags & MEMPOOL_F_SC_GET)
> > > > > > > +		rg_flags |= RING_F_SC_DEQ;
> > > > > > > +
> > > > > > > +	return ring_alloc(mp, rg_flags);
> > > > > > > +}
> > > > > > > +
> > > > > > > +static int
> > > > > > > +rts_ring_alloc(struct rte_mempool *mp)
> > > > > > > +{
> > > > > > > +	if ((mp->flags & (MEMPOOL_F_SP_PUT | MEMPOOL_F_SC_GET)) != 0)
> > > > > > > +		return -EINVAL;
> > > > > >
> > > > > > Why do we need this? Is it a problem to allow sc/sp in this mode (even
> > > > > > if it's not optimal)?
> > > > >
> > > > > These new sync modes (RTS, HTS) are for MT.
> > > > > For SP/SC - there is simply no point in using MT sync modes.
> > > > > I suppose there are a few choices:
> > > > > 1. Make F_SP_PUT/F_SC_GET flags silently override the expected ops behaviour
> > > > >    and create the actual ring with ST sync mode for prod/cons.
> > > > > 2. Report an error.
> > > > > 3. Silently ignore these flags.
> > > > >
> > > > > As I can see, for "ring_mp_mc" ops we are doing #1,
> > > > > while for "stack" we are doing #3.
> > > > > For RTS/HTS I chose #2, as it seems cleaner to me.
> > > > > Any thoughts from your side on what the preferable behaviour should be?
> > > >
> > > > The F_SP_PUT/F_SC_GET are only used in rte_mempool_create() to select
> > > > the default ops among (ring_sp_sc, ring_mp_sc, ring_sp_mc,
> > > > ring_mp_mc).
> > >
> > > As I understand, nothing prevents a user from doing:
> > >
> > > mp = rte_mempool_create_empty(name, n, elt_size, cache_size,
> > >                  sizeof(struct rte_pktmbuf_pool_private), socket_id, 0);
> >
> > Apologies, I hit send accidentally.
> > I meant user can do:
> >
> > mp = rte_mempool_create_empty(..., F_SP_PUT | F_SC_GET);
> > rte_mempool_set_ops_byname(mp, "ring_mp_mc", NULL);
> >
> > And in that case, he'll get an SP/SC ring underneath.
>
> It looks like it's not the case. Since commit 449c49b93a6b ("mempool: support
> handler operations"), the flags SP_PUT/SC_GET are converted into a call
> to rte_mempool_set_ops_byname() in rte_mempool_create() only.
>
> In rte_mempool_create_empty(), these flags are ignored. It is expected
> that the user calls rte_mempool_set_ops_byname() by itself.

As I understand the code - not exactly.
rte_mempool_create_empty() doesn't take any specific action based on the 'flags' value,
but it does store that value inside mp->flags.
Later, when mempool_ops_alloc_once() is called, these flags will be used by
common_ring_alloc() and might override the ring behaviour selected by the ops.
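
To illustrate the scenario (just a minimal sketch: the pool name, sizes and
cache size are placeholders and error checks are omitted):

	/* assumes <rte_mempool.h> and <rte_mbuf.h> are included */
	struct rte_mempool *mp;

	/* user asks for SP/SC via flags ... */
	mp = rte_mempool_create_empty("ex_pool", 4096, 2048, 256,
		sizeof(struct rte_pktmbuf_pool_private), SOCKET_ID_ANY,
		MEMPOOL_F_SP_PUT | MEMPOOL_F_SC_GET);

	/* ... but explicitly selects the MT ops */
	rte_mempool_set_ops_byname(mp, "ring_mp_mc", NULL);

	/* at populate time mempool_ops_alloc_once() ends up in
	 * common_ring_alloc(), which reads the flags stored in mp->flags and
	 * creates the ring with RING_F_SP_ENQ | RING_F_SC_DEQ, i.e. an SP/SC
	 * ring despite the "ring_mp_mc" ops name.
	 */
	rte_mempool_populate_default(mp);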

>
> I don't think it is a good behavior:
>
> 1/ The documentation of rte_mempool_create_empty() does not say that the
>    flags are ignored, and a user can expect that F_SP_PUT | F_SC_GET
>    sets the default ops like rte_mempool_create().
>
> 2/ If rte_mempool_set_ops_byname() is not called after
>    rte_mempool_create_empty() (and it looks it happens in dpdk's code),
>    the default ops are the ones registered at index 0. This depends on
>    the link order.
>
> So I propose to move the following code in
> rte_mempool_create_empty().
>
> 	if ((flags & MEMPOOL_F_SP_PUT) && (flags & MEMPOOL_F_SC_GET))
> 		ret = rte_mempool_set_ops_byname(mp, "ring_sp_sc", NULL);
> 	else if (flags & MEMPOOL_F_SP_PUT)
> 		ret = rte_mempool_set_ops_byname(mp, "ring_sp_mc", NULL);
> 	else if (flags & MEMPOOL_F_SC_GET)
> 		ret = rte_mempool_set_ops_byname(mp, "ring_mp_sc", NULL);
> 	else
> 		ret = rte_mempool_set_ops_byname(mp, "ring_mp_mc", NULL);
>
> What do you think?

I think it would be a good thing - in that case we'll always have
"ring_mp_mc" selected as the default one.
As another thought, it probably would be good to deprecate and later remove
MEMPOOL_F_SP_PUT and MEMPOOL_F_SC_GET completely.
These days the user can select this behaviour via mempool ops, and such dualism
just makes things more error-prone and harder to maintain,
especially as we don't have a clear policy on what should take priority
for sync mode selection: mempool ops or flags.
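
Just to illustrate the ops-only way of selecting the sync mode (again only a
sketch, with placeholder pool parameters and no error checks):

	struct rte_mempool *mp;

	mp = rte_mempool_create_empty("ex_pool", 4096, 2048, 256,
		sizeof(struct rte_pktmbuf_pool_private), SOCKET_ID_ANY, 0);

	/* sync mode chosen purely by ops name: "ring_sp_sc", "ring_mp_mc",
	 * or the new "ring_mt_rts" / "ring_mt_hts" from this patch.
	 */
	rte_mempool_set_ops_byname(mp, "ring_mt_rts", NULL);
	rte_mempool_populate_default(mp);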