From: "Yan, Zhirun" <zhirun.yan@intel.com>
To: Pavan Nikhilesh Bhagavatula <pbhagavatula@marvell.com>, "dev@dpdk.org"
 <dev@dpdk.org>, Jerin Jacob Kollanukkaran <jerinj@marvell.com>, "Kiran Kumar
 Kokkilagadda" <kirankumark@marvell.com>, Nithin Kumar Dabilpuram
 <ndabilpuram@marvell.com>, "stephen@networkplumber.org"
 <stephen@networkplumber.org>
CC: "Liang, Cunming" <cunming.liang@intel.com>, "Wang, Haiyue"
 <haiyue.wang@intel.com>
Subject: RE: [EXT] [PATCH v5 09/15] graph: introduce stream moving cross cores
Date: Fri, 5 May 2023 02:10:38 +0000
Message-ID: <SN7PR11MB67758225C9E0BED19EDEB3CA85729@SN7PR11MB6775.namprd11.prod.outlook.com>
References: <20230330061834.3118201-1-zhirun.yan@intel.com>
 <20230331040306.3143693-1-zhirun.yan@intel.com>
 <20230331040306.3143693-10-zhirun.yan@intel.com>
 <PH0PR18MB40869E0C454B4ECA52D38D60DE6A9@PH0PR18MB4086.namprd18.prod.outlook.com>
In-Reply-To: <PH0PR18MB40869E0C454B4ECA52D38D60DE6A9@PH0PR18MB4086.namprd18.prod.outlook.com>



> -----Original Message-----
> From: Pavan Nikhilesh Bhagavatula <pbhagavatula@marvell.com>
> Sent: Thursday, April 27, 2023 10:53 PM
> To: Yan, Zhirun <zhirun.yan@intel.com>; dev@dpdk.org; Jerin Jacob
> Kollanukkaran <jerinj@marvell.com>; Kiran Kumar Kokkilagadda
> <kirankumark@marvell.com>; Nithin Kumar Dabilpuram
> <ndabilpuram@marvell.com>; stephen@networkplumber.org
> Cc: Liang, Cunming <cunming.liang@intel.com>; Wang, Haiyue
> <haiyue.wang@intel.com>
> Subject: RE: [EXT] [PATCH v5 09/15] graph: introduce stream moving cross cores
>
> > This patch introduces key functions to allow a worker thread to enable
> > enqueue and move streams of objects to the next nodes over different
> > cores.
> >
> > Signed-off-by: Haiyue Wang <haiyue.wang@intel.com>
> > Signed-off-by: Cunming Liang <cunming.liang@intel.com>
> > Signed-off-by: Zhirun Yan <zhirun.yan@intel.com>
> > ---
> >  lib/graph/graph_private.h            |  27 +++++
> >  lib/graph/meson.build                |   2 +-
> >  lib/graph/rte_graph_model_dispatch.c | 145 +++++++++++++++++++++++++++
> >  lib/graph/rte_graph_model_dispatch.h |  37 +++++++
> >  lib/graph/version.map                |   2 +
> >  5 files changed, 212 insertions(+), 1 deletion(-)
> >
> > diff --git a/lib/graph/graph_private.h b/lib/graph/graph_private.h
> > index b66b18ebbc..e1a2a4bfd8 100644
> > --- a/lib/graph/graph_private.h
> > +++ b/lib/graph/graph_private.h
> > @@ -366,4 +366,31 @@ void graph_dump(FILE *f, struct graph *g);
> >   */
> >  void node_dump(FILE *f, struct node *n);
> >
> > +/**
> > + * @internal
> > + *
> > + * Create the graph schedule work queue. All cloned graphs attached to the
> > + * parent graph MUST be destroyed together, due to a limitation of the
> > + * fast schedule design.
> > + *
> > + * @param _graph
> > + *   The graph object
> > + * @param _parent_graph
> > + *   The parent graph object which holds the run-queue head.
> > + *
> > + * @return
> > + *   - 0: Success.
> > + *   - <0: Graph schedule work queue related error.
> > + */
> > +int graph_sched_wq_create(struct graph *_graph,
> > +			  struct graph *_parent_graph);
> > +
> > +/**
> > + * @internal
> > + *
> > + * Destroy the graph schedule work queue.
> > + *
> > + * @param _graph
> > + *   The graph object
> > + */
> > +void graph_sched_wq_destroy(struct graph *_graph);
> > +
> >  #endif /* _RTE_GRAPH_PRIVATE_H_ */
> > diff --git a/lib/graph/meson.build b/lib/graph/meson.build
> > index c729d984b6..e21affa280 100644
> > --- a/lib/graph/meson.build
> > +++ b/lib/graph/meson.build
> > @@ -20,4 +20,4 @@ sources = files(
> >  )
> >  headers = files('rte_graph.h', 'rte_graph_worker.h')
> >
> > -deps += ['eal', 'pcapng']
> > +deps += ['eal', 'pcapng', 'mempool', 'ring']
> > diff --git a/lib/graph/rte_graph_model_dispatch.c
> > b/lib/graph/rte_graph_model_dispatch.c
> > index 4a2f99496d..a300fefb85 100644
> > --- a/lib/graph/rte_graph_model_dispatch.c
> > +++ b/lib/graph/rte_graph_model_dispatch.c
> > @@ -5,6 +5,151 @@
> >  #include "graph_private.h"
> >  #include "rte_graph_model_dispatch.h"
> >
> > +int
> > +graph_sched_wq_create(struct graph *_graph, struct graph *_parent_graph)
> > +{
> > +	struct rte_graph *parent_graph = _parent_graph->graph;
> > +	struct rte_graph *graph = _graph->graph;
> > +	unsigned int wq_size;
> > +
> > +	wq_size = GRAPH_SCHED_WQ_SIZE(graph->nb_nodes);
> > +	wq_size = rte_align32pow2(wq_size + 1);
>
> Hi Zhirun,
>
> We should introduce a new function, `rte_graph_configure`, which can help
> the application control the ring size and mempool size of the work queue.
> We could fall back to default values if nothing is configured.
>
> rte_graph_configure should take a
> struct rte_graph_config {
> 	struct {
> 		u64 rsvd[8];
> 	} rtc;
> 	struct {
> 		u16 wq_size;
> 		...
> 	} dispatch;
> };
>
> This will help future graph models to have their own configuration.
>=20
> We can have a rte_graph_config_init() function to initialize the
> rte_graph_config structure.
>

Hi Pavan,

Thanks for your comments. I agree with you; it would be friendlier for
users and developers. As for the ring and mempool, there are some
limitations on the size (it must be a power of 2), so I prefer to use
u16 wq_size_max and u32 mp_size_max for users who have limited resources.

>
> > +
> > +	graph->wq = rte_ring_create(graph->name, wq_size, graph->socket,
> > +				    RING_F_SC_DEQ);
> > +	if (graph->wq == NULL)
> > +		SET_ERR_JMP(EIO, fail, "Failed to allocate graph WQ");
> > +
> > +	graph->mp = rte_mempool_create(graph->name, wq_size,
> > +				       sizeof(struct graph_sched_wq_node),
> > +				       0, 0, NULL, NULL, NULL, NULL,
> > +				       graph->socket, MEMPOOL_F_SP_PUT);
> > +	if (graph->mp == NULL)
> > +		SET_ERR_JMP(EIO, fail_mp,
> > +			    "Failed to allocate graph WQ schedule entry");
> > +
> > +	graph->lcore_id = _graph->lcore_id;
> > +
> > +	if (parent_graph->rq == NULL) {
> > +		parent_graph->rq = &parent_graph->rq_head;
> > +		SLIST_INIT(parent_graph->rq);
> > +	}
> > +
> > +	graph->rq = parent_graph->rq;
> > +	SLIST_INSERT_HEAD(graph->rq, graph, rq_next);
> > +
> > +	return 0;
> > +
> > +fail_mp:
> > +	rte_ring_free(graph->wq);
> > +	graph->wq = NULL;
> > +fail:
> > +	return -rte_errno;
> > +}
> > +
> > +void
> > +graph_sched_wq_destroy(struct graph *_graph)
> > +{
> > +	struct rte_graph *graph = _graph->graph;
> > +
> > +	if (graph == NULL)
> > +		return;
> > +
> > +	rte_ring_free(graph->wq);
> > +	graph->wq = NULL;
> > +
> > +	rte_mempool_free(graph->mp);
> > +	graph->mp = NULL;
> > +}
> > +
> > +static __rte_always_inline bool
> > +__graph_sched_node_enqueue(struct rte_node *node, struct rte_graph *graph)
> > +{
> > +	struct graph_sched_wq_node *wq_node;
> > +	uint16_t off = 0;
> > +	uint16_t size;
> > +
> > +submit_again:
> > +	if (rte_mempool_get(graph->mp, (void **)&wq_node) < 0)
> > +		goto fallback;
> > +
> > +	size = RTE_MIN(node->idx, RTE_DIM(wq_node->objs));
> > +	wq_node->node_off = node->off;
> > +	wq_node->nb_objs = size;
> > +	rte_memcpy(wq_node->objs, &node->objs[off], size * sizeof(void *));
> > +
> > +	while (rte_ring_mp_enqueue_bulk_elem(graph->wq, (void *)&wq_node,
> > +					     sizeof(wq_node), 1, NULL) == 0)
> > +		rte_pause();
> > +
> > +	off += size;
> > +	node->idx -= size;
> > +	if (node->idx > 0)
> > +		goto submit_again;
> > +
> > +	return true;
> > +
> > +fallback:
> > +	if (off != 0)
> > +		memmove(&node->objs[0], &node->objs[off],
> > +			node->idx * sizeof(void *));
> > +
> > +	return false;
> > +}
> > +
> > +bool __rte_noinline
> > +__rte_graph_sched_node_enqueue(struct rte_node *node,
> > +			       struct rte_graph_rq_head *rq)
> > +{
> > +	const unsigned int lcore_id = node->lcore_id;
> > +	struct rte_graph *graph;
> > +
> > +	SLIST_FOREACH(graph, rq, rq_next)
> > +		if (graph->lcore_id == lcore_id)
> > +			break;
> > +
> > +	return graph != NULL ? __graph_sched_node_enqueue(node, graph) : false;
> > +}
> > +
> > +void
> > +__rte_graph_sched_wq_process(struct rte_graph *graph)
> > +{
> > +	struct graph_sched_wq_node *wq_node;
> > +	struct rte_mempool *mp = graph->mp;
> > +	struct rte_ring *wq = graph->wq;
> > +	uint16_t idx, free_space;
> > +	struct rte_node *node;
> > +	unsigned int i, n;
> > +	struct graph_sched_wq_node *wq_nodes[32];
> > +
> > +	n = rte_ring_sc_dequeue_burst_elem(wq, wq_nodes, sizeof(wq_nodes[0]),
> > +					   RTE_DIM(wq_nodes), NULL);
> > +	if (n == 0)
> > +		return;
> > +
> > +	for (i = 0; i < n; i++) {
> > +		wq_node = wq_nodes[i];
> > +		node = RTE_PTR_ADD(graph, wq_node->node_off);
> > +		RTE_ASSERT(node->fence == RTE_GRAPH_FENCE);
> > +		idx = node->idx;
> > +		free_space = node->size - idx;
> > +
> > +		if (unlikely(free_space < wq_node->nb_objs))
> > +			__rte_node_stream_alloc_size(graph, node,
> > +						     node->size + wq_node->nb_objs);
> > +
> > +		memmove(&node->objs[idx], wq_node->objs,
> > +			wq_node->nb_objs * sizeof(void *));
> > +		memset(wq_node->objs, 0,
> > +		       wq_node->nb_objs * sizeof(void *));
>
> Memset should be avoided in the fastpath for better performance, as we
> anyway set wq_node->nb_objs to 0.
>
> > +		node->idx = idx + wq_node->nb_objs;
> > +
> > +		__rte_node_process(graph, node);
> > +
> > +		wq_node->nb_objs = 0;
> > +		node->idx = 0;
> > +	}
> > +
> > +	rte_mempool_put_bulk(mp, (void **)wq_nodes, n);
> > +}
> > +
> >  int
> >  rte_graph_model_dispatch_lcore_affinity_set(const char *name,
> >  					    unsigned int lcore_id)
> >  {
> > diff --git a/lib/graph/rte_graph_model_dispatch.h
> > b/lib/graph/rte_graph_model_dispatch.h
> > index 179624e972..18fa7ce0ab 100644
> > --- a/lib/graph/rte_graph_model_dispatch.h
> > +++ b/lib/graph/rte_graph_model_dispatch.h
> > @@ -14,12 +14,49 @@
> >   *
> >   * This API allows to set core affinity with the node.
> >   */
> > +#include <rte_errno.h>
> > +#include <rte_mempool.h>
> > +#include <rte_memzone.h>
> > +#include <rte_ring.h>
> > +
> >  #include "rte_graph_worker_common.h"
> >
> >  #ifdef __cplusplus
> >  extern "C" {
> >  #endif
> >
> > +#define GRAPH_SCHED_WQ_SIZE_MULTIPLIER  8
> > +#define GRAPH_SCHED_WQ_SIZE(nb_nodes)   \
> > +	((typeof(nb_nodes))((nb_nodes) * GRAPH_SCHED_WQ_SIZE_MULTIPLIER))
> > +
> > +/**
> > + * @internal
> > + *
> > + * Schedule the node to the right graph's work queue.
> > + *
> > + * @param node
> > + *   Pointer to the scheduled node object.
> > + * @param rq
> > + *   Pointer to the scheduled run-queue for all graphs.
> > + *
> > + * @return
> > + *   True on success, false otherwise.
> > + */
> > +__rte_experimental
> > +bool __rte_noinline __rte_graph_sched_node_enqueue(struct rte_node *node,
> > +						   struct rte_graph_rq_head *rq);
> > +
> > +/**
> > + * @internal
> > + *
> > + * Process all nodes (streams) in the graph's work queue.
> > + *
> > + * @param graph
> > + *   Pointer to the graph object.
> > + */
> > +__rte_experimental
> > +void __rte_graph_sched_wq_process(struct rte_graph *graph);
> > +
> >  /**
> >   * Set lcore affinity with the node.
> >   *
> > diff --git a/lib/graph/version.map b/lib/graph/version.map
> > index aaa86f66ed..d511133f39 100644
> > --- a/lib/graph/version.map
> > +++ b/lib/graph/version.map
> > @@ -48,6 +48,8 @@ EXPERIMENTAL {
> >
> >  	rte_graph_worker_model_set;
> >  	rte_graph_worker_model_get;
> > +	__rte_graph_sched_wq_process;
> > +	__rte_graph_sched_node_enqueue;
> >
> >  	rte_graph_model_dispatch_lcore_affinity_set;
> >
> > --
> > 2.37.2