From: Xueming Li <xuemingl@nvidia.com>
To: Robin Jarry <rjarry@redhat.com>
CC: Kiran Kumar K <kirankumark@marvell.com>, dpdk stable <stable@dpdk.org>
Subject: patch 'graph: fix stats retrieval while destroying a graph' has been
 queued to stable release 23.11.2
Date: Fri, 12 Jul 2024 18:45:03 +0800
Message-ID: <20240712104528.308638-78-xuemingl@nvidia.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20240712104528.308638-1-xuemingl@nvidia.com>
References: <20240712104528.308638-1-xuemingl@nvidia.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain

Hi,

FYI, your patch has been queued to stable release 23.11.2.

Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I receive no objections before 07/14/24, so please
shout if you have any.

Also note that after the patch there's a diff of the upstream commit vs
the patch as applied to the branch. This shows whether any rebasing was
needed to apply the patch to the stable branch. If the rebase required
code changes (i.e. not only metadata diffs), please double-check that it
was done correctly.

Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging

This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=45b177b78fee756ca8f9dbd4dcb9c74ed6edba1e

Thanks.

Xueming Li <xuemingl@nvidia.com>

---
From 45b177b78fee756ca8f9dbd4dcb9c74ed6edba1e Mon Sep 17 00:00:00 2001
From: Robin Jarry <rjarry@redhat.com>
Date: Mon, 1 Apr 2024 22:36:49 +0200
Subject: [PATCH] graph: fix stats retrieval while destroying a graph
Cc: Xueming Li <xuemingl@nvidia.com>

[ upstream commit 7d18ab565da979642ad92e93def623cfca260fef ]

In rte_graph_cluster_stats_get, the walk model of the first graph in
the global graph list is checked to determine whether multi-core
dispatch-specific counters should be updated. This global list is
accessed without any locks.

If the global list is modified by another thread while
rte_graph_cluster_stats_get is called, it can result in undefined
behaviour.
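
Concretely, the unsynchronized pattern (a condensed sketch of the code
removed by the hunks below, not a verbatim quote of the tree) is:

    int model;

    /* graph_list_head_get() returns the global STAILQ of graphs.
     * Another thread may insert or remove an entry concurrently,
     * e.g. during rte_graph_destroy(), so dereferencing the first
     * element without a lock may read freed or inconsistent memory.
     */
    model = rte_graph_worker_model_get(
        STAILQ_FIRST(graph_list_head_get())->graph);
    if (model == RTE_GRAPH_MODEL_MCORE_DISPATCH) {
        /* dispatch-specific counters are aggregated here */
    }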

Adding a lock would make it impossible to call
rte_graph_cluster_stats_get in packet processing code paths. Instead,
avoid accessing the global list by storing a bool field in the private
rte_graph_cluster_stats structure.
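
A minimal sketch of the resulting flow, reusing the names from the
diff below (the flag is captured once at create time, while the
caller still has a consistent view of its graphs, so retrieval never
touches the global list):

    /* rte_graph_cluster_stats_create(): remember the model once. */
    if (graph->graph->model == RTE_GRAPH_MODEL_MCORE_DISPATCH)
        stats->dispatch = true;

    /* rte_graph_cluster_stats_get(): only private state is read. */
    cluster_node_arregate_stats(cluster, stat->dispatch);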

Also update the default callback to avoid accessing the global list and
use a different default callback depending on the graph model.

Fixes: 358ff83fe88c ("graph: add stats for mcore dispatch model")

Signed-off-by: Robin Jarry <rjarry@redhat.com>
Acked-by: Kiran Kumar K <kirankumark@marvell.com>
---
 lib/graph/graph_stats.c | 57 ++++++++++++++++++++++++++---------------
 1 file changed, 36 insertions(+), 21 deletions(-)

diff --git a/lib/graph/graph_stats.c b/lib/graph/graph_stats.c
index cc32245c05..e99e8cf68a 100644
--- a/lib/graph/graph_stats.c
+++ b/lib/graph/graph_stats.c
@@ -34,6 +34,7 @@ struct rte_graph_cluster_stats {
 	uint32_t cluster_node_size; /* Size of struct cluster_node */
 	rte_node_t max_nodes;
 	int socket_id;
+	bool dispatch;
 	void *cookie;
 	size_t sz;
 
@@ -74,17 +75,16 @@ print_banner_dispatch(FILE *f)
 }
 
 static inline void
-print_banner(FILE *f)
+print_banner(FILE *f, bool dispatch)
 {
-	if (rte_graph_worker_model_get(STAILQ_FIRST(graph_list_head_get())->graph) ==
-	    RTE_GRAPH_MODEL_MCORE_DISPATCH)
+	if (dispatch)
 		print_banner_dispatch(f);
 	else
 		print_banner_default(f);
 }
 
 static inline void
-print_node(FILE *f, const struct rte_graph_cluster_node_stats *stat)
+print_node(FILE *f, const struct rte_graph_cluster_node_stats *stat, bool dispatch)
 {
 	double objs_per_call, objs_per_sec, cycles_per_call, ts_per_hz;
 	const uint64_t prev_calls = stat->prev_calls;
@@ -104,8 +104,7 @@ print_node(FILE *f, const struct rte_graph_cluster_node_stats *stat)
 	objs_per_sec = ts_per_hz ? (objs - prev_objs) / ts_per_hz : 0;
 	objs_per_sec /= 1000000;
 
-	if (rte_graph_worker_model_get(STAILQ_FIRST(graph_list_head_get())->graph) ==
-	    RTE_GRAPH_MODEL_MCORE_DISPATCH) {
+	if (dispatch) {
 		fprintf(f,
 			"|%-31s|%-15" PRIu64 "|%-15" PRIu64 "|%-15" PRIu64
 			"|%-15" PRIu64 "|%-15" PRIu64
@@ -123,20 +122,17 @@ print_node(FILE *f, const struct rte_graph_cluster_node_stats *stat)
 }
 
 static int
-graph_cluster_stats_cb(bool is_first, bool is_last, void *cookie,
+graph_cluster_stats_cb(bool dispatch, bool is_first, bool is_last, void *cookie,
 		       const struct rte_graph_cluster_node_stats *stat)
 {
 	FILE *f = cookie;
-	int model;
-
-	model = rte_graph_worker_model_get(STAILQ_FIRST(graph_list_head_get())->graph);
 
 	if (unlikely(is_first))
-		print_banner(f);
+		print_banner(f, dispatch);
 	if (stat->objs)
-		print_node(f, stat);
+		print_node(f, stat, dispatch);
 	if (unlikely(is_last)) {
-		if (model == RTE_GRAPH_MODEL_MCORE_DISPATCH)
+		if (dispatch)
 			boarder_model_dispatch();
 		else
 			boarder();
@@ -145,6 +141,20 @@ graph_cluster_stats_cb(bool is_first, bool is_last, void *cookie,
 	return 0;
 };
 
+static int
+graph_cluster_stats_cb_rtc(bool is_first, bool is_last, void *cookie,
+			   const struct rte_graph_cluster_node_stats *stat)
+{
+	return graph_cluster_stats_cb(false, is_first, is_last, cookie, stat);
+};
+
+static int
+graph_cluster_stats_cb_dispatch(bool is_first, bool is_last, void *cookie,
+				const struct rte_graph_cluster_node_stats *stat)
+{
+	return graph_cluster_stats_cb(true, is_first, is_last, cookie, stat);
+};
+
 static struct rte_graph_cluster_stats *
 stats_mem_init(struct cluster *cluster,
 	       const struct rte_graph_cluster_stats_param *prm)
@@ -157,8 +167,13 @@ stats_mem_init(struct cluster *cluster,
 
 	/* Fix up callback */
 	fn = prm->fn;
-	if (fn == NULL)
-		fn = graph_cluster_stats_cb;
+	if (fn == NULL) {
+		const struct rte_graph *graph = cluster->graphs[0]->graph;
+		if (graph->model == RTE_GRAPH_MODEL_MCORE_DISPATCH)
+			fn = graph_cluster_stats_cb_dispatch;
+		else
+			fn = graph_cluster_stats_cb_rtc;
+	}
 
 	cluster_node_size = sizeof(struct cluster_node);
 	/* For a given cluster, max nodes will be the max number of graphs */
@@ -350,6 +365,8 @@ rte_graph_cluster_stats_create(const struct rte_graph_cluster_stats_param *prm)
 			if (stats_mem_populate(&stats, graph_fp, graph_node))
 				goto realloc_fail;
 		}
+		if (graph->graph->model == RTE_GRAPH_MODEL_MCORE_DISPATCH)
+			stats->dispatch = true;
 	}
 
 	/* Finally copy to hugepage memory to avoid pressure on rte_realloc */
@@ -375,20 +392,18 @@ rte_graph_cluster_stats_destroy(struct rte_graph_cluster_stats *stat)
 }
 
 static inline void
-cluster_node_arregate_stats(struct cluster_node *cluster)
+cluster_node_arregate_stats(struct cluster_node *cluster, bool dispatch)
 {
 	uint64_t calls = 0, cycles = 0, objs = 0, realloc_count = 0;
 	struct rte_graph_cluster_node_stats *stat = &cluster->stat;
 	uint64_t sched_objs = 0, sched_fail = 0;
 	struct rte_node *node;
 	rte_node_t count;
-	int model;
 
-	model = rte_graph_worker_model_get(STAILQ_FIRST(graph_list_head_get())->graph);
 	for (count = 0; count < cluster->nb_nodes; count++) {
 		node = cluster->nodes[count];
 
-		if (model == RTE_GRAPH_MODEL_MCORE_DISPATCH) {
+		if (dispatch) {
 			sched_objs += node->dispatch.total_sched_objs;
 			sched_fail += node->dispatch.total_sched_fail;
 		}
@@ -403,7 +418,7 @@ cluster_node_arregate_stats(struct cluster_node *cluster)
 	stat->objs = objs;
 	stat->cycles = cycles;
 
-	if (model == RTE_GRAPH_MODEL_MCORE_DISPATCH) {
+	if (dispatch) {
 		stat->dispatch.sched_objs = sched_objs;
 		stat->dispatch.sched_fail = sched_fail;
 	}
@@ -433,7 +448,7 @@ rte_graph_cluster_stats_get(struct rte_graph_cluster_stats *stat, bool skip_cb)
 	cluster = stat->clusters;
 
 	for (count = 0; count < stat->max_nodes; count++) {
-		cluster_node_arregate_stats(cluster);
+		cluster_node_arregate_stats(cluster, stat->dispatch);
 		if (!skip_cb)
 			rc = stat->fn(!count, (count == stat->max_nodes - 1),
 				      stat->cookie, &cluster->stat);
-- 
2.34.1

---
  Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- -	2024-07-12 18:40:17.598927208 +0800
+++ 0077-graph-fix-stats-retrieval-while-destroying-a-graph.patch	2024-07-12 18:40:14.206594215 +0800
@@ -1 +1 @@
-From 7d18ab565da979642ad92e93def623cfca260fef Mon Sep 17 00:00:00 2001
+From 45b177b78fee756ca8f9dbd4dcb9c74ed6edba1e Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit 7d18ab565da979642ad92e93def623cfca260fef ]
@@ -23 +25,0 @@
-Cc: stable@dpdk.org
@@ -32 +34 @@
-index 2fb808b21e..d71451a17b 100644
+index cc32245c05..e99e8cf68a 100644
@@ -35 +37 @@
-@@ -34,6 +34,7 @@ struct __rte_cache_aligned rte_graph_cluster_stats {
+@@ -34,6 +34,7 @@ struct rte_graph_cluster_stats {