From mboxrd@z Thu Jan  1 00:00:00 1970
From: Xueming Li <xuemingl@nvidia.com>
To: Gregory Etelson
CC: Dariusz Sosnowski, dpdk stable <stable@dpdk.org>
Subject: patch 'net/mlx5: fix indirect action async job initialization' has been queued to stable release 23.11.1
Date: Sat, 13 Apr 2024 20:49:34 +0800
Message-ID: <20240413125005.725659-94-xuemingl@nvidia.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20240413125005.725659-1-xuemingl@nvidia.com>
References: <20240305094757.439387-1-xuemingl@nvidia.com> <20240413125005.725659-1-xuemingl@nvidia.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain
X-BeenThere: stable@dpdk.org
Precedence: list
List-Id: patches for DPDK stable branches
Errors-To: stable-bounces@dpdk.org

Hi,

FYI, your patch has been queued to stable release 23.11.1

Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 04/15/24. So please
shout if anyone has objections.

Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.

Queued patches are on a temporary branch at:
	https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging

This queued commit can be viewed at:
	https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=1994df02c988a2f1d70cfd192ecd2098edfc6713

Thanks.

Xueming Li <xuemingl@nvidia.com>

---
>From 1994df02c988a2f1d70cfd192ecd2098edfc6713 Mon Sep 17 00:00:00 2001
From: Gregory Etelson
Date: Thu, 7 Mar 2024 12:19:10 +0200
Subject: [PATCH] net/mlx5: fix indirect action async job initialization
Cc: Xueming Li

[ upstream commit 1a8b80329748033eb3bb9ed7433e0aef1bbcd838 ]

The MLX5 PMD supports two types of indirect actions: legacy INDIRECT
and INDIRECT_LIST. The PMD has a different handler for each indirect
action type, and therefore marks the async `job::indirect_type` with
the relevant value.

The PMD assigned that type only during indirect action creation.
A legacy INDIRECT query could therefore get a job object previously
used by an INDIRECT_LIST action; such a job was then handled as
INDIRECT_LIST because `job::indirect_type` was never re-assigned.

The patch sets `job::indirect_type` during job initialization,
according to the operation type.

Fixes: 59155721936e ("net/mlx5: fix indirect flow completion processing")

Signed-off-by: Gregory Etelson
Acked-by: Dariusz Sosnowski
---
 drivers/net/mlx5/mlx5_flow_hw.c | 24 +++++++++++++-----------
 1 file changed, 13 insertions(+), 11 deletions(-)

diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c
index f43ffb1d4e..6d0f1beeec 100644
--- a/drivers/net/mlx5/mlx5_flow_hw.c
+++ b/drivers/net/mlx5/mlx5_flow_hw.c
@@ -109,6 +109,7 @@ flow_hw_action_job_init(struct mlx5_priv *priv, uint32_t queue,
 			const struct rte_flow_action_handle *handle,
 			void *user_data, void *query_data,
 			enum mlx5_hw_job_type type,
+			enum mlx5_hw_indirect_type indirect_type,
 			struct rte_flow_error *error);
 static int
 mlx5_tbl_multi_pattern_process(struct rte_eth_dev *dev,
@@ -1583,7 +1584,8 @@ flow_hw_meter_mark_compile(struct rte_eth_dev *dev,
 	struct mlx5_aso_mtr *aso_mtr;
 	struct mlx5_hw_q_job *job =
 		flow_hw_action_job_init(priv, queue, NULL, NULL, NULL,
-					MLX5_HW_Q_JOB_TYPE_CREATE, NULL);
+					MLX5_HW_Q_JOB_TYPE_CREATE,
+					MLX5_HW_INDIRECT_TYPE_LEGACY, NULL);
 
 	if (!job)
 		return -1;
@@ -10057,6 +10059,7 @@ flow_hw_action_job_init(struct mlx5_priv *priv, uint32_t queue,
 			const struct rte_flow_action_handle *handle,
 			void *user_data, void *query_data,
 			enum mlx5_hw_job_type type,
+			enum mlx5_hw_indirect_type indirect_type,
 			struct rte_flow_error *error)
 {
 	struct mlx5_hw_q_job *job;
@@ -10074,6 +10077,7 @@ flow_hw_action_job_init(struct mlx5_priv *priv, uint32_t queue,
 	job->action = handle;
 	job->user_data = user_data;
 	job->query.user = query_data;
+	job->indirect_type = indirect_type;
 	return job;
 }
 
@@ -10085,7 +10089,7 @@ mlx5_flow_action_job_init(struct mlx5_priv *priv, uint32_t queue,
 			  struct rte_flow_error *error)
 {
 	return flow_hw_action_job_init(priv, queue, handle, user_data, query_data,
-				       type, error);
+				       type, MLX5_HW_INDIRECT_TYPE_LEGACY, error);
 }
 
 static __rte_always_inline void
@@ -10155,7 +10159,7 @@ flow_hw_action_handle_create(struct rte_eth_dev *dev, uint32_t queue,
 	if (attr || force_job) {
 		job = flow_hw_action_job_init(priv, queue, NULL, user_data,
 					      NULL, MLX5_HW_Q_JOB_TYPE_CREATE,
-					      error);
+					      MLX5_HW_INDIRECT_TYPE_LEGACY, error);
 		if (!job)
 			return NULL;
 	}
@@ -10224,7 +10228,6 @@ flow_hw_action_handle_create(struct rte_eth_dev *dev, uint32_t queue,
 	}
 	if (job && !force_job) {
 		job->action = handle;
-		job->indirect_type = MLX5_HW_INDIRECT_TYPE_LEGACY;
 		flow_hw_action_finalize(dev, queue, job, push, aso,
 					handle != NULL);
 	}
@@ -10316,7 +10319,7 @@ flow_hw_action_handle_update(struct rte_eth_dev *dev, uint32_t queue,
 	if (attr || force_job) {
 		job = flow_hw_action_job_init(priv, queue, handle, user_data,
 					      NULL, MLX5_HW_Q_JOB_TYPE_UPDATE,
-					      error);
+					      MLX5_HW_INDIRECT_TYPE_LEGACY, error);
 		if (!job)
 			return -rte_errno;
 	}
@@ -10398,7 +10401,7 @@ flow_hw_action_handle_destroy(struct rte_eth_dev *dev, uint32_t queue,
 	if (attr || force_job) {
 		job = flow_hw_action_job_init(priv, queue, handle, user_data,
 					      NULL, MLX5_HW_Q_JOB_TYPE_DESTROY,
-					      error);
+					      MLX5_HW_INDIRECT_TYPE_LEGACY, error);
 		if (!job)
 			return -rte_errno;
 	}
@@ -10711,7 +10714,7 @@ flow_hw_action_handle_query(struct rte_eth_dev *dev, uint32_t queue,
 	if (attr) {
 		job = flow_hw_action_job_init(priv, queue, handle, user_data,
 					      data, MLX5_HW_Q_JOB_TYPE_QUERY,
-					      error);
+					      MLX5_HW_INDIRECT_TYPE_LEGACY, error);
 		if (!job)
 			return -rte_errno;
 	}
@@ -10765,7 +10768,7 @@ flow_hw_async_action_handle_query_update
 		job = flow_hw_action_job_init(priv, queue, handle, user_data,
 					      query,
 					      MLX5_HW_Q_JOB_TYPE_UPDATE_QUERY,
-					      error);
+					      MLX5_HW_INDIRECT_TYPE_LEGACY, error);
 		if (!job)
 			return -rte_errno;
 	}
@@ -11445,7 +11448,7 @@ flow_hw_async_action_list_handle_create(struct rte_eth_dev *dev, uint32_t queue,
 	if (attr) {
 		job = flow_hw_action_job_init(priv, queue, NULL, user_data,
 					      NULL, MLX5_HW_Q_JOB_TYPE_CREATE,
-					      error);
+					      MLX5_HW_INDIRECT_TYPE_LIST, error);
 		if (!job)
 			return NULL;
 	}
@@ -11465,7 +11468,6 @@ flow_hw_async_action_list_handle_create(struct rte_eth_dev *dev, uint32_t queue,
 	}
 	if (job) {
 		job->action = handle;
-		job->indirect_type = MLX5_HW_INDIRECT_TYPE_LIST;
 		flow_hw_action_finalize(dev, queue, job, push, false,
 					handle != NULL);
 	}
@@ -11510,7 +11512,7 @@ flow_hw_async_action_list_handle_destroy
 	if (attr) {
 		job = flow_hw_action_job_init(priv, queue, NULL, user_data,
 					      NULL, MLX5_HW_Q_JOB_TYPE_DESTROY,
-					      error);
+					      MLX5_HW_INDIRECT_TYPE_LIST, error);
 		if (!job)
 			return rte_errno;
 	}
-- 
2.34.1

---
  Diff of the applied patch vs upstream commit (please double-check if non-empty:
---
--- -	2024-04-13 20:43:07.886808758 +0800
+++ 0094-net-mlx5-fix-indirect-action-async-job-initializatio.patch	2024-04-13 20:43:05.057753853 +0800
@@ -1 +1 @@
-From 1a8b80329748033eb3bb9ed7433e0aef1bbcd838 Mon Sep 17 00:00:00 2001
+From 1994df02c988a2f1d70cfd192ecd2098edfc6713 Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li
+
+[ upstream commit 1a8b80329748033eb3bb9ed7433e0aef1bbcd838 ]
@@ -20 +22,0 @@
-Cc: stable@dpdk.org
@@ -29 +31 @@
-index 8f004b5435..b9ba05f695 100644
+index f43ffb1d4e..6d0f1beeec 100644
@@ -32 +34 @@
-@@ -188,6 +188,7 @@ flow_hw_action_job_init(struct mlx5_priv *priv, uint32_t queue,
+@@ -109,6 +109,7 @@ flow_hw_action_job_init(struct mlx5_priv *priv, uint32_t queue,
@@ -40 +42 @@
-@@ -1692,7 +1693,8 @@ flow_hw_meter_mark_compile(struct rte_eth_dev *dev,
+@@ -1583,7 +1584,8 @@ flow_hw_meter_mark_compile(struct rte_eth_dev *dev,
@@ -50 +52 @@
-@@ -10998,6 +11000,7 @@ flow_hw_action_job_init(struct mlx5_priv *priv, uint32_t queue,
+@@ -10057,6 +10059,7 @@ flow_hw_action_job_init(struct mlx5_priv *priv, uint32_t queue,
@@ -58 +60 @@
-@@ -11015,6 +11018,7 @@ flow_hw_action_job_init(struct mlx5_priv *priv, uint32_t queue,
+@@ -10074,6 +10077,7 @@ flow_hw_action_job_init(struct mlx5_priv *priv, uint32_t queue,
@@ -66 +68 @@
-@@ -11026,7 +11030,7 @@ mlx5_flow_action_job_init(struct mlx5_priv *priv, uint32_t queue,
+@@ -10085,7 +10089,7 @@ mlx5_flow_action_job_init(struct mlx5_priv *priv, uint32_t queue,
@@ -75 +77 @@
-@@ -11096,7 +11100,7 @@ flow_hw_action_handle_create(struct rte_eth_dev *dev, uint32_t queue,
+@@ -10155,7 +10159,7 @@ flow_hw_action_handle_create(struct rte_eth_dev *dev, uint32_t queue,
@@ -84 +86 @@
-@@ -11165,7 +11169,6 @@ flow_hw_action_handle_create(struct rte_eth_dev *dev, uint32_t queue,
+@@ -10224,7 +10228,6 @@ flow_hw_action_handle_create(struct rte_eth_dev *dev, uint32_t queue,
@@ -92 +94 @@
-@@ -11257,7 +11260,7 @@ flow_hw_action_handle_update(struct rte_eth_dev *dev, uint32_t queue,
+@@ -10316,7 +10319,7 @@ flow_hw_action_handle_update(struct rte_eth_dev *dev, uint32_t queue,
@@ -101 +103 @@
-@@ -11339,7 +11342,7 @@ flow_hw_action_handle_destroy(struct rte_eth_dev *dev, uint32_t queue,
+@@ -10398,7 +10401,7 @@ flow_hw_action_handle_destroy(struct rte_eth_dev *dev, uint32_t queue,
@@ -110 +112 @@
-@@ -11663,7 +11666,7 @@ flow_hw_action_handle_query(struct rte_eth_dev *dev, uint32_t queue,
+@@ -10711,7 +10714,7 @@ flow_hw_action_handle_query(struct rte_eth_dev *dev, uint32_t queue,
@@ -119 +121 @@
-@@ -11717,7 +11720,7 @@ flow_hw_async_action_handle_query_update
+@@ -10765,7 +10768,7 @@ flow_hw_async_action_handle_query_update
@@ -128 +130 @@
-@@ -12397,7 +12400,7 @@ flow_hw_async_action_list_handle_create(struct rte_eth_dev *dev, uint32_t queue,
+@@ -11445,7 +11448,7 @@ flow_hw_async_action_list_handle_create(struct rte_eth_dev *dev, uint32_t queue,
@@ -137 +139 @@
-@@ -12417,7 +12420,6 @@ flow_hw_async_action_list_handle_create(struct rte_eth_dev *dev, uint32_t queue,
+@@ -11465,7 +11468,6 @@ flow_hw_async_action_list_handle_create(struct rte_eth_dev *dev, uint32_t queue,
@@ -145 +147 @@
-@@ -12462,7 +12464,7 @@ flow_hw_async_action_list_handle_destroy
+@@ -11510,7 +11512,7 @@ flow_hw_async_action_list_handle_destroy
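
For reviewers skimming the queue, the job-recycling issue the commit message
describes can be reduced to the following minimal, self-contained sketch.
The names below (hw_q_job, job_init and the two enums) are simplified
stand-ins for illustration only, not the actual mlx5 driver definitions;
the point is that the indirect type is written on every job initialization,
so a recycled job object can no longer carry a stale type into a later
operation.

#include <stdio.h>

enum job_type { JOB_TYPE_CREATE, JOB_TYPE_QUERY };
enum indirect_type { INDIRECT_TYPE_LEGACY, INDIRECT_TYPE_LIST };

struct hw_q_job {
	enum job_type type;
	enum indirect_type indirect_type;
};

/* A single reusable slot stands in for the per-queue job pool. */
static struct hw_q_job job_pool[1];

/* Fixed behaviour: the caller passes the indirect type and it is stamped
 * on every initialization, mirroring the reworked init helper. */
static struct hw_q_job *
job_init(enum job_type type, enum indirect_type indirect_type)
{
	struct hw_q_job *job = &job_pool[0];	/* recycled job object */

	job->type = type;
	job->indirect_type = indirect_type;
	return job;
}

int
main(void)
{
	/* An INDIRECT_LIST creation uses the job slot first... */
	struct hw_q_job *job = job_init(JOB_TYPE_CREATE, INDIRECT_TYPE_LIST);

	/* ...then a legacy INDIRECT query recycles the same slot. Before
	 * the fix the type was assigned only on creation, so the completion
	 * handler would still see the stale INDIRECT_TYPE_LIST here. */
	job = job_init(JOB_TYPE_QUERY, INDIRECT_TYPE_LEGACY);
	printf("query job dispatched as %s handler\n",
	       job->indirect_type == INDIRECT_TYPE_LEGACY ? "LEGACY" : "LIST");
	return 0;
}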