From: Dariusz Sosnowski
To: Viacheslav Ovsiienko, Ori Kam, Suanming Mou, Matan Azrad
CC: Raslan Darawsheh, Bing Zhao
Subject: [PATCH 09/11] net/mlx5: move rarely used flow fields outside
Date: Wed, 28 Feb 2024 18:00:44 +0100
Message-ID: <20240228170046.176600-10-dsosnowski@nvidia.com>
In-Reply-To: <20240228170046.176600-1-dsosnowski@nvidia.com>
References: <20240228170046.176600-1-dsosnowski@nvidia.com>
List-Id: DPDK patches and discussions

Some of the flow fields are either not always required or are used very
rarely, e.g.:

- AGE action reference,
- direct METER/METER_MARK action reference,
- matcher selector for resizable tables.

This patch moves these fields to the rte_flow_hw_aux struct in order to
reduce the overall size of the flow struct, shrinking the working set
for the most common use cases. This reduces the frequency of cache
invalidations during async flow operation processing.
Signed-off-by: Dariusz Sosnowski
---
 drivers/net/mlx5/mlx5_flow.h    |  61 +++++++++++-----
 drivers/net/mlx5/mlx5_flow_hw.c | 121 ++++++++++++++++++++++++--------
 2 files changed, 138 insertions(+), 44 deletions(-)

diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h
index 2e3e7d0533..1c67d8dd35 100644
--- a/drivers/net/mlx5/mlx5_flow.h
+++ b/drivers/net/mlx5/mlx5_flow.h
@@ -1271,31 +1271,60 @@ enum {
 #pragma GCC diagnostic ignored "-Wpedantic"
 #endif
 
-/* HWS flow struct. */
+/** HWS flow struct. */
 struct rte_flow_hw {
-	uint32_t idx; /* Flow index from indexed pool. */
-	uint32_t res_idx; /* Resource index from indexed pool. */
-	uint32_t fate_type; /* Fate action type. */
+	/** The table flow allcated from. */
+	struct rte_flow_template_table *table;
+	/** Application's private data passed to enqueued flow operation. */
+	void *user_data;
+	/** Flow index from indexed pool. */
+	uint32_t idx;
+	/** Resource index from indexed pool. */
+	uint32_t res_idx;
+	/** HWS flow rule index passed to mlx5dr. */
+	uint32_t rule_idx;
+	/** Fate action type. */
+	uint32_t fate_type;
+	/** Ongoing flow operation type. */
+	uint8_t operation_type;
+	/** Index of pattern template this flow is based on. */
+	uint8_t mt_idx;
+
+	/** COUNT action index. */
+	cnt_id_t cnt_id;
 	union {
-		/* Jump action. */
+		/** Jump action. */
 		struct mlx5_hw_jump_action *jump;
-		struct mlx5_hrxq *hrxq; /* TIR action. */
+		/** TIR action. */
+		struct mlx5_hrxq *hrxq;
 	};
-	struct rte_flow_template_table *table; /* The table flow allcated from. */
-	uint8_t mt_idx;
-	uint8_t matcher_selector:1;
+
+	/**
+	 * Padding for alignment to 56 bytes.
+	 * Since mlx5dr rule is 72 bytes, whole flow is contained within 128 B (2 cache lines).
+	 * This space is reserved for future additions to flow struct.
+	 */
+	uint8_t padding[10];
+	/** HWS layer data struct. */
+	uint8_t rule[];
+} __rte_packed;
+
+/** Auxiliary data fields that are updatable. */
+struct rte_flow_hw_aux_fields {
+	/** AGE action index. */
 	uint32_t age_idx;
-	cnt_id_t cnt_id;
+	/** Direct meter (METER or METER_MARK) action index. */
 	uint32_t mtr_id;
-	uint32_t rule_idx;
-	uint8_t operation_type; /**< Ongoing flow operation type. */
-	void *user_data; /**< Application's private data passed to enqueued flow operation. */
-	uint8_t padding[1]; /**< Padding for proper alignment of mlx5dr rule struct. */
-	uint8_t rule[]; /* HWS layer data struct. */
-} __rte_packed;
+};
 
 /** Auxiliary data stored per flow which is not required to be stored in main flow structure. */
 struct rte_flow_hw_aux {
+	/** Auxiliary fields associated with the original flow. */
+	struct rte_flow_hw_aux_fields orig;
+	/** Auxiliary fields associated with the updated flow. */
+	struct rte_flow_hw_aux_fields upd;
+	/** Index of resizable matcher associated with this flow. */
+	uint8_t matcher_selector;
 	/** Placeholder flow struct used during flow rule update operation. */
 	struct rte_flow_hw upd_flow;
 };
diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c
index 4d39e7bd45..3252f76e64 100644
--- a/drivers/net/mlx5/mlx5_flow_hw.c
+++ b/drivers/net/mlx5/mlx5_flow_hw.c
@@ -139,6 +139,50 @@ mlx5_flow_hw_aux(uint16_t port_id, struct rte_flow_hw *flow)
 	}
 }
 
+static __rte_always_inline void
+mlx5_flow_hw_aux_set_age_idx(struct rte_flow_hw *flow,
+			     struct rte_flow_hw_aux *aux,
+			     uint32_t age_idx)
+{
+	/*
+	 * Only when creating a flow rule, the type will be set explicitly.
+	 * Or else, it should be none in the rule update case.
+	 */
+	if (unlikely(flow->operation_type == MLX5_FLOW_HW_FLOW_OP_TYPE_UPDATE))
+		aux->upd.age_idx = age_idx;
+	else
+		aux->orig.age_idx = age_idx;
+}
+
+static __rte_always_inline uint32_t
+mlx5_flow_hw_aux_get_age_idx(struct rte_flow_hw *flow, struct rte_flow_hw_aux *aux)
+{
+	if (unlikely(flow->operation_type == MLX5_FLOW_HW_FLOW_OP_TYPE_UPDATE))
+		return aux->upd.age_idx;
+	else
+		return aux->orig.age_idx;
+}
+
+static __rte_always_inline void
+mlx5_flow_hw_aux_set_mtr_id(struct rte_flow_hw *flow,
+			    struct rte_flow_hw_aux *aux,
+			    uint32_t mtr_id)
+{
+	if (unlikely(flow->operation_type == MLX5_FLOW_HW_FLOW_OP_TYPE_UPDATE))
+		aux->upd.mtr_id = mtr_id;
+	else
+		aux->orig.mtr_id = mtr_id;
+}
+
+static __rte_always_inline uint32_t __rte_unused
+mlx5_flow_hw_aux_get_mtr_id(struct rte_flow_hw *flow, struct rte_flow_hw_aux *aux)
+{
+	if (unlikely(flow->operation_type == MLX5_FLOW_HW_FLOW_OP_TYPE_UPDATE))
+		return aux->upd.mtr_id;
+	else
+		return aux->orig.mtr_id;
+}
+
 static int
 mlx5_tbl_multi_pattern_process(struct rte_eth_dev *dev,
 			       struct rte_flow_template_table *tbl,
@@ -2753,6 +2797,7 @@ flow_hw_shared_action_construct(struct rte_eth_dev *dev, uint32_t queue,
 	struct mlx5_aso_mtr *aso_mtr;
 	struct mlx5_age_info *age_info;
 	struct mlx5_hws_age_param *param;
+	struct rte_flow_hw_aux *aux;
 	uint32_t act_idx = (uint32_t)(uintptr_t)action->conf;
 	uint32_t type = act_idx >> MLX5_INDIRECT_ACTION_TYPE_OFFSET;
 	uint32_t idx = act_idx &
@@ -2790,11 +2835,12 @@ flow_hw_shared_action_construct(struct rte_eth_dev *dev, uint32_t queue,
 		flow->cnt_id = act_idx;
 		break;
 	case MLX5_INDIRECT_ACTION_TYPE_AGE:
+		aux = mlx5_flow_hw_aux(dev->data->port_id, flow);
 		/*
 		 * Save the index with the indirect type, to recognize
 		 * it in flow destroy.
 		 */
-		flow->age_idx = act_idx;
+		mlx5_flow_hw_aux_set_age_idx(flow, aux, act_idx);
 		if (action_flags & MLX5_FLOW_ACTION_INDIRECT_COUNT)
 			/*
 			 * The mutual update for idirect AGE & COUNT will be
@@ -3020,14 +3066,16 @@ flow_hw_actions_construct(struct rte_eth_dev *dev,
 	const struct rte_flow_action_meter *meter = NULL;
 	const struct rte_flow_action_age *age = NULL;
 	struct rte_flow_attr attr = {
-			.ingress = 1,
+		.ingress = 1,
 	};
 	uint32_t ft_flag;
-	size_t encap_len = 0;
 	int ret;
+	size_t encap_len = 0;
 	uint32_t age_idx = 0;
+	uint32_t mtr_idx = 0;
 	struct mlx5_aso_mtr *aso_mtr;
 	struct mlx5_multi_pattern_segment *mp_segment = NULL;
+	struct rte_flow_hw_aux *aux;
 
 	attr.group = table->grp->group_id;
 	ft_flag = mlx5_hw_act_flag[!!table->grp->group_id][table->type];
@@ -3207,6 +3255,7 @@ flow_hw_actions_construct(struct rte_eth_dev *dev,
 				return -1;
 			break;
 		case RTE_FLOW_ACTION_TYPE_AGE:
+			aux = mlx5_flow_hw_aux(dev->data->port_id, flow);
 			age = action->conf;
 			/*
 			 * First, create the AGE parameter, then create its
@@ -3220,7 +3269,7 @@ flow_hw_actions_construct(struct rte_eth_dev *dev,
 						     error);
 			if (age_idx == 0)
 				return -rte_errno;
-			flow->age_idx = age_idx;
+			mlx5_flow_hw_aux_set_age_idx(flow, aux, age_idx);
 			if (at->action_flags & MLX5_FLOW_ACTION_INDIRECT_COUNT)
 				/*
 				 * When AGE uses indirect counter, no need to
@@ -3281,9 +3330,11 @@ flow_hw_actions_construct(struct rte_eth_dev *dev,
 			 */
 			ret = flow_hw_meter_mark_compile(dev,
 							 act_data->action_dst, action,
-							 rule_acts, &flow->mtr_id, MLX5_HW_INV_QUEUE, error);
+							 rule_acts, &mtr_idx, MLX5_HW_INV_QUEUE, error);
 			if (ret != 0)
 				return ret;
+			aux = mlx5_flow_hw_aux(dev->data->port_id, flow);
+			mlx5_flow_hw_aux_set_mtr_id(flow, aux, mtr_idx);
 			break;
 		default:
 			break;
@@ -3291,9 +3342,10 @@ flow_hw_actions_construct(struct rte_eth_dev *dev,
 	}
 	if (at->action_flags & MLX5_FLOW_ACTION_INDIRECT_COUNT) {
 		if (at->action_flags & MLX5_FLOW_ACTION_INDIRECT_AGE) {
-			age_idx = flow->age_idx & MLX5_HWS_AGE_IDX_MASK;
-			if (mlx5_hws_cnt_age_get(priv->hws_cpool,
-						 flow->cnt_id) != age_idx)
+			aux = mlx5_flow_hw_aux(dev->data->port_id, flow);
+			age_idx = mlx5_flow_hw_aux_get_age_idx(flow, aux) &
+				  MLX5_HWS_AGE_IDX_MASK;
+			if (mlx5_hws_cnt_age_get(priv->hws_cpool, flow->cnt_id) != age_idx)
 				/*
 				 * This is first use of this indirect counter
 				 * for this indirect AGE, need to increase the
@@ -3305,8 +3357,7 @@ flow_hw_actions_construct(struct rte_eth_dev *dev,
 		 * Update this indirect counter the indirect/direct AGE in which
 		 * using it.
 		 */
-		mlx5_hws_cnt_age_set(priv->hws_cpool, flow->cnt_id,
-				     age_idx);
+		mlx5_hws_cnt_age_set(priv->hws_cpool, flow->cnt_id, age_idx);
 	}
 	if (hw_acts->encap_decap && !hw_acts->encap_decap->shared) {
 		int ix = mlx5_multi_pattern_reformat_to_index(hw_acts->encap_decap->action_type);
@@ -3499,6 +3550,7 @@ flow_hw_async_flow_create(struct rte_eth_dev *dev,
 					 &rule_attr,
 					 (struct mlx5dr_rule *)flow->rule);
 	} else {
+		struct rte_flow_hw_aux *aux = mlx5_flow_hw_aux(dev->data->port_id, flow);
 		uint32_t selector;
 
 		flow->operation_type = MLX5_FLOW_HW_FLOW_OP_TYPE_RSZ_TBL_CREATE;
@@ -3510,7 +3562,7 @@ flow_hw_async_flow_create(struct rte_eth_dev *dev,
 					 &rule_attr,
 					 (struct mlx5dr_rule *)flow->rule);
 		rte_rwlock_read_unlock(&table->matcher_replace_rwlk);
-		flow->matcher_selector = selector;
+		aux->matcher_selector = selector;
 	}
 	if (likely(!ret)) {
 		flow_hw_q_inc_flow_ops(priv, queue);
@@ -3632,6 +3684,7 @@ flow_hw_async_flow_create_by_index(struct rte_eth_dev *dev,
 					      rule_acts, &rule_attr,
 					      (struct mlx5dr_rule *)flow->rule);
 	} else {
+		struct rte_flow_hw_aux *aux = mlx5_flow_hw_aux(dev->data->port_id, flow);
 		uint32_t selector;
 
 		flow->operation_type = MLX5_FLOW_HW_FLOW_OP_TYPE_RSZ_TBL_CREATE;
@@ -3642,6 +3695,7 @@ flow_hw_async_flow_create_by_index(struct rte_eth_dev *dev,
 					      rule_acts, &rule_attr,
 					      (struct mlx5dr_rule *)flow->rule);
 		rte_rwlock_read_unlock(&table->matcher_replace_rwlk);
+		aux->matcher_selector = selector;
 	}
 	if (likely(!ret)) {
 		flow_hw_q_inc_flow_ops(priv, queue);
@@ -3729,6 +3783,8 @@ flow_hw_async_flow_update(struct rte_eth_dev *dev,
 	} else {
 		nf->res_idx = of->res_idx;
 	}
+	/* Indicate the construction function to set the proper fields. */
+	nf->operation_type = MLX5_FLOW_HW_FLOW_OP_TYPE_UPDATE;
 	/*
 	 * Indexed pool returns 1-based indices, but mlx5dr expects 0-based indices
 	 * for rule insertion hints.
@@ -3846,15 +3902,17 @@ flow_hw_age_count_release(struct mlx5_priv *priv, uint32_t queue,
 			  struct rte_flow_hw *flow,
 			  struct rte_flow_error *error)
 {
+	struct rte_flow_hw_aux *aux = mlx5_flow_hw_aux(priv->dev_data->port_id, flow);
 	uint32_t *cnt_queue;
+	uint32_t age_idx = aux->orig.age_idx;
 
 	if (mlx5_hws_cnt_is_shared(priv->hws_cpool, flow->cnt_id)) {
-		if (flow->age_idx && !mlx5_hws_age_is_indirect(flow->age_idx)) {
+		if (age_idx && !mlx5_hws_age_is_indirect(age_idx)) {
 			/* Remove this AGE parameter from indirect counter. */
 			mlx5_hws_cnt_age_set(priv->hws_cpool, flow->cnt_id, 0);
 			/* Release the AGE parameter. */
-			mlx5_hws_age_action_destroy(priv, flow->age_idx, error);
-			flow->age_idx = 0;
+			mlx5_hws_age_action_destroy(priv, age_idx, error);
+			mlx5_flow_hw_aux_set_age_idx(flow, aux, 0);
 		}
 		return;
 	}
@@ -3863,16 +3921,16 @@ flow_hw_age_count_release(struct mlx5_priv *priv, uint32_t queue,
 
 	/* Put the counter first to reduce the race risk in BG thread. */
 	mlx5_hws_cnt_pool_put(priv->hws_cpool, cnt_queue, &flow->cnt_id);
 	flow->cnt_id = 0;
 
-	if (flow->age_idx) {
-		if (mlx5_hws_age_is_indirect(flow->age_idx)) {
-			uint32_t idx = flow->age_idx & MLX5_HWS_AGE_IDX_MASK;
+	if (age_idx) {
+		if (mlx5_hws_age_is_indirect(age_idx)) {
+			uint32_t idx = age_idx & MLX5_HWS_AGE_IDX_MASK;
 
 			mlx5_hws_age_nb_cnt_decrease(priv, idx);
 		} else {
 			/* Release the AGE parameter. */
-			mlx5_hws_age_action_destroy(priv, flow->age_idx, error);
+			mlx5_hws_age_action_destroy(priv, age_idx, error);
 		}
-		flow->age_idx = 0;
+		mlx5_flow_hw_aux_set_age_idx(flow, aux, age_idx);
 	}
 }
@@ -4002,6 +4060,7 @@ hw_cmpl_flow_update_or_destroy(struct rte_eth_dev *dev,
 	struct mlx5_priv *priv = dev->data->dev_private;
 	struct mlx5_aso_mtr_pool *pool = priv->hws_mpool;
 	struct rte_flow_template_table *table = flow->table;
+	struct rte_flow_hw_aux *aux = mlx5_flow_hw_aux(dev->data->port_id, flow);
 	/* Release the original resource index in case of update. */
 	uint32_t res_idx = flow->res_idx;
 
@@ -4012,9 +4071,9 @@ hw_cmpl_flow_update_or_destroy(struct rte_eth_dev *dev,
 		if (mlx5_hws_cnt_id_valid(flow->cnt_id))
 			flow_hw_age_count_release(priv, queue, flow,
 						  error);
-	if (flow->mtr_id) {
-		mlx5_ipool_free(pool->idx_pool, flow->mtr_id);
-		flow->mtr_id = 0;
+	if (aux->orig.mtr_id) {
+		mlx5_ipool_free(pool->idx_pool, aux->orig.mtr_id);
+		aux->orig.mtr_id = 0;
 	}
 	if (flow->operation_type != MLX5_FLOW_HW_FLOW_OP_TYPE_UPDATE) {
 		if (table->resource)
@@ -4025,6 +4084,8 @@ hw_cmpl_flow_update_or_destroy(struct rte_eth_dev *dev,
 		struct rte_flow_hw *upd_flow = &aux->upd_flow;
 
 		rte_memcpy(flow, upd_flow, offsetof(struct rte_flow_hw, rule));
+		aux->orig = aux->upd;
+		flow->operation_type = MLX5_FLOW_HW_FLOW_OP_TYPE_CREATE;
 		if (table->resource)
 			mlx5_ipool_free(table->resource, res_idx);
 	}
@@ -4037,7 +4098,8 @@ hw_cmpl_resizable_tbl(struct rte_eth_dev *dev,
 		      struct rte_flow_error *error)
 {
 	struct rte_flow_template_table *table = flow->table;
-	uint32_t selector = flow->matcher_selector;
+	struct rte_flow_hw_aux *aux = mlx5_flow_hw_aux(dev->data->port_id, flow);
+	uint32_t selector = aux->matcher_selector;
 	uint32_t other_selector = (selector + 1) & 1;
 
 	switch (flow->operation_type) {
@@ -4060,7 +4122,7 @@ hw_cmpl_resizable_tbl(struct rte_eth_dev *dev,
 			rte_atomic_fetch_add_explicit
 				(&table->matcher_info[other_selector].refcnt, 1,
 				 rte_memory_order_relaxed);
-			flow->matcher_selector = other_selector;
+			aux->matcher_selector = other_selector;
 		}
 		break;
 	default:
@@ -11206,6 +11268,7 @@ flow_hw_query(struct rte_eth_dev *dev, struct rte_flow *flow,
 {
 	int ret = -EINVAL;
 	struct rte_flow_hw *hw_flow = (struct rte_flow_hw *)flow;
+	struct rte_flow_hw_aux *aux;
 
 	for (; actions->type != RTE_FLOW_ACTION_TYPE_END; actions++) {
 		switch (actions->type) {
@@ -11216,8 +11279,9 @@ flow_hw_query(struct rte_eth_dev *dev, struct rte_flow *flow,
 						  error);
 			break;
 		case RTE_FLOW_ACTION_TYPE_AGE:
-			ret = flow_hw_query_age(dev, hw_flow->age_idx, data,
-						error);
+			aux = mlx5_flow_hw_aux(dev->data->port_id, hw_flow);
+			ret = flow_hw_query_age(dev, mlx5_flow_hw_aux_get_age_idx(hw_flow, aux),
+						data, error);
 			break;
 		default:
 			return rte_flow_error_set(error, ENOTSUP,
@@ -12497,8 +12561,9 @@ flow_hw_update_resized(struct rte_eth_dev *dev, uint32_t queue,
 	struct mlx5_priv *priv = dev->data->dev_private;
 	struct rte_flow_hw *hw_flow = (struct rte_flow_hw *)flow;
 	struct rte_flow_template_table *table = hw_flow->table;
+	struct rte_flow_hw_aux *aux = mlx5_flow_hw_aux(dev->data->port_id, hw_flow);
 	uint32_t table_selector = table->matcher_selector;
-	uint32_t rule_selector = hw_flow->matcher_selector;
+	uint32_t rule_selector = aux->matcher_selector;
 	uint32_t other_selector;
 	struct mlx5dr_matcher *other_matcher;
 	struct mlx5dr_rule_attr rule_attr = {
@@ -12511,7 +12576,7 @@ flow_hw_update_resized(struct rte_eth_dev *dev, uint32_t queue,
 	 * the one that was used BEFORE table resize.
 	 * Since the function is called AFTER table resize,
 	 * `table->matcher_selector` always points to the new matcher and
-	 * `hw_flow->matcher_selector` points to a matcher used to create the flow.
+	 * `aux->matcher_selector` points to a matcher used to create the flow.
 	 */
 	other_selector = rule_selector == table_selector ?
 			 (rule_selector + 1) & 1 : rule_selector;
-- 
2.39.2