From mboxrd@z Thu Jan 1 00:00:00 1970
From: Dariusz Sosnowski
To: Viacheslav Ovsiienko, Ori Kam, Suanming Mou, Matan Azrad
CC: Raslan Darawsheh, Bing Zhao
Subject: [PATCH 05/11] net/mlx5: remove action params from job
Date: Wed, 28 Feb 2024 18:00:40 +0100
Message-ID: <20240228170046.176600-6-dsosnowski@nvidia.com>
In-Reply-To: <20240228170046.176600-1-dsosnowski@nvidia.com>
References: <20240228170046.176600-1-dsosnowski@nvidia.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain
List-Id: DPDK patches and discussions

The mlx5_hw_q_job struct held references to buffers which contained:

- the modify header commands array,
- the encap/decap data buffer,
- the IPv6 routing data buffer.

These buffers were passed as parameters to the HWS layer during rule
creation. They were needed only for the duration of the call into the
HWS layer which enqueues the flow operation (i.e. mlx5dr_rule_create()).
After the operation is enqueued, the data stored there can be safely
discarded, so there is no need to keep it for the whole lifecycle of a
job.

This patch removes the references to these buffers from mlx5_hw_q_job
and removes the relevant allocations, to reduce the job memory
footprint. The buffers previously stored per job are replaced with
stack-allocated ones, contained in the mlx5_flow_hw_action_params
struct.
Signed-off-by: Dariusz Sosnowski
---
 drivers/net/mlx5/mlx5.h         |   3 -
 drivers/net/mlx5/mlx5_flow.h    |  10 +++
 drivers/net/mlx5/mlx5_flow_hw.c | 120 ++++++++++++++------------------
 3 files changed, 63 insertions(+), 70 deletions(-)

diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index bb1853e797..bd0846d6bf 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -401,9 +401,6 @@ struct mlx5_hw_q_job {
 		const void *action; /* Indirect action attached to the job. */
 	};
 	void *user_data; /* Job user data. */
-	uint8_t *encap_data; /* Encap data. */
-	uint8_t *push_data; /* IPv6 routing push data. */
-	struct mlx5_modification_cmd *mhdr_cmd;
 	struct rte_flow_item *items;
 	union {
 		struct {
diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h
index 11135645ef..df1c913017 100644
--- a/drivers/net/mlx5/mlx5_flow.h
+++ b/drivers/net/mlx5/mlx5_flow.h
@@ -1294,6 +1294,16 @@ typedef int
 
 #define MLX5_MHDR_MAX_CMD ((MLX5_MAX_MODIFY_NUM) * 2 + 1)
 
+/** Container for flow action data constructed during flow rule creation. */
+struct mlx5_flow_hw_action_params {
+	/** Array of constructed modify header commands. */
+	struct mlx5_modification_cmd mhdr_cmd[MLX5_MHDR_MAX_CMD];
+	/** Constructed encap/decap data buffer. */
+	uint8_t encap_data[MLX5_ENCAP_MAX_LEN];
+	/** Constructed IPv6 routing data buffer. */
+	uint8_t ipv6_push_data[MLX5_PUSH_MAX_LEN];
+};
+
 /* rte flow action translate to DR action struct. */
 struct mlx5_action_construct_data {
 	LIST_ENTRY(mlx5_action_construct_data) next;
diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c
index fcf493c771..7160477c83 100644
--- a/drivers/net/mlx5/mlx5_flow_hw.c
+++ b/drivers/net/mlx5/mlx5_flow_hw.c
@@ -158,7 +158,7 @@ static int flow_hw_translate_group(struct rte_eth_dev *dev,
 				   struct rte_flow_error *error);
 static __rte_always_inline int
 flow_hw_set_vlan_vid_construct(struct rte_eth_dev *dev,
-			       struct mlx5_hw_q_job *job,
+			       struct mlx5_modification_cmd *mhdr_cmd,
 			       struct mlx5_action_construct_data *act_data,
 			       const struct mlx5_hw_actions *hw_acts,
 			       const struct rte_flow_action *action);
@@ -2799,7 +2799,7 @@ flow_hw_mhdr_cmd_is_nop(const struct mlx5_modification_cmd *cmd)
  *   0 on success, negative value otherwise and rte_errno is set.
  */
 static __rte_always_inline int
-flow_hw_modify_field_construct(struct mlx5_hw_q_job *job,
+flow_hw_modify_field_construct(struct mlx5_modification_cmd *mhdr_cmd,
 			       struct mlx5_action_construct_data *act_data,
 			       const struct mlx5_hw_actions *hw_acts,
 			       const struct rte_flow_action *action)
@@ -2858,7 +2858,7 @@ flow_hw_modify_field_construct(struct mlx5_hw_q_job *job,
 		if (i >= act_data->modify_header.mhdr_cmds_end)
 			return -1;
-		if (flow_hw_mhdr_cmd_is_nop(&job->mhdr_cmd[i])) {
+		if (flow_hw_mhdr_cmd_is_nop(&mhdr_cmd[i])) {
 			++i;
 			continue;
 		}
@@ -2878,7 +2878,7 @@ flow_hw_modify_field_construct(struct mlx5_hw_q_job *job,
 		    mhdr_action->dst.field == RTE_FLOW_FIELD_IPV6_DSCP)
 			data <<= MLX5_IPV6_HDR_DSCP_SHIFT;
 		data = (data & mask) >> off_b;
-		job->mhdr_cmd[i++].data1 = rte_cpu_to_be_32(data);
+		mhdr_cmd[i++].data1 = rte_cpu_to_be_32(data);
 		++field;
 	} while (field->size);
 	return 0;
@@ -2892,8 +2892,10 @@ flow_hw_modify_field_construct(struct mlx5_hw_q_job *job,
  *
  * @param[in] dev
  *   Pointer to the rte_eth_dev structure.
- * @param[in] job
- *   Pointer to job descriptor.
+ * @param[in] flow
+ *   Pointer to flow structure.
+ * @param[in] ap
+ *   Pointer to container for temporarily constructed actions' parameters.
 * @param[in] hw_acts
 *   Pointer to translated actions from template.
 * @param[in] it_idx
@@ -2910,7 +2912,8 @@ flow_hw_modify_field_construct(struct mlx5_hw_q_job *job,
  */
 static __rte_always_inline int
 flow_hw_actions_construct(struct rte_eth_dev *dev,
-			  struct mlx5_hw_q_job *job,
+			  struct rte_flow_hw *flow,
+			  struct mlx5_flow_hw_action_params *ap,
 			  const struct mlx5_hw_action_template *hw_at,
 			  const uint8_t it_idx,
 			  const struct rte_flow_action actions[],
@@ -2920,7 +2923,7 @@ flow_hw_actions_construct(struct rte_eth_dev *dev,
 {
 	struct mlx5_priv *priv = dev->data->dev_private;
 	struct mlx5_aso_mtr_pool *pool = priv->hws_mpool;
-	struct rte_flow_template_table *table = job->flow->table;
+	struct rte_flow_template_table *table = flow->table;
 	struct mlx5_action_construct_data *act_data;
 	const struct rte_flow_actions_template *at = hw_at->action_template;
 	const struct mlx5_hw_actions *hw_acts = &hw_at->acts;
@@ -2931,8 +2934,6 @@ flow_hw_actions_construct(struct rte_eth_dev *dev,
 	const struct rte_flow_action_ethdev *port_action = NULL;
 	const struct rte_flow_action_meter *meter = NULL;
 	const struct rte_flow_action_age *age = NULL;
-	uint8_t *buf = job->encap_data;
-	uint8_t *push_buf = job->push_data;
 	struct rte_flow_attr attr = {
 		.ingress = 1,
 	};
@@ -2957,17 +2958,17 @@ flow_hw_actions_construct(struct rte_eth_dev *dev,
 	if (hw_acts->mhdr && hw_acts->mhdr->mhdr_cmds_num > 0 && !hw_acts->mhdr->shared) {
 		uint16_t pos = hw_acts->mhdr->pos;
 
-		mp_segment = mlx5_multi_pattern_segment_find(table, job->flow->res_idx);
+		mp_segment = mlx5_multi_pattern_segment_find(table, flow->res_idx);
 		if (!mp_segment || !mp_segment->mhdr_action)
 			return -1;
 		rule_acts[pos].action = mp_segment->mhdr_action;
 		/* offset is relative to DR action */
 		rule_acts[pos].modify_header.offset =
-					job->flow->res_idx - mp_segment->head_index;
+					flow->res_idx - mp_segment->head_index;
 		rule_acts[pos].modify_header.data =
-					(uint8_t *)job->mhdr_cmd;
-		rte_memcpy(job->mhdr_cmd, hw_acts->mhdr->mhdr_cmds,
-			   sizeof(*job->mhdr_cmd) * hw_acts->mhdr->mhdr_cmds_num);
+					(uint8_t *)ap->mhdr_cmd;
+		rte_memcpy(ap->mhdr_cmd, hw_acts->mhdr->mhdr_cmds,
+			   sizeof(*ap->mhdr_cmd) * hw_acts->mhdr->mhdr_cmds_num);
 	}
 	LIST_FOREACH(act_data, &hw_acts->act_list, next) {
 		uint32_t jump_group;
@@ -3000,7 +3001,7 @@ flow_hw_actions_construct(struct rte_eth_dev *dev,
 		case RTE_FLOW_ACTION_TYPE_INDIRECT:
 			if (flow_hw_shared_action_construct
 			    (dev, queue, action, table, it_idx,
-			     at->action_flags, job->flow,
+			     at->action_flags, flow,
 			     &rule_acts[act_data->action_dst]))
 				return -1;
 			break;
@@ -3025,8 +3026,8 @@ flow_hw_actions_construct(struct rte_eth_dev *dev,
 				return -1;
 			rule_acts[act_data->action_dst].action =
 			(!!attr.group) ? jump->hws_action : jump->root_action;
-			job->flow->jump = jump;
-			job->flow->fate_type = MLX5_FLOW_FATE_JUMP;
+			flow->jump = jump;
+			flow->fate_type = MLX5_FLOW_FATE_JUMP;
 			break;
 		case RTE_FLOW_ACTION_TYPE_RSS:
 		case RTE_FLOW_ACTION_TYPE_QUEUE:
@@ -3036,8 +3037,8 @@ flow_hw_actions_construct(struct rte_eth_dev *dev,
 			if (!hrxq)
 				return -1;
 			rule_acts[act_data->action_dst].action = hrxq->action;
-			job->flow->hrxq = hrxq;
-			job->flow->fate_type = MLX5_FLOW_FATE_QUEUE;
+			flow->hrxq = hrxq;
+			flow->fate_type = MLX5_FLOW_FATE_QUEUE;
 			break;
 		case MLX5_RTE_FLOW_ACTION_TYPE_RSS:
 			item_flags = table->its[it_idx]->item_flags;
@@ -3049,38 +3050,37 @@ flow_hw_actions_construct(struct rte_eth_dev *dev,
 		case RTE_FLOW_ACTION_TYPE_VXLAN_ENCAP:
 			enc_item = ((const struct rte_flow_action_vxlan_encap *)
 				   action->conf)->definition;
-			if (flow_dv_convert_encap_data(enc_item, buf, &encap_len, NULL))
+			if (flow_dv_convert_encap_data(enc_item, ap->encap_data, &encap_len, NULL))
 				return -1;
 			break;
 		case RTE_FLOW_ACTION_TYPE_NVGRE_ENCAP:
 			enc_item = ((const struct rte_flow_action_nvgre_encap *)
 				   action->conf)->definition;
-			if (flow_dv_convert_encap_data(enc_item, buf, &encap_len, NULL))
+			if (flow_dv_convert_encap_data(enc_item, ap->encap_data, &encap_len, NULL))
 				return -1;
 			break;
 		case RTE_FLOW_ACTION_TYPE_RAW_ENCAP:
 			raw_encap_data =
 				(const struct rte_flow_action_raw_encap *)
 				 action->conf;
-			rte_memcpy((void *)buf, raw_encap_data->data, act_data->encap.len);
-			MLX5_ASSERT(raw_encap_data->size ==
-				    act_data->encap.len);
+			rte_memcpy(ap->encap_data, raw_encap_data->data, act_data->encap.len);
+			MLX5_ASSERT(raw_encap_data->size == act_data->encap.len);
 			break;
 		case RTE_FLOW_ACTION_TYPE_IPV6_EXT_PUSH:
 			ipv6_push =
 				(const struct rte_flow_action_ipv6_ext_push *)action->conf;
-			rte_memcpy((void *)push_buf, ipv6_push->data,
+			rte_memcpy(ap->ipv6_push_data, ipv6_push->data,
 				   act_data->ipv6_ext.len);
 			MLX5_ASSERT(ipv6_push->size == act_data->ipv6_ext.len);
 			break;
 		case RTE_FLOW_ACTION_TYPE_MODIFY_FIELD:
 			if (action->type == RTE_FLOW_ACTION_TYPE_OF_SET_VLAN_VID)
-				ret = flow_hw_set_vlan_vid_construct(dev, job,
+				ret = flow_hw_set_vlan_vid_construct(dev, ap->mhdr_cmd,
 								     act_data,
 								     hw_acts,
 								     action);
 			else
-				ret = flow_hw_modify_field_construct(job,
+				ret = flow_hw_modify_field_construct(ap->mhdr_cmd,
 								     act_data,
 								     hw_acts,
 								     action);
@@ -3116,8 +3116,8 @@ flow_hw_actions_construct(struct rte_eth_dev *dev,
 			rule_acts[act_data->action_dst + 1].action =
 					(!!attr.group) ? jump->hws_action :
							 jump->root_action;
-			job->flow->jump = jump;
-			job->flow->fate_type = MLX5_FLOW_FATE_JUMP;
+			flow->jump = jump;
+			flow->fate_type = MLX5_FLOW_FATE_JUMP;
 			if (mlx5_aso_mtr_wait(priv->sh, MLX5_HW_INV_QUEUE, aso_mtr))
 				return -1;
 			break;
@@ -3131,11 +3131,11 @@ flow_hw_actions_construct(struct rte_eth_dev *dev,
 			 */
 			age_idx = mlx5_hws_age_action_create(priv, queue, 0,
 							     age,
-							     job->flow->res_idx,
+							     flow->res_idx,
 							     error);
 			if (age_idx == 0)
 				return -rte_errno;
-			job->flow->age_idx = age_idx;
+			flow->age_idx = age_idx;
 			if (at->action_flags & MLX5_FLOW_ACTION_INDIRECT_COUNT)
 				/*
 				 * When AGE uses indirect counter, no need to
@@ -3158,7 +3158,7 @@ flow_hw_actions_construct(struct rte_eth_dev *dev,
 					);
 			if (ret != 0)
 				return ret;
-			job->flow->cnt_id = cnt_id;
+			flow->cnt_id = cnt_id;
 			break;
 		case MLX5_RTE_FLOW_ACTION_TYPE_COUNT:
 			ret = mlx5_hws_cnt_pool_get_action_offset
@@ -3169,7 +3169,7 @@ flow_hw_actions_construct(struct rte_eth_dev *dev,
 					);
 			if (ret != 0)
 				return ret;
-			job->flow->cnt_id = act_data->shared_counter.id;
+			flow->cnt_id = act_data->shared_counter.id;
 			break;
 		case RTE_FLOW_ACTION_TYPE_CONNTRACK:
 			ct_idx = MLX5_INDIRECT_ACTION_IDX_GET(action->conf);
@@ -3196,8 +3196,7 @@ flow_hw_actions_construct(struct rte_eth_dev *dev,
 			 */
 			ret = flow_hw_meter_mark_compile(dev,
 							 act_data->action_dst, action,
-							 rule_acts, &job->flow->mtr_id,
-							 MLX5_HW_INV_QUEUE, error);
+							 rule_acts, &flow->mtr_id, MLX5_HW_INV_QUEUE, error);
 			if (ret != 0)
 				return ret;
 			break;
@@ -3207,9 +3206,9 @@ flow_hw_actions_construct(struct rte_eth_dev *dev,
 	}
 	if (at->action_flags & MLX5_FLOW_ACTION_INDIRECT_COUNT) {
 		if (at->action_flags & MLX5_FLOW_ACTION_INDIRECT_AGE) {
-			age_idx = job->flow->age_idx & MLX5_HWS_AGE_IDX_MASK;
+			age_idx = flow->age_idx & MLX5_HWS_AGE_IDX_MASK;
 			if (mlx5_hws_cnt_age_get(priv->hws_cpool,
-						 job->flow->cnt_id) != age_idx)
+						 flow->cnt_id) != age_idx)
 				/*
 				 * This is first use of this indirect counter
 				 * for this indirect AGE, need to increase the
@@ -3221,7 +3220,7 @@ flow_hw_actions_construct(struct rte_eth_dev *dev,
 			 * Update this indirect counter the indirect/direct AGE in which
 			 * using it.
 			 */
-			mlx5_hws_cnt_age_set(priv->hws_cpool, job->flow->cnt_id,
+			mlx5_hws_cnt_age_set(priv->hws_cpool, flow->cnt_id,
 					     age_idx);
 	}
 	if (hw_acts->encap_decap && !hw_acts->encap_decap->shared) {
@@ -3231,21 +3230,21 @@ flow_hw_actions_construct(struct rte_eth_dev *dev,
 		if (ix < 0)
 			return -1;
 		if (!mp_segment)
-			mp_segment = mlx5_multi_pattern_segment_find(table, job->flow->res_idx);
+			mp_segment = mlx5_multi_pattern_segment_find(table, flow->res_idx);
 		if (!mp_segment || !mp_segment->reformat_action[ix])
 			return -1;
 		ra->action = mp_segment->reformat_action[ix];
 		/* reformat offset is relative to selected DR action */
-		ra->reformat.offset = job->flow->res_idx - mp_segment->head_index;
-		ra->reformat.data = buf;
+		ra->reformat.offset = flow->res_idx - mp_segment->head_index;
+		ra->reformat.data = ap->encap_data;
 	}
 	if (hw_acts->push_remove && !hw_acts->push_remove->shared) {
 		rule_acts[hw_acts->push_remove_pos].ipv6_ext.offset =
-			job->flow->res_idx - 1;
-		rule_acts[hw_acts->push_remove_pos].ipv6_ext.header = push_buf;
+			flow->res_idx - 1;
+		rule_acts[hw_acts->push_remove_pos].ipv6_ext.header = ap->ipv6_push_data;
 	}
 	if (mlx5_hws_cnt_id_valid(hw_acts->cnt_id))
-		job->flow->cnt_id = hw_acts->cnt_id;
+		flow->cnt_id = hw_acts->cnt_id;
 	return 0;
 }
@@ -3345,6 +3344,7 @@ flow_hw_async_flow_create(struct rte_eth_dev *dev,
 		.burst = attr->postpone,
 	};
 	struct mlx5dr_rule_action *rule_acts;
+	struct mlx5_flow_hw_action_params ap;
 	struct rte_flow_hw *flow = NULL;
 	struct mlx5_hw_q_job *job = NULL;
 	const struct rte_flow_item *rule_items;
@@ -3401,7 +3401,7 @@ flow_hw_async_flow_create(struct rte_eth_dev *dev,
 	 * No need to copy and contrust a new "actions" list based on the
 	 * user's input, in order to save the cost.
 	 */
-	if (flow_hw_actions_construct(dev, job,
+	if (flow_hw_actions_construct(dev, flow, &ap,
 				      &table->ats[action_template_index],
 				      pattern_template_index, actions,
 				      rule_acts, queue, error)) {
@@ -3493,6 +3493,7 @@ flow_hw_async_flow_create_by_index(struct rte_eth_dev *dev,
 		.burst = attr->postpone,
 	};
 	struct mlx5dr_rule_action *rule_acts;
+	struct mlx5_flow_hw_action_params ap;
 	struct rte_flow_hw *flow = NULL;
 	struct mlx5_hw_q_job *job = NULL;
 	uint32_t flow_idx = 0;
@@ -3545,7 +3546,7 @@ flow_hw_async_flow_create_by_index(struct rte_eth_dev *dev,
 	 * No need to copy and contrust a new "actions" list based on the
 	 * user's input, in order to save the cost.
 	 */
-	if (flow_hw_actions_construct(dev, job,
+	if (flow_hw_actions_construct(dev, flow, &ap,
 				      &table->ats[action_template_index],
 				      0, actions, rule_acts, queue, error)) {
 		rte_errno = EINVAL;
@@ -3627,6 +3628,7 @@ flow_hw_async_flow_update(struct rte_eth_dev *dev,
 		.burst = attr->postpone,
 	};
 	struct mlx5dr_rule_action *rule_acts;
+	struct mlx5_flow_hw_action_params ap;
 	struct rte_flow_hw *of = (struct rte_flow_hw *)flow;
 	struct rte_flow_hw *nf;
 	struct rte_flow_template_table *table = of->table;
@@ -3679,7 +3681,7 @@ flow_hw_async_flow_update(struct rte_eth_dev *dev,
 	 * No need to copy and contrust a new "actions" list based on the
 	 * user's input, in order to save the cost.
 	 */
-	if (flow_hw_actions_construct(dev, job,
+	if (flow_hw_actions_construct(dev, nf, &ap,
 				      &table->ats[action_template_index],
 				      nf->mt_idx, actions,
 				      rule_acts, queue, error)) {
@@ -6611,7 +6613,7 @@ flow_hw_set_vlan_vid(struct rte_eth_dev *dev,
 
 static __rte_always_inline int
 flow_hw_set_vlan_vid_construct(struct rte_eth_dev *dev,
-			       struct mlx5_hw_q_job *job,
+			       struct mlx5_modification_cmd *mhdr_cmd,
 			       struct mlx5_action_construct_data *act_data,
 			       const struct mlx5_hw_actions *hw_acts,
 			       const struct rte_flow_action *action)
@@ -6639,8 +6641,7 @@ flow_hw_set_vlan_vid_construct(struct rte_eth_dev *dev,
 		.conf = &conf
 	};
 
-	return flow_hw_modify_field_construct(job, act_data, hw_acts,
-					      &modify_action);
+	return flow_hw_modify_field_construct(mhdr_cmd, act_data, hw_acts, &modify_action);
 }
 
 static int
@@ -9990,10 +9991,6 @@ flow_hw_configure(struct rte_eth_dev *dev,
 	}
 	mem_size += (sizeof(struct mlx5_hw_q_job *) +
 		     sizeof(struct mlx5_hw_q_job) +
-		     sizeof(uint8_t) * MLX5_ENCAP_MAX_LEN +
-		     sizeof(uint8_t) * MLX5_PUSH_MAX_LEN +
-		     sizeof(struct mlx5_modification_cmd) *
-		     MLX5_MHDR_MAX_CMD +
 		     sizeof(struct rte_flow_item) *
 		     MLX5_HW_MAX_ITEMS +
 		     sizeof(struct rte_flow_hw)) *
@@ -10006,8 +10003,6 @@ flow_hw_configure(struct rte_eth_dev *dev,
 		goto err;
 	}
 	for (i = 0; i < nb_q_updated; i++) {
-		uint8_t *encap = NULL, *push = NULL;
-		struct mlx5_modification_cmd *mhdr_cmd = NULL;
 		struct rte_flow_item *items = NULL;
 		struct rte_flow_hw *upd_flow = NULL;
 
@@ -10021,20 +10016,11 @@ flow_hw_configure(struct rte_eth_dev *dev,
 				&job[_queue_attr[i - 1]->size - 1].upd_flow[1];
 		job = (struct mlx5_hw_q_job *)
 		      &priv->hw_q[i].job[_queue_attr[i]->size];
-		mhdr_cmd = (struct mlx5_modification_cmd *)
-			   &job[_queue_attr[i]->size];
-		encap = (uint8_t *)
-			&mhdr_cmd[_queue_attr[i]->size * MLX5_MHDR_MAX_CMD];
-		push = (uint8_t *)
-		       &encap[_queue_attr[i]->size * MLX5_ENCAP_MAX_LEN];
 		items = (struct rte_flow_item *)
-			&push[_queue_attr[i]->size * MLX5_PUSH_MAX_LEN];
+			&job[_queue_attr[i]->size];
 		upd_flow = (struct rte_flow_hw *)
 			   &items[_queue_attr[i]->size * MLX5_HW_MAX_ITEMS];
 		for (j = 0; j < _queue_attr[i]->size; j++) {
-			job[j].mhdr_cmd = &mhdr_cmd[j * MLX5_MHDR_MAX_CMD];
-			job[j].encap_data = &encap[j * MLX5_ENCAP_MAX_LEN];
-			job[j].push_data = &push[j * MLX5_PUSH_MAX_LEN];
 			job[j].items = &items[j * MLX5_HW_MAX_ITEMS];
 			job[j].upd_flow = &upd_flow[j];
 			priv->hw_q[i].job[j] = &job[j];
-- 
2.39.2