From mboxrd@z Thu Jan 1 00:00:00 1970
From: Suanming Mou
Subject: [PATCH v3 10/14] net/mlx5: add flow jump action
Date: Thu, 24 Feb 2022 05:10:25 +0200
Message-ID: <20220224031029.14049-11-suanmingm@nvidia.com>
In-Reply-To: <20220224031029.14049-1-suanmingm@nvidia.com>
References: <20220210162926.20436-1-suanmingm@nvidia.com>
 <20220224031029.14049-1-suanmingm@nvidia.com>
MIME-Version: 1.0
Content-Type: text/plain
List-Id: DPDK patches and discussions

The jump action connects different levels of flow tables and allows
packet handling in a chain of flows.

A new action construct data struct is also added in this commit to
handle not only the dynamic jump action but also the other generic
dynamic actions. An action with an empty mask configuration is a
dynamic action: the dedicated action is created from the flow action
configuration during flow creation. In that dynamic case, the action
is appended to the table template's action list during table creation.
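As a reviewer's aside, the mask-driven split described above can be sketched in plain C. The struct and helper below are illustrative stand-ins, not driver code: a mask entry whose conf is NULL marks the action as dynamic, i.e. deferred to flow-creation time.

```c
#include <assert.h>
#include <stddef.h>

/* Illustrative stand-in (NOT driver code) for an rte_flow action/mask
 * pair: a NULL conf in the mask slot marks the action as dynamic. */
struct example_action {
	int type;
	const void *conf;
};

/* Count the actions a template would have to defer: those whose mask
 * carries no configuration. */
static int
count_dynamic_actions(const struct example_action *masks, int n)
{
	int i, dyn = 0;

	for (i = 0; i < n; i++)
		if (masks[i].conf == NULL)
			dyn++;
	return dyn;
}
```

Fully masked actions are translated once at table creation; the remainder are recorded and resolved per flow.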
When creating the flows, traverse the action list and pick the dynamic
action configuration details from the flow actions as the action
construct data struct describes, then create the dedicated dynamic
actions.

This commit adds the jump action and the generic dynamic action
construct mechanism.

Signed-off-by: Suanming Mou
Acked-by: Viacheslav Ovsiienko
---
 drivers/net/mlx5/mlx5.h         |   1 +
 drivers/net/mlx5/mlx5_flow.h    |  25 ++-
 drivers/net/mlx5/mlx5_flow_hw.c | 270 +++++++++++++++++++++++++++++---
 3 files changed, 275 insertions(+), 21 deletions(-)

diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index d94e98db77..f3732958a2 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -1527,6 +1527,7 @@ struct mlx5_priv {
 	/* HW steering global drop action. */
 	struct mlx5dr_action *hw_drop[MLX5_HW_ACTION_FLAG_MAX]
 				     [MLX5DR_TABLE_TYPE_MAX];
+	struct mlx5_indexed_pool *acts_ipool; /* Action data indexed pool. */
 #endif
 };
 
diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h
index 3add4c4a81..963dbd7806 100644
--- a/drivers/net/mlx5/mlx5_flow.h
+++ b/drivers/net/mlx5/mlx5_flow.h
@@ -1020,10 +1020,25 @@ struct rte_flow {
 /* HWS flow struct. */
 struct rte_flow_hw {
 	uint32_t idx; /* Flow index from indexed pool. */
+	uint32_t fate_type; /* Fate action type. */
+	union {
+		/* Jump action. */
+		struct mlx5_hw_jump_action *jump;
+	};
 	struct rte_flow_template_table *table; /* The table flow allcated from. */
 	struct mlx5dr_rule rule; /* HWS layer data struct. */
 } __rte_packed;
 
+/* rte flow action translate to DR action struct. */
+struct mlx5_action_construct_data {
+	LIST_ENTRY(mlx5_action_construct_data) next;
+	/* Ensure the action types are matched. */
+	int type;
+	uint32_t idx;  /* Data index. */
+	uint16_t action_src; /* rte_flow_action src offset. */
+	uint16_t action_dst; /* mlx5dr_rule_action dst offset. */
+};
+
 /* Flow item template struct. */
 struct rte_flow_pattern_template {
 	LIST_ENTRY(rte_flow_pattern_template) next;
@@ -1051,9 +1066,17 @@ struct mlx5_hw_jump_action {
 	struct mlx5dr_action *hws_action;
 };
 
+/* The maximum actions support in the flow. */
+#define MLX5_HW_MAX_ACTS 16
+
 /* DR action set struct. */
 struct mlx5_hw_actions {
-	struct mlx5dr_action *drop; /* Drop action. */
+	/* Dynamic action list. */
+	LIST_HEAD(act_list, mlx5_action_construct_data) act_list;
+	struct mlx5_hw_jump_action *jump; /* Jump action. */
+	uint32_t acts_num:4; /* Total action number. */
+	/* Translated DR action array from action template. */
+	struct mlx5dr_rule_action rule_acts[MLX5_HW_MAX_ACTS];
 };
 
 /* mlx5 action template struct. */
diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c
index ed14eacce2..f320d0db8c 100644
--- a/drivers/net/mlx5/mlx5_flow_hw.c
+++ b/drivers/net/mlx5/mlx5_flow_hw.c
@@ -36,18 +36,158 @@ static uint32_t mlx5_hw_act_flag[MLX5_HW_ACTION_FLAG_MAX]
 	},
 };
 
+/**
+ * Register destination table DR jump action.
+ *
+ * @param[in] dev
+ *   Pointer to the rte_eth_dev structure.
+ * @param[in] table_attr
+ *   Pointer to the flow attributes.
+ * @param[in] dest_group
+ *   The destination group ID.
+ * @param[out] error
+ *   Pointer to error structure.
+ *
+ * @return
+ *    Table on success, NULL otherwise and rte_errno is set.
+ */
+static struct mlx5_hw_jump_action *
+flow_hw_jump_action_register(struct rte_eth_dev *dev,
+			     const struct rte_flow_attr *attr,
+			     uint32_t dest_group,
+			     struct rte_flow_error *error)
+{
+	struct mlx5_priv *priv = dev->data->dev_private;
+	struct rte_flow_attr jattr = *attr;
+	struct mlx5_flow_group *grp;
+	struct mlx5_flow_cb_ctx ctx = {
+		.dev = dev,
+		.error = error,
+		.data = &jattr,
+	};
+	struct mlx5_list_entry *ge;
+
+	jattr.group = dest_group;
+	ge = mlx5_hlist_register(priv->sh->flow_tbls, dest_group, &ctx);
+	if (!ge)
+		return NULL;
+	grp = container_of(ge, struct mlx5_flow_group, entry);
+	return &grp->jump;
+}
+
+/**
+ * Release jump action.
+ *
+ * @param[in] dev
+ *   Pointer to the rte_eth_dev structure.
+ * @param[in] jump
+ *   Pointer to the jump action.
+ */
+
+static void
+flow_hw_jump_release(struct rte_eth_dev *dev, struct mlx5_hw_jump_action *jump)
+{
+	struct mlx5_priv *priv = dev->data->dev_private;
+	struct mlx5_flow_group *grp;
+
+	grp = container_of
+		(jump, struct mlx5_flow_group, jump);
+	mlx5_hlist_unregister(priv->sh->flow_tbls, &grp->entry);
+}
+
 /**
  * Destroy DR actions created by action template.
  *
  * For DR actions created during table creation's action translate.
  * Need to destroy the DR action when destroying the table.
  *
+ * @param[in] dev
+ *   Pointer to the rte_eth_dev structure.
  * @param[in] acts
  *   Pointer to the template HW steering DR actions.
  */
 static void
-__flow_hw_action_template_destroy(struct mlx5_hw_actions *acts __rte_unused)
+__flow_hw_action_template_destroy(struct rte_eth_dev *dev,
+				  struct mlx5_hw_actions *acts)
 {
+	struct mlx5_priv *priv = dev->data->dev_private;
+
+	if (acts->jump) {
+		struct mlx5_flow_group *grp;
+
+		grp = container_of
+			(acts->jump, struct mlx5_flow_group, jump);
+		mlx5_hlist_unregister(priv->sh->flow_tbls, &grp->entry);
+		acts->jump = NULL;
+	}
+}
+
+/**
+ * Append dynamic action to the dynamic action list.
+ *
+ * @param[in] priv
+ *   Pointer to the port private data structure.
+ * @param[in] acts
+ *   Pointer to the template HW steering DR actions.
+ * @param[in] type
+ *   Action type.
+ * @param[in] action_src
+ *   Offset of source rte flow action.
+ * @param[in] action_dst
+ *   Offset of destination DR action.
+ *
+ * @return
+ *    0 on success, negative value otherwise and rte_errno is set.
+ */
+static __rte_always_inline struct mlx5_action_construct_data *
+__flow_hw_act_data_alloc(struct mlx5_priv *priv,
+			 enum rte_flow_action_type type,
+			 uint16_t action_src,
+			 uint16_t action_dst)
+{
+	struct mlx5_action_construct_data *act_data;
+	uint32_t idx = 0;
+
+	act_data = mlx5_ipool_zmalloc(priv->acts_ipool, &idx);
+	if (!act_data)
+		return NULL;
+	act_data->idx = idx;
+	act_data->type = type;
+	act_data->action_src = action_src;
+	act_data->action_dst = action_dst;
+	return act_data;
+}
+
+/**
+ * Append dynamic action to the dynamic action list.
+ *
+ * @param[in] priv
+ *   Pointer to the port private data structure.
+ * @param[in] acts
+ *   Pointer to the template HW steering DR actions.
+ * @param[in] type
+ *   Action type.
+ * @param[in] action_src
+ *   Offset of source rte flow action.
+ * @param[in] action_dst
+ *   Offset of destination DR action.
+ *
+ * @return
+ *    0 on success, negative value otherwise and rte_errno is set.
+ */
+static __rte_always_inline int
+__flow_hw_act_data_general_append(struct mlx5_priv *priv,
+				  struct mlx5_hw_actions *acts,
+				  enum rte_flow_action_type type,
+				  uint16_t action_src,
+				  uint16_t action_dst)
+{
+	struct mlx5_action_construct_data *act_data;
+
+	act_data = __flow_hw_act_data_alloc(priv, type, action_src, action_dst);
+	if (!act_data)
+		return -1;
+	LIST_INSERT_HEAD(&acts->act_list, act_data, next);
+	return 0;
 }
 
 /**
@@ -80,14 +220,16 @@ flow_hw_actions_translate(struct rte_eth_dev *dev,
 			  const struct rte_flow_template_table_attr *table_attr,
 			  struct mlx5_hw_actions *acts,
 			  struct rte_flow_actions_template *at,
-			  struct rte_flow_error *error __rte_unused)
+			  struct rte_flow_error *error)
 {
 	struct mlx5_priv *priv = dev->data->dev_private;
 	const struct rte_flow_attr *attr = &table_attr->flow_attr;
 	struct rte_flow_action *actions = at->actions;
+	struct rte_flow_action *action_start = actions;
 	struct rte_flow_action *masks = at->masks;
 	bool actions_end = false;
-	uint32_t type;
+	uint32_t type, i;
+	int err;
 
 	if (attr->transfer)
 		type = MLX5DR_TABLE_TYPE_FDB;
@@ -95,14 +237,34 @@ flow_hw_actions_translate(struct rte_eth_dev *dev,
 		type = MLX5DR_TABLE_TYPE_NIC_TX;
 	else
 		type = MLX5DR_TABLE_TYPE_NIC_RX;
-	for (; !actions_end; actions++, masks++) {
+	for (i = 0; !actions_end; actions++, masks++) {
 		switch (actions->type) {
 		case RTE_FLOW_ACTION_TYPE_INDIRECT:
 			break;
 		case RTE_FLOW_ACTION_TYPE_VOID:
 			break;
 		case RTE_FLOW_ACTION_TYPE_DROP:
-			acts->drop = priv->hw_drop[!!attr->group][type];
+			acts->rule_acts[i++].action =
+				priv->hw_drop[!!attr->group][type];
+			break;
+		case RTE_FLOW_ACTION_TYPE_JUMP:
+			if (masks->conf) {
+				uint32_t jump_group =
+					((const struct rte_flow_action_jump *)
+					actions->conf)->group;
+				acts->jump = flow_hw_jump_action_register
+						(dev, attr, jump_group, error);
+				if (!acts->jump)
+					goto err;
+				acts->rule_acts[i].action = (!!attr->group) ?
+						acts->jump->hws_action :
+						acts->jump->root_action;
+			} else if (__flow_hw_act_data_general_append
+					(priv, acts, actions->type,
+					 actions - action_start, i)){
+				goto err;
+			}
+			i++;
 			break;
 		case RTE_FLOW_ACTION_TYPE_END:
 			actions_end = true;
@@ -111,7 +273,14 @@ flow_hw_actions_translate(struct rte_eth_dev *dev,
 			break;
 		}
 	}
+	acts->acts_num = i;
 	return 0;
+err:
+	err = rte_errno;
+	__flow_hw_action_template_destroy(dev, acts);
+	return rte_flow_error_set(error, err,
+				  RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
+				  "fail to create rte table");
 }
 
 /**
@@ -120,6 +289,10 @@ flow_hw_actions_translate(struct rte_eth_dev *dev,
  * For action template contains dynamic actions, these actions need to
  * be updated according to the rte_flow action during flow creation.
  *
+ * @param[in] dev
+ *   Pointer to the rte_eth_dev structure.
+ * @param[in] job
+ *   Pointer to job descriptor.
  * @param[in] hw_acts
  *   Pointer to translated actions from template.
  * @param[in] actions
@@ -133,31 +306,63 @@
  *    0 on success, negative value otherwise and rte_errno is set.
  */
 static __rte_always_inline int
-flow_hw_actions_construct(struct mlx5_hw_actions *hw_acts,
+flow_hw_actions_construct(struct rte_eth_dev *dev,
+			  struct mlx5_hw_q_job *job,
+			  struct mlx5_hw_actions *hw_acts,
 			  const struct rte_flow_action actions[],
 			  struct mlx5dr_rule_action *rule_acts,
 			  uint32_t *acts_num)
 {
-	bool actions_end = false;
-	uint32_t i;
+	struct rte_flow_template_table *table = job->flow->table;
+	struct mlx5_action_construct_data *act_data;
+	const struct rte_flow_action *action;
+	struct rte_flow_attr attr = {
+		.ingress = 1,
+	};
 
-	for (i = 0; !actions_end || (i >= MLX5_HW_MAX_ACTS); actions++) {
-		switch (actions->type) {
+	memcpy(rule_acts, hw_acts->rule_acts,
+	       sizeof(*rule_acts) * hw_acts->acts_num);
+	*acts_num = hw_acts->acts_num;
+	if (LIST_EMPTY(&hw_acts->act_list))
+		return 0;
+	attr.group = table->grp->group_id;
+	if (table->type == MLX5DR_TABLE_TYPE_FDB) {
+		attr.transfer = 1;
+		attr.ingress = 1;
+	} else if (table->type == MLX5DR_TABLE_TYPE_NIC_TX) {
+		attr.egress = 1;
+		attr.ingress = 0;
+	} else {
+		attr.ingress = 1;
+	}
+	LIST_FOREACH(act_data, &hw_acts->act_list, next) {
+		uint32_t jump_group;
+		struct mlx5_hw_jump_action *jump;
+
+		action = &actions[act_data->action_src];
+		MLX5_ASSERT(action->type == RTE_FLOW_ACTION_TYPE_INDIRECT ||
+			    (int)action->type == act_data->type);
+		switch (action->type) {
 		case RTE_FLOW_ACTION_TYPE_INDIRECT:
 			break;
 		case RTE_FLOW_ACTION_TYPE_VOID:
 			break;
-		case RTE_FLOW_ACTION_TYPE_DROP:
-			rule_acts[i++].action = hw_acts->drop;
-			break;
-		case RTE_FLOW_ACTION_TYPE_END:
-			actions_end = true;
+		case RTE_FLOW_ACTION_TYPE_JUMP:
+			jump_group = ((const struct rte_flow_action_jump *)
+				      action->conf)->group;
+			jump = flow_hw_jump_action_register
+				(dev, &attr, jump_group, NULL);
+			if (!jump)
+				return -1;
+			rule_acts[act_data->action_dst].action =
+				(!!attr.group) ?
+				jump->hws_action : jump->root_action;
+			job->flow->jump = jump;
+			job->flow->fate_type = MLX5_FLOW_FATE_JUMP;
 			break;
 		default:
 			break;
 		}
 	}
-	*acts_num = i;
 	return 0;
 }
 
@@ -239,7 +444,8 @@ flow_hw_async_flow_create(struct rte_eth_dev *dev,
 	rule_attr.user_data = job;
 	hw_acts = &table->ats[action_template_index].acts;
 	/* Construct the flow action array based on the input actions.*/
-	flow_hw_actions_construct(hw_acts, actions, rule_acts, &acts_num);
+	flow_hw_actions_construct(dev, job, hw_acts, actions,
+				  rule_acts, &acts_num);
 	ret = mlx5dr_rule_create(table->matcher,
 				 pattern_template_index, items,
 				 rule_acts, acts_num,
@@ -356,8 +562,11 @@ flow_hw_pull(struct rte_eth_dev *dev,
 		job = (struct mlx5_hw_q_job *)res[i].user_data;
 		/* Restore user data. */
 		res[i].user_data = job->user_data;
-		if (job->type == MLX5_HW_Q_JOB_TYPE_DESTROY)
+		if (job->type == MLX5_HW_Q_JOB_TYPE_DESTROY) {
+			if (job->flow->fate_type == MLX5_FLOW_FATE_JUMP)
+				flow_hw_jump_release(dev, job->flow->jump);
 			mlx5_ipool_free(job->flow->table->flow, job->flow->idx);
+		}
 		priv->hw_q[queue].job[priv->hw_q[queue].job_idx++] = job;
 	}
 	return ret;
@@ -642,6 +851,7 @@ flow_hw_table_create(struct rte_eth_dev *dev,
 			rte_errno = EINVAL;
 			goto at_error;
 		}
+		LIST_INIT(&tbl->ats[i].acts.act_list);
 		err = flow_hw_actions_translate(dev, attr,
 						&tbl->ats[i].acts,
 						action_templates[i], error);
@@ -657,7 +867,7 @@ flow_hw_table_create(struct rte_eth_dev *dev,
 	return tbl;
 at_error:
 	while (i--) {
-		__flow_hw_action_template_destroy(&tbl->ats[i].acts);
+		__flow_hw_action_template_destroy(dev, &tbl->ats[i].acts);
 		__atomic_sub_fetch(&action_templates[i]->refcnt,
 				   1, __ATOMIC_RELAXED);
 	}
@@ -716,7 +926,7 @@ flow_hw_table_destroy(struct rte_eth_dev *dev,
 		__atomic_sub_fetch(&table->its[i]->refcnt,
 				   1, __ATOMIC_RELAXED);
 	for (i = 0; i < table->nb_action_templates; i++) {
-		__flow_hw_action_template_destroy(&table->ats[i].acts);
+		__flow_hw_action_template_destroy(dev, &table->ats[i].acts);
 		__atomic_sub_fetch(&table->ats[i].action_template->refcnt,
 				   1, __ATOMIC_RELAXED);
 	}
@@ -1167,6 +1377,15 @@ flow_hw_configure(struct rte_eth_dev *dev,
 	struct mlx5_hw_q *hw_q;
 	struct mlx5_hw_q_job *job = NULL;
 	uint32_t mem_size, i, j;
+	struct mlx5_indexed_pool_config cfg = {
+		.size = sizeof(struct rte_flow_hw),
+		.trunk_size = 4096,
+		.need_lock = 1,
+		.release_mem_en = !!priv->sh->config.reclaim_mode,
+		.malloc = mlx5_malloc,
+		.free = mlx5_free,
+		.type = "mlx5_hw_action_construct_data",
+	};
 
 	if (!port_attr || !nb_queue || !queue_attr) {
 		rte_errno = EINVAL;
@@ -1185,6 +1404,9 @@ flow_hw_configure(struct rte_eth_dev *dev,
 		}
 		flow_hw_resource_release(dev);
 	}
+	priv->acts_ipool = mlx5_ipool_create(&cfg);
+	if (!priv->acts_ipool)
+		goto err;
 	/* Allocate the queue job descriptor LIFO. */
 	mem_size = sizeof(priv->hw_q[0]) * nb_queue;
 	for (i = 0; i < nb_queue; i++) {
@@ -1252,6 +1474,10 @@ flow_hw_configure(struct rte_eth_dev *dev,
 	claim_zero(mlx5dr_context_close(dr_ctx));
 	mlx5_free(priv->hw_q);
 	priv->hw_q = NULL;
+	if (priv->acts_ipool) {
+		mlx5_ipool_destroy(priv->acts_ipool);
+		priv->acts_ipool = NULL;
+	}
 	return rte_flow_error_set(error, rte_errno,
 				  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
 				  NULL, "fail to configure port");
@@ -1293,6 +1519,10 @@ flow_hw_resource_release(struct rte_eth_dev *dev)
 			mlx5dr_action_destroy(priv->hw_drop[i][j]);
 		}
 	}
+	if (priv->acts_ipool) {
+		mlx5_ipool_destroy(priv->acts_ipool);
+		priv->acts_ipool = NULL;
+	}
 	mlx5_free(priv->hw_q);
 	priv->hw_q = NULL;
 	claim_zero(mlx5dr_context_close(priv->dr_ctx));
-- 
2.25.1
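For readers new to the template API, the translate/construct split in this patch can be modeled with a few lines of plain C. Everything below (names and types) is an illustrative stand-in for the driver's mlx5_action_construct_data list, not actual driver code: translate time records (action_src, action_dst) offset pairs, and construct time walks the list and fills the destination rule-action slots from the per-flow actions.

```c
#include <assert.h>
#include <stdlib.h>
#include <sys/queue.h>

/* Simplified model of the construct-data list: each node remembers
 * which source action slot fills which destination rule slot. */
struct construct_data {
	LIST_ENTRY(construct_data) next;
	int action_src;  /* offset into the flow's action array */
	int action_dst;  /* offset into the rule-action array */
};

LIST_HEAD(act_list, construct_data);

/* Translate phase: record that source slot src must fill dest slot dst. */
static int
record_dynamic(struct act_list *head, int src, int dst)
{
	struct construct_data *d = calloc(1, sizeof(*d));

	if (d == NULL)
		return -1;
	d->action_src = src;
	d->action_dst = dst;
	LIST_INSERT_HEAD(head, d, next);
	return 0;
}

/* Construct phase: copy the per-flow values into the rule slots. */
static void
construct_rule(const struct act_list *head, const int *flow_vals,
	       int *rule_slots)
{
	const struct construct_data *d;

	LIST_FOREACH(d, head, next)
		rule_slots[d->action_dst] = flow_vals[d->action_src];
}
```

In the driver the recorded entries come from an indexed pool rather than calloc, and the copied values are DR action handles (e.g. the registered jump action) rather than ints, but the list traversal shape is the same.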