From: Suanming Mou
Subject: [PATCH v3 13/14] net/mlx5: add indirect action
Date: Thu, 24 Feb 2022 05:10:28 +0200
Message-ID: <20220224031029.14049-14-suanmingm@nvidia.com>
X-Mailer: git-send-email 2.18.1
In-Reply-To: <20220224031029.14049-1-suanmingm@nvidia.com>
References: <20220210162926.20436-1-suanmingm@nvidia.com> <20220224031029.14049-1-suanmingm@nvidia.com>
MIME-Version: 1.0
Content-Type: text/plain
X-BeenThere: dev@dpdk.org
List-Id: DPDK patches and discussions
Errors-To: dev-bounces@dpdk.org

HW steering can support indirect actions as well. With an indirect action, a flow can be created with more flexible shared RSS action selection. This saves keeping a separate action template for each different RSS action.
This commit adds the flow queue operation callbacks for:
rte_flow_async_action_handle_create();
rte_flow_async_action_handle_destroy();
rte_flow_async_action_handle_update();

Signed-off-by: Suanming Mou
Acked-by: Viacheslav Ovsiienko
---
 drivers/net/mlx5/mlx5_flow.c    | 131 ++++++++++
 drivers/net/mlx5/mlx5_flow.h    |  59 +++++
 drivers/net/mlx5/mlx5_flow_dv.c |  21 +-
 drivers/net/mlx5/mlx5_flow_hw.c | 414 +++++++++++++++++++++++++++++++-
 4 files changed, 612 insertions(+), 13 deletions(-)

diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c
index cbd8408e30..5a4e000c12 100644
--- a/drivers/net/mlx5/mlx5_flow.c
+++ b/drivers/net/mlx5/mlx5_flow.c
@@ -879,6 +879,29 @@ mlx5_flow_push(struct rte_eth_dev *dev,
 		uint32_t queue,
 		struct rte_flow_error *error);
 
+static struct rte_flow_action_handle *
+mlx5_flow_async_action_handle_create(struct rte_eth_dev *dev, uint32_t queue,
+				     const struct rte_flow_op_attr *attr,
+				     const struct rte_flow_indir_action_conf *conf,
+				     const struct rte_flow_action *action,
+				     void *user_data,
+				     struct rte_flow_error *error);
+
+static int
+mlx5_flow_async_action_handle_update(struct rte_eth_dev *dev, uint32_t queue,
+				     const struct rte_flow_op_attr *attr,
+				     struct rte_flow_action_handle *handle,
+				     const void *update,
+				     void *user_data,
+				     struct rte_flow_error *error);
+
+static int
+mlx5_flow_async_action_handle_destroy(struct rte_eth_dev *dev, uint32_t queue,
+				      const struct rte_flow_op_attr *attr,
+				      struct rte_flow_action_handle *handle,
+				      void *user_data,
+				      struct rte_flow_error *error);
+
 static const struct rte_flow_ops mlx5_flow_ops = {
 	.validate = mlx5_flow_validate,
 	.create = mlx5_flow_create,
@@ -911,6 +934,9 @@ static const struct rte_flow_ops mlx5_flow_ops = {
 	.async_destroy = mlx5_flow_async_flow_destroy,
 	.pull = mlx5_flow_pull,
 	.push = mlx5_flow_push,
+	.async_action_handle_create = mlx5_flow_async_action_handle_create,
+	.async_action_handle_update = mlx5_flow_async_action_handle_update,
+	.async_action_handle_destroy =
+		mlx5_flow_async_action_handle_destroy,
 };
 
 /* Tunnel information. */
@@ -8367,6 +8393,111 @@ mlx5_flow_push(struct rte_eth_dev *dev,
 	return fops->push(dev, queue, error);
 }
 
+/**
+ * Create shared action.
+ *
+ * @param[in] dev
+ *   Pointer to the rte_eth_dev structure.
+ * @param[in] queue
+ *   Which queue to be used.
+ * @param[in] attr
+ *   Operation attribute.
+ * @param[in] conf
+ *   Indirect action configuration.
+ * @param[in] action
+ *   rte_flow action detail.
+ * @param[in] user_data
+ *   Pointer to the user_data.
+ * @param[out] error
+ *   Pointer to error structure.
+ *
+ * @return
+ *   Action handle on success, NULL otherwise and rte_errno is set.
+ */
+static struct rte_flow_action_handle *
+mlx5_flow_async_action_handle_create(struct rte_eth_dev *dev, uint32_t queue,
+				     const struct rte_flow_op_attr *attr,
+				     const struct rte_flow_indir_action_conf *conf,
+				     const struct rte_flow_action *action,
+				     void *user_data,
+				     struct rte_flow_error *error)
+{
+	const struct mlx5_flow_driver_ops *fops =
+			flow_get_drv_ops(MLX5_FLOW_TYPE_HW);
+
+	return fops->async_action_create(dev, queue, attr, conf, action,
+					 user_data, error);
+}
+
+/**
+ * Update shared action.
+ *
+ * @param[in] dev
+ *   Pointer to the rte_eth_dev structure.
+ * @param[in] queue
+ *   Which queue to be used.
+ * @param[in] attr
+ *   Operation attribute.
+ * @param[in] handle
+ *   Action handle to be updated.
+ * @param[in] update
+ *   Update value.
+ * @param[in] user_data
+ *   Pointer to the user_data.
+ * @param[out] error
+ *   Pointer to error structure.
+ *
+ * @return
+ *   0 on success, negative value otherwise and rte_errno is set.
+ */
+static int
+mlx5_flow_async_action_handle_update(struct rte_eth_dev *dev, uint32_t queue,
+				     const struct rte_flow_op_attr *attr,
+				     struct rte_flow_action_handle *handle,
+				     const void *update,
+				     void *user_data,
+				     struct rte_flow_error *error)
+{
+	const struct mlx5_flow_driver_ops *fops =
+			flow_get_drv_ops(MLX5_FLOW_TYPE_HW);
+
+	return fops->async_action_update(dev, queue, attr, handle,
+					 update, user_data, error);
+}
+
+/**
+ * Destroy shared action.
+ *
+ * @param[in] dev
+ *   Pointer to the rte_eth_dev structure.
+ * @param[in] queue
+ *   Which queue to be used.
+ * @param[in] attr
+ *   Operation attribute.
+ * @param[in] handle
+ *   Action handle to be destroyed.
+ * @param[in] user_data
+ *   Pointer to the user_data.
+ * @param[out] error
+ *   Pointer to error structure.
+ *
+ * @return
+ *   0 on success, negative value otherwise and rte_errno is set.
+ */
+static int
+mlx5_flow_async_action_handle_destroy(struct rte_eth_dev *dev, uint32_t queue,
+				      const struct rte_flow_op_attr *attr,
+				      struct rte_flow_action_handle *handle,
+				      void *user_data,
+				      struct rte_flow_error *error)
+{
+	const struct mlx5_flow_driver_ops *fops =
+			flow_get_drv_ops(MLX5_FLOW_TYPE_HW);
+
+	return fops->async_action_destroy(dev, queue, attr, handle,
+					  user_data, error);
+}
+
 /**
  * Allocate a new memory for the counter values wrapped by all the needed
  * management.
diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h
index c83e73c793..4c224bbf52 100644
--- a/drivers/net/mlx5/mlx5_flow.h
+++ b/drivers/net/mlx5/mlx5_flow.h
@@ -41,6 +41,7 @@ enum mlx5_rte_flow_action_type {
 	MLX5_RTE_FLOW_ACTION_TYPE_AGE,
 	MLX5_RTE_FLOW_ACTION_TYPE_COUNT,
 	MLX5_RTE_FLOW_ACTION_TYPE_JUMP,
+	MLX5_RTE_FLOW_ACTION_TYPE_RSS,
 };
 
 #define MLX5_INDIRECT_ACTION_TYPE_OFFSET 30
@@ -1038,6 +1039,13 @@ struct mlx5_action_construct_data {
 	uint32_t idx;  /* Data index. */
 	uint16_t action_src; /* rte_flow_action src offset. */
 	uint16_t action_dst; /* mlx5dr_rule_action dst offset.
*/ + union { + struct { + uint64_t types; /* RSS hash types. */ + uint32_t level; /* RSS level. */ + uint32_t idx; /* Shared action index. */ + } shared_rss; + }; }; /* Flow item template struct. */ @@ -1046,6 +1054,7 @@ struct rte_flow_pattern_template { /* Template attributes. */ struct rte_flow_pattern_template_attr attr; struct mlx5dr_match_template *mt; /* mlx5 match template. */ + uint64_t item_flags; /* Item layer flags. */ uint32_t refcnt; /* Reference counter. */ }; @@ -1433,6 +1442,32 @@ typedef int (*mlx5_flow_push_t) uint32_t queue, struct rte_flow_error *error); +typedef struct rte_flow_action_handle *(*mlx5_flow_async_action_handle_create_t) + (struct rte_eth_dev *dev, + uint32_t queue, + const struct rte_flow_op_attr *attr, + const struct rte_flow_indir_action_conf *conf, + const struct rte_flow_action *action, + void *user_data, + struct rte_flow_error *error); + +typedef int (*mlx5_flow_async_action_handle_update_t) + (struct rte_eth_dev *dev, + uint32_t queue, + const struct rte_flow_op_attr *attr, + struct rte_flow_action_handle *handle, + const void *update, + void *user_data, + struct rte_flow_error *error); + +typedef int (*mlx5_flow_async_action_handle_destroy_t) + (struct rte_eth_dev *dev, + uint32_t queue, + const struct rte_flow_op_attr *attr, + struct rte_flow_action_handle *handle, + void *user_data, + struct rte_flow_error *error); + struct mlx5_flow_driver_ops { mlx5_flow_validate_t validate; mlx5_flow_prepare_t prepare; @@ -1482,6 +1517,9 @@ struct mlx5_flow_driver_ops { mlx5_flow_async_flow_destroy_t async_flow_destroy; mlx5_flow_pull_t pull; mlx5_flow_push_t push; + mlx5_flow_async_action_handle_create_t async_action_create; + mlx5_flow_async_action_handle_update_t async_action_update; + mlx5_flow_async_action_handle_destroy_t async_action_destroy; }; /* mlx5_flow.c */ @@ -1918,6 +1956,8 @@ void flow_dv_hashfields_set(uint64_t item_flags, uint64_t *hash_fields); void flow_dv_action_rss_l34_hash_adjust(uint64_t rss_types, uint64_t 
*hash_field); +uint32_t flow_dv_action_rss_hrxq_lookup(struct rte_eth_dev *dev, uint32_t idx, + const uint64_t hash_fields); struct mlx5_list_entry *flow_hw_grp_create_cb(void *tool_ctx, void *cb_ctx); void flow_hw_grp_remove_cb(void *tool_ctx, struct mlx5_list_entry *entry); @@ -1968,4 +2008,23 @@ mlx5_get_tof(const struct rte_flow_item *items, enum mlx5_tof_rule_type *rule_type); void flow_hw_resource_release(struct rte_eth_dev *dev); +int flow_dv_action_validate(struct rte_eth_dev *dev, + const struct rte_flow_indir_action_conf *conf, + const struct rte_flow_action *action, + struct rte_flow_error *err); +struct rte_flow_action_handle *flow_dv_action_create(struct rte_eth_dev *dev, + const struct rte_flow_indir_action_conf *conf, + const struct rte_flow_action *action, + struct rte_flow_error *err); +int flow_dv_action_destroy(struct rte_eth_dev *dev, + struct rte_flow_action_handle *handle, + struct rte_flow_error *error); +int flow_dv_action_update(struct rte_eth_dev *dev, + struct rte_flow_action_handle *handle, + const void *update, + struct rte_flow_error *err); +int flow_dv_action_query(struct rte_eth_dev *dev, + const struct rte_flow_action_handle *handle, + void *data, + struct rte_flow_error *error); #endif /* RTE_PMD_MLX5_FLOW_H_ */ diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c index d48726cf05..5f85100324 100644 --- a/drivers/net/mlx5/mlx5_flow_dv.c +++ b/drivers/net/mlx5/mlx5_flow_dv.c @@ -13845,9 +13845,9 @@ __flow_dv_action_rss_hrxq_set(struct mlx5_shared_action_rss *action, * @return * Valid hash RX queue index, otherwise 0. 
*/ -static uint32_t -__flow_dv_action_rss_hrxq_lookup(struct rte_eth_dev *dev, uint32_t idx, - const uint64_t hash_fields) +uint32_t +flow_dv_action_rss_hrxq_lookup(struct rte_eth_dev *dev, uint32_t idx, + const uint64_t hash_fields) { struct mlx5_priv *priv = dev->data->dev_private; struct mlx5_shared_action_rss *shared_rss = @@ -13975,7 +13975,7 @@ flow_dv_apply(struct rte_eth_dev *dev, struct rte_flow *flow, struct mlx5_hrxq *hrxq = NULL; uint32_t hrxq_idx; - hrxq_idx = __flow_dv_action_rss_hrxq_lookup(dev, + hrxq_idx = flow_dv_action_rss_hrxq_lookup(dev, rss_desc->shared_rss, dev_flow->hash_fields); if (hrxq_idx) @@ -14699,6 +14699,7 @@ __flow_dv_action_rss_setup(struct rte_eth_dev *dev, struct mlx5_shared_action_rss *shared_rss, struct rte_flow_error *error) { + struct mlx5_priv *priv = dev->data->dev_private; struct mlx5_flow_rss_desc rss_desc = { 0 }; size_t i; int err; @@ -14719,6 +14720,8 @@ __flow_dv_action_rss_setup(struct rte_eth_dev *dev, /* Set non-zero value to indicate a shared RSS. */ rss_desc.shared_rss = action_idx; rss_desc.ind_tbl = shared_rss->ind_tbl; + if (priv->sh->config.dv_flow_en == 2) + rss_desc.hws_flags = MLX5DR_ACTION_FLAG_HWS_RX; for (i = 0; i < MLX5_RSS_HASH_FIELDS_LEN; i++) { struct mlx5_hrxq *hrxq; uint64_t hash_fields = mlx5_rss_hash_fields[i]; @@ -14910,7 +14913,7 @@ __flow_dv_action_rss_release(struct rte_eth_dev *dev, uint32_t idx, * A valid shared action handle in case of success, NULL otherwise and * rte_errno is set. */ -static struct rte_flow_action_handle * +struct rte_flow_action_handle * flow_dv_action_create(struct rte_eth_dev *dev, const struct rte_flow_indir_action_conf *conf, const struct rte_flow_action *action, @@ -14980,7 +14983,7 @@ flow_dv_action_create(struct rte_eth_dev *dev, * @return * 0 on success, otherwise negative errno value. 
*/ -static int +int flow_dv_action_destroy(struct rte_eth_dev *dev, struct rte_flow_action_handle *handle, struct rte_flow_error *error) @@ -15190,7 +15193,7 @@ __flow_dv_action_ct_update(struct rte_eth_dev *dev, uint32_t idx, * @return * 0 on success, otherwise negative errno value. */ -static int +int flow_dv_action_update(struct rte_eth_dev *dev, struct rte_flow_action_handle *handle, const void *update, @@ -15862,7 +15865,7 @@ flow_dv_query_count(struct rte_eth_dev *dev, uint32_t cnt_idx, void *data, "counters are not available"); } -static int +int flow_dv_action_query(struct rte_eth_dev *dev, const struct rte_flow_action_handle *handle, void *data, struct rte_flow_error *error) @@ -17555,7 +17558,7 @@ flow_dv_counter_allocate(struct rte_eth_dev *dev) * @return * 0 on success, otherwise negative errno value. */ -static int +int flow_dv_action_validate(struct rte_eth_dev *dev, const struct rte_flow_indir_action_conf *conf, const struct rte_flow_action *action, diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c index a28e3c00b3..95df6e5190 100644 --- a/drivers/net/mlx5/mlx5_flow_hw.c +++ b/drivers/net/mlx5/mlx5_flow_hw.c @@ -62,6 +62,72 @@ flow_hw_rxq_flag_set(struct rte_eth_dev *dev, bool enable) priv->mark_enabled = enable; } +/** + * Generate the pattern item flags. + * Will be used for shared RSS action. + * + * @param[in] items + * Pointer to the list of items. + * + * @return + * Item flags. + */ +static uint64_t +flow_hw_rss_item_flags_get(const struct rte_flow_item items[]) +{ + uint64_t item_flags = 0; + uint64_t last_item = 0; + + for (; items->type != RTE_FLOW_ITEM_TYPE_END; items++) { + int tunnel = !!(item_flags & MLX5_FLOW_LAYER_TUNNEL); + int item_type = items->type; + + switch (item_type) { + case RTE_FLOW_ITEM_TYPE_IPV4: + last_item = tunnel ? MLX5_FLOW_LAYER_INNER_L3_IPV4 : + MLX5_FLOW_LAYER_OUTER_L3_IPV4; + break; + case RTE_FLOW_ITEM_TYPE_IPV6: + last_item = tunnel ? 
MLX5_FLOW_LAYER_INNER_L3_IPV6 : + MLX5_FLOW_LAYER_OUTER_L3_IPV6; + break; + case RTE_FLOW_ITEM_TYPE_TCP: + last_item = tunnel ? MLX5_FLOW_LAYER_INNER_L4_TCP : + MLX5_FLOW_LAYER_OUTER_L4_TCP; + break; + case RTE_FLOW_ITEM_TYPE_UDP: + last_item = tunnel ? MLX5_FLOW_LAYER_INNER_L4_UDP : + MLX5_FLOW_LAYER_OUTER_L4_UDP; + break; + case RTE_FLOW_ITEM_TYPE_GRE: + last_item = MLX5_FLOW_LAYER_GRE; + break; + case RTE_FLOW_ITEM_TYPE_NVGRE: + last_item = MLX5_FLOW_LAYER_GRE; + break; + case RTE_FLOW_ITEM_TYPE_VXLAN: + last_item = MLX5_FLOW_LAYER_VXLAN; + break; + case RTE_FLOW_ITEM_TYPE_VXLAN_GPE: + last_item = MLX5_FLOW_LAYER_VXLAN_GPE; + break; + case RTE_FLOW_ITEM_TYPE_GENEVE: + last_item = MLX5_FLOW_LAYER_GENEVE; + break; + case RTE_FLOW_ITEM_TYPE_MPLS: + last_item = MLX5_FLOW_LAYER_MPLS; + break; + case RTE_FLOW_ITEM_TYPE_GTP: + last_item = MLX5_FLOW_LAYER_GTP; + break; + default: + break; + } + item_flags |= last_item; + } + return item_flags; +} + /** * Register destination table DR jump action. * @@ -266,6 +332,96 @@ __flow_hw_act_data_general_append(struct mlx5_priv *priv, return 0; } +/** + * Append shared RSS action to the dynamic action list. + * + * @param[in] priv + * Pointer to the port private data structure. + * @param[in] acts + * Pointer to the template HW steering DR actions. + * @param[in] type + * Action type. + * @param[in] action_src + * Offset of source rte flow action. + * @param[in] action_dst + * Offset of destination DR action. + * @param[in] idx + * Shared RSS index. + * @param[in] rss + * Pointer to the shared RSS info. + * + * @return + * 0 on success, negative value otherwise and rte_errno is set. 
+ */ +static __rte_always_inline int +__flow_hw_act_data_shared_rss_append(struct mlx5_priv *priv, + struct mlx5_hw_actions *acts, + enum rte_flow_action_type type, + uint16_t action_src, + uint16_t action_dst, + uint32_t idx, + struct mlx5_shared_action_rss *rss) +{ struct mlx5_action_construct_data *act_data; + + act_data = __flow_hw_act_data_alloc(priv, type, action_src, action_dst); + if (!act_data) + return -1; + act_data->shared_rss.level = rss->origin.level; + act_data->shared_rss.types = !rss->origin.types ? RTE_ETH_RSS_IP : + rss->origin.types; + act_data->shared_rss.idx = idx; + LIST_INSERT_HEAD(&acts->act_list, act_data, next); + return 0; +} + +/** + * Translate shared indirect action. + * + * @param[in] dev + * Pointer to the rte_eth_dev data structure. + * @param[in] action + * Pointer to the shared indirect rte_flow action. + * @param[in] acts + * Pointer to the template HW steering DR actions. + * @param[in] action_src + * Offset of source rte flow action. + * @param[in] action_dst + * Offset of destination DR action. + * + * @return + * 0 on success, negative value otherwise and rte_errno is set. 
+ */ +static __rte_always_inline int +flow_hw_shared_action_translate(struct rte_eth_dev *dev, + const struct rte_flow_action *action, + struct mlx5_hw_actions *acts, + uint16_t action_src, + uint16_t action_dst) +{ + struct mlx5_priv *priv = dev->data->dev_private; + struct mlx5_shared_action_rss *shared_rss; + uint32_t act_idx = (uint32_t)(uintptr_t)action->conf; + uint32_t type = act_idx >> MLX5_INDIRECT_ACTION_TYPE_OFFSET; + uint32_t idx = act_idx & + ((1u << MLX5_INDIRECT_ACTION_TYPE_OFFSET) - 1); + + switch (type) { + case MLX5_INDIRECT_ACTION_TYPE_RSS: + shared_rss = mlx5_ipool_get + (priv->sh->ipool[MLX5_IPOOL_RSS_SHARED_ACTIONS], idx); + if (!shared_rss || __flow_hw_act_data_shared_rss_append + (priv, acts, + (enum rte_flow_action_type)MLX5_RTE_FLOW_ACTION_TYPE_RSS, + action_src, action_dst, idx, shared_rss)) + return -1; + break; + default: + DRV_LOG(WARNING, "Unsupported shared action type:%d", type); + break; + } + return 0; +} + /** * Translate rte_flow actions to DR action. * @@ -316,6 +472,20 @@ flow_hw_actions_translate(struct rte_eth_dev *dev, for (i = 0; !actions_end; actions++, masks++) { switch (actions->type) { case RTE_FLOW_ACTION_TYPE_INDIRECT: + if (!attr->group) { + DRV_LOG(ERR, "Indirect action is not supported in root table."); + goto err; + } + if (actions->conf && masks->conf) { + if (flow_hw_shared_action_translate + (dev, actions, acts, actions - action_start, i)) + goto err; + } else if (__flow_hw_act_data_general_append + (priv, acts, actions->type, + actions - action_start, i)){ + goto err; + } + i++; break; case RTE_FLOW_ACTION_TYPE_VOID: break; @@ -407,6 +577,115 @@ flow_hw_actions_translate(struct rte_eth_dev *dev, "fail to create rte table"); } +/** + * Get shared indirect action. + * + * @param[in] dev + * Pointer to the rte_eth_dev data structure. + * @param[in] act_data + * Pointer to the recorded action construct data. + * @param[in] item_flags + * The matcher itme_flags used for RSS lookup. 
+ * @param[in] rule_act + * Pointer to the shared action's destination rule DR action. + * + * @return + * 0 on success, negative value otherwise and rte_errno is set. + */ +static __rte_always_inline int +flow_hw_shared_action_get(struct rte_eth_dev *dev, + struct mlx5_action_construct_data *act_data, + const uint64_t item_flags, + struct mlx5dr_rule_action *rule_act) +{ + struct mlx5_priv *priv = dev->data->dev_private; + struct mlx5_flow_rss_desc rss_desc = { 0 }; + uint64_t hash_fields = 0; + uint32_t hrxq_idx = 0; + struct mlx5_hrxq *hrxq = NULL; + int act_type = act_data->type; + + switch (act_type) { + case MLX5_RTE_FLOW_ACTION_TYPE_RSS: + rss_desc.level = act_data->shared_rss.level; + rss_desc.types = act_data->shared_rss.types; + flow_dv_hashfields_set(item_flags, &rss_desc, &hash_fields); + hrxq_idx = flow_dv_action_rss_hrxq_lookup + (dev, act_data->shared_rss.idx, hash_fields); + if (hrxq_idx) + hrxq = mlx5_ipool_get(priv->sh->ipool[MLX5_IPOOL_HRXQ], + hrxq_idx); + if (hrxq) { + rule_act->action = hrxq->action; + return 0; + } + break; + default: + DRV_LOG(WARNING, "Unsupported shared action type:%d", + act_data->type); + break; + } + return -1; +} + +/** + * Construct shared indirect action. + * + * @param[in] dev + * Pointer to the rte_eth_dev data structure. + * @param[in] action + * Pointer to the shared indirect rte_flow action. + * @param[in] table + * Pointer to the flow table. + * @param[in] it_idx + * Item template index the action template refer to. + * @param[in] rule_act + * Pointer to the shared action's destination rule DR action. + * + * @return + * 0 on success, negative value otherwise and rte_errno is set. 
+ */ +static __rte_always_inline int +flow_hw_shared_action_construct(struct rte_eth_dev *dev, + const struct rte_flow_action *action, + struct rte_flow_template_table *table, + const uint8_t it_idx, + struct mlx5dr_rule_action *rule_act) +{ + struct mlx5_priv *priv = dev->data->dev_private; + struct mlx5_action_construct_data act_data; + struct mlx5_shared_action_rss *shared_rss; + uint32_t act_idx = (uint32_t)(uintptr_t)action->conf; + uint32_t type = act_idx >> MLX5_INDIRECT_ACTION_TYPE_OFFSET; + uint32_t idx = act_idx & + ((1u << MLX5_INDIRECT_ACTION_TYPE_OFFSET) - 1); + uint64_t item_flags; + + memset(&act_data, 0, sizeof(act_data)); + switch (type) { + case MLX5_INDIRECT_ACTION_TYPE_RSS: + act_data.type = MLX5_RTE_FLOW_ACTION_TYPE_RSS; + shared_rss = mlx5_ipool_get + (priv->sh->ipool[MLX5_IPOOL_RSS_SHARED_ACTIONS], idx); + if (!shared_rss) + return -1; + act_data.shared_rss.idx = idx; + act_data.shared_rss.level = shared_rss->origin.level; + act_data.shared_rss.types = !shared_rss->origin.types ? + RTE_ETH_RSS_IP : + shared_rss->origin.types; + item_flags = table->its[it_idx]->item_flags; + if (flow_hw_shared_action_get + (dev, &act_data, item_flags, rule_act)) + return -1; + break; + default: + DRV_LOG(WARNING, "Unsupported shared action type:%d", type); + break; + } + return 0; +} + /** * Construct flow action array. * @@ -419,6 +698,8 @@ flow_hw_actions_translate(struct rte_eth_dev *dev, * Pointer to job descriptor. * @param[in] hw_acts * Pointer to translated actions from template. + * @param[in] it_idx + * Item template index the action template refer to. * @param[in] actions * Array of rte_flow action need to be checked. 
* @param[in] rule_acts @@ -432,7 +713,8 @@ flow_hw_actions_translate(struct rte_eth_dev *dev, static __rte_always_inline int flow_hw_actions_construct(struct rte_eth_dev *dev, struct mlx5_hw_q_job *job, - struct mlx5_hw_actions *hw_acts, + const struct mlx5_hw_actions *hw_acts, + const uint8_t it_idx, const struct rte_flow_action actions[], struct mlx5dr_rule_action *rule_acts, uint32_t *acts_num) @@ -464,14 +746,19 @@ flow_hw_actions_construct(struct rte_eth_dev *dev, LIST_FOREACH(act_data, &hw_acts->act_list, next) { uint32_t jump_group; uint32_t tag; + uint64_t item_flags; struct mlx5_hw_jump_action *jump; struct mlx5_hrxq *hrxq; action = &actions[act_data->action_src]; MLX5_ASSERT(action->type == RTE_FLOW_ACTION_TYPE_INDIRECT || (int)action->type == act_data->type); - switch (action->type) { + switch (act_data->type) { case RTE_FLOW_ACTION_TYPE_INDIRECT: + if (flow_hw_shared_action_construct + (dev, action, table, it_idx, + &rule_acts[act_data->action_dst])) + return -1; break; case RTE_FLOW_ACTION_TYPE_VOID: break; @@ -504,6 +791,13 @@ flow_hw_actions_construct(struct rte_eth_dev *dev, job->flow->hrxq = hrxq; job->flow->fate_type = MLX5_FLOW_FATE_QUEUE; break; + case MLX5_RTE_FLOW_ACTION_TYPE_RSS: + item_flags = table->its[it_idx]->item_flags; + if (flow_hw_shared_action_get + (dev, act_data, item_flags, + &rule_acts[act_data->action_dst])) + return -1; + break; default: break; } @@ -589,8 +883,8 @@ flow_hw_async_flow_create(struct rte_eth_dev *dev, rule_attr.user_data = job; hw_acts = &table->ats[action_template_index].acts; /* Construct the flow action array based on the input actions.*/ - flow_hw_actions_construct(dev, job, hw_acts, actions, - rule_acts, &acts_num); + flow_hw_actions_construct(dev, job, hw_acts, pattern_template_index, + actions, rule_acts, &acts_num); ret = mlx5dr_rule_create(table->matcher, pattern_template_index, items, rule_acts, acts_num, @@ -1237,6 +1531,7 @@ flow_hw_pattern_template_create(struct rte_eth_dev *dev, "cannot create 
match template"); return NULL; } + it->item_flags = flow_hw_rss_item_flags_get(items); __atomic_fetch_add(&it->refcnt, 1, __ATOMIC_RELAXED); LIST_INSERT_HEAD(&priv->flow_hw_itt, it, next); return it; @@ -1685,6 +1980,109 @@ flow_hw_resource_release(struct rte_eth_dev *dev) priv->nb_queue = 0; } +/** + * Create shared action. + * + * @param[in] dev + * Pointer to the rte_eth_dev structure. + * @param[in] queue + * Which queue to be used.. + * @param[in] attr + * Operation attribute. + * @param[in] conf + * Indirect action configuration. + * @param[in] action + * rte_flow action detail. + * @param[in] user_data + * Pointer to the user_data. + * @param[out] error + * Pointer to error structure. + * + * @return + * Action handle on success, NULL otherwise and rte_errno is set. + */ +static struct rte_flow_action_handle * +flow_hw_action_handle_create(struct rte_eth_dev *dev, uint32_t queue, + const struct rte_flow_op_attr *attr, + const struct rte_flow_indir_action_conf *conf, + const struct rte_flow_action *action, + void *user_data, + struct rte_flow_error *error) +{ + RTE_SET_USED(queue); + RTE_SET_USED(attr); + RTE_SET_USED(user_data); + return flow_dv_action_create(dev, conf, action, error); +} + +/** + * Update shared action. + * + * @param[in] dev + * Pointer to the rte_eth_dev structure. + * @param[in] queue + * Which queue to be used.. + * @param[in] attr + * Operation attribute. + * @param[in] handle + * Action handle to be updated. + * @param[in] update + * Update value. + * @param[in] user_data + * Pointer to the user_data. + * @param[out] error + * Pointer to error structure. + * + * @return + * 0 on success, negative value otherwise and rte_errno is set. 
+ */ +static int +flow_hw_action_handle_update(struct rte_eth_dev *dev, uint32_t queue, + const struct rte_flow_op_attr *attr, + struct rte_flow_action_handle *handle, + const void *update, + void *user_data, + struct rte_flow_error *error) +{ + RTE_SET_USED(queue); + RTE_SET_USED(attr); + RTE_SET_USED(user_data); + return flow_dv_action_update(dev, handle, update, error); +} + +/** + * Destroy shared action. + * + * @param[in] dev + * Pointer to the rte_eth_dev structure. + * @param[in] queue + * Which queue to be used.. + * @param[in] attr + * Operation attribute. + * @param[in] handle + * Action handle to be destroyed. + * @param[in] user_data + * Pointer to the user_data. + * @param[out] error + * Pointer to error structure. + * + * @return + * 0 on success, negative value otherwise and rte_errno is set. + */ +static int +flow_hw_action_handle_destroy(struct rte_eth_dev *dev, uint32_t queue, + const struct rte_flow_op_attr *attr, + struct rte_flow_action_handle *handle, + void *user_data, + struct rte_flow_error *error) +{ + RTE_SET_USED(queue); + RTE_SET_USED(attr); + RTE_SET_USED(user_data); + return flow_dv_action_destroy(dev, handle, error); +} + + const struct mlx5_flow_driver_ops mlx5_flow_hw_drv_ops = { .info_get = flow_hw_info_get, .configure = flow_hw_configure, @@ -1698,6 +2096,14 @@ const struct mlx5_flow_driver_ops mlx5_flow_hw_drv_ops = { .async_flow_destroy = flow_hw_async_flow_destroy, .pull = flow_hw_pull, .push = flow_hw_push, + .async_action_create = flow_hw_action_handle_create, + .async_action_destroy = flow_hw_action_handle_destroy, + .async_action_update = flow_hw_action_handle_update, + .action_validate = flow_dv_action_validate, + .action_create = flow_dv_action_create, + .action_destroy = flow_dv_action_destroy, + .action_update = flow_dv_action_update, + .action_query = flow_dv_action_query, }; #endif -- 2.25.1