From mboxrd@z Thu Jan  1 00:00:00 1970
From: Suanming Mou <suanmingm@nvidia.com>
To: dev@dpdk.org
Subject: [PATCH v3 08/14] net/mlx5: add basic flow queue operation
Date: Thu, 24 Feb 2022 05:10:23 +0200
Message-ID: <20220224031029.14049-9-suanmingm@nvidia.com>
X-Mailer: git-send-email 2.18.1
In-Reply-To: <20220224031029.14049-1-suanmingm@nvidia.com>
References: <20220210162926.20436-1-suanmingm@nvidia.com>
 <20220224031029.14049-1-suanmingm@nvidia.com>
MIME-Version: 1.0
Content-Type: text/plain
List-Id: DPDK patches and discussions

The HW steering uses an async, queue-based flow rule management
mechanism. The matcher and part of the actions have been prepared
during flow table creation. Some remaining actions will be constructed
during flow creation if needed.

A flow postpone attribute bit describes whether the flow management
should be applied to the HW directly. An extra push function is
provided to force push all the cached flows to the HW.

Once the flow has been applied to the HW, the pull function will be
called to fetch the results of the enqueued creation/destruction
operations.

The DR rule flow memory is represented in the PMD layer instead of
being allocated from the HW steering layer. While destroying the flow,
the flow rule memory can only be freed after the CQE is received.

The HW queue job descriptor is introduced to convey the flow
information and operation type between the flow insertion/destruction
and the pull function.

This commit adds the basic flow queue operations:

rte_flow_async_create();
rte_flow_async_destroy();
rte_flow_push();
rte_flow_pull();
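For context, here is a minimal usage sketch (not part of the patch) of
how an application is expected to drive these four calls. It assumes
HW steering is enabled (``dv_flow_en=2``), the port was configured with
at least one flow queue, and a template table ``tbl`` with pattern and
actions templates at index 0 was created by the earlier patches in this
series; ``port_id``, ``pattern`` and ``actions`` are placeholders.

#include <rte_common.h>
#include <rte_flow.h>

static int
insert_and_poll(uint16_t port_id, struct rte_flow_template_table *tbl,
		const struct rte_flow_item pattern[],
		const struct rte_flow_action actions[])
{
	const struct rte_flow_op_attr op_attr = { .postpone = 1 };
	struct rte_flow_op_result res[32];
	struct rte_flow_error error;
	struct rte_flow *flow;
	int n;

	/* Enqueue the rule on queue 0; with postpone set it is only cached. */
	flow = rte_flow_async_create(port_id, 0, &op_attr, tbl,
				     pattern, 0, actions, 0,
				     NULL, &error);
	if (flow == NULL)
		return -1;
	/* Force-push all cached operations on queue 0 to the HW. */
	if (rte_flow_push(port_id, 0, &error))
		return -1;
	/* Completion is asynchronous: poll until the result arrives.
	 * Real code would bound this loop. */
	do {
		n = rte_flow_pull(port_id, 0, res, RTE_DIM(res), &error);
	} while (n == 0);
	if (n < 0 || res[0].status != RTE_FLOW_OP_SUCCESS)
		return -1;
	/* Teardown goes through the same queue. */
	return rte_flow_async_destroy(port_id, 0, &op_attr, flow,
				      NULL, &error);
}

Destroy completions must be pulled the same way; the flow rule memory
is released only then, which is the CQE-driven lifetime rule the
commit message describes.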
Signed-off-by: Suanming Mou
Acked-by: Viacheslav Ovsiienko
---
 doc/guides/nics/mlx5.rst               |   6 +
 doc/guides/rel_notes/release_22_03.rst |   6 +
 drivers/net/mlx5/mlx5.h                |   2 +-
 drivers/net/mlx5/mlx5_flow.c           | 188 ++++++++++++++++
 drivers/net/mlx5/mlx5_flow.h           |  41 ++++
 drivers/net/mlx5/mlx5_flow_hw.c        | 292 ++++++++++++++++++++++++-
 6 files changed, 533 insertions(+), 2 deletions(-)

diff --git a/doc/guides/nics/mlx5.rst b/doc/guides/nics/mlx5.rst
index 0e0169c8bb..7b04e9bac5 100644
--- a/doc/guides/nics/mlx5.rst
+++ b/doc/guides/nics/mlx5.rst
@@ -487,6 +487,12 @@ Limitations
   from the reference "Clock Queue" completions,
   the scheduled send timestamps should not be specified with non-zero MSB.
 
+- HW steering:
+
+  - WQE based high scaling and safer flow insertion/destruction.
+  - Set ``dv_flow_en`` to 2 in order to enable HW steering.
+  - Async queue-based ``rte_flow_q`` APIs supported only.
+
 Statistics
 ----------
 
diff --git a/doc/guides/rel_notes/release_22_03.rst b/doc/guides/rel_notes/release_22_03.rst
index 512c727e81..375df6ba74 100644
--- a/doc/guides/rel_notes/release_22_03.rst
+++ b/doc/guides/rel_notes/release_22_03.rst
@@ -162,6 +162,12 @@ New Features
 
   * Added LED OEM support.
 
+* **Updated Mellanox mlx5 driver.**
+
+  Updated the Mellanox mlx5 driver with new features and improvements, including:
+
+  * Added WQE based hardware steering support with ``rte_flow_q`` API.
+
 * **Added an API for private user data in asymmetric crypto session.**
 
   An API was added to get/set an asymmetric crypto session's user data.
 
diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index f523173ad5..d94e98db77 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -340,7 +340,7 @@ enum {
 /* HW steering flow management job descriptor. */
 struct mlx5_hw_q_job {
 	uint32_t type; /* Job type. */
-	struct rte_flow *flow; /* Flow attached to the job. */
+	struct rte_flow_hw *flow; /* Flow attached to the job. */
 	void *user_data; /* Job user data. */
 };
 
diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c
index ee7fc35e1a..ad131c1b22 100644
--- a/drivers/net/mlx5/mlx5_flow.c
+++ b/drivers/net/mlx5/mlx5_flow.c
@@ -850,6 +850,34 @@ static int
 mlx5_flow_table_destroy(struct rte_eth_dev *dev,
 			struct rte_flow_template_table *table,
 			struct rte_flow_error *error);
+static struct rte_flow *
+mlx5_flow_async_flow_create(struct rte_eth_dev *dev,
+			    uint32_t queue,
+			    const struct rte_flow_op_attr *attr,
+			    struct rte_flow_template_table *table,
+			    const struct rte_flow_item items[],
+			    uint8_t pattern_template_index,
+			    const struct rte_flow_action actions[],
+			    uint8_t action_template_index,
+			    void *user_data,
+			    struct rte_flow_error *error);
+static int
+mlx5_flow_async_flow_destroy(struct rte_eth_dev *dev,
+			     uint32_t queue,
+			     const struct rte_flow_op_attr *attr,
+			     struct rte_flow *flow,
+			     void *user_data,
+			     struct rte_flow_error *error);
+static int
+mlx5_flow_pull(struct rte_eth_dev *dev,
+	       uint32_t queue,
+	       struct rte_flow_op_result res[],
+	       uint16_t n_res,
+	       struct rte_flow_error *error);
+static int
+mlx5_flow_push(struct rte_eth_dev *dev,
+	       uint32_t queue,
+	       struct rte_flow_error *error);
 
 static const struct rte_flow_ops mlx5_flow_ops = {
 	.validate = mlx5_flow_validate,
@@ -879,6 +907,10 @@ static const struct rte_flow_ops mlx5_flow_ops = {
 	.actions_template_destroy = mlx5_flow_actions_template_destroy,
 	.template_table_create = mlx5_flow_table_create,
 	.template_table_destroy = mlx5_flow_table_destroy,
+	.async_create = mlx5_flow_async_flow_create,
+	.async_destroy = mlx5_flow_async_flow_destroy,
+	.pull = mlx5_flow_pull,
+	.push = mlx5_flow_push,
 };
 
 /* Tunnel information. */
@@ -8171,6 +8203,162 @@ mlx5_flow_table_destroy(struct rte_eth_dev *dev,
 	return fops->template_table_destroy(dev, table, error);
 }
 
+/**
+ * Enqueue flow creation.
+ *
+ * @param[in] dev
+ *   Pointer to the rte_eth_dev structure.
+ * @param[in] queue_id
+ *   The queue to create the flow.
+ * @param[in] attr
+ *   Pointer to the flow operation attributes.
+ * @param[in] items
+ *   Items with flow spec value.
+ * @param[in] pattern_template_index
+ *   The item pattern flow follows from the table.
+ * @param[in] actions
+ *   Action with flow spec value.
+ * @param[in] action_template_index
+ *   The action pattern flow follows from the table.
+ * @param[in] user_data
+ *   Pointer to the user_data.
+ * @param[out] error
+ *   Pointer to error structure.
+ *
+ * @return
+ *   Flow pointer on success, NULL otherwise and rte_errno is set.
+ */
+static struct rte_flow *
+mlx5_flow_async_flow_create(struct rte_eth_dev *dev,
+			    uint32_t queue_id,
+			    const struct rte_flow_op_attr *attr,
+			    struct rte_flow_template_table *table,
+			    const struct rte_flow_item items[],
+			    uint8_t pattern_template_index,
+			    const struct rte_flow_action actions[],
+			    uint8_t action_template_index,
+			    void *user_data,
+			    struct rte_flow_error *error)
+{
+	const struct mlx5_flow_driver_ops *fops;
+
+	if (flow_get_drv_type(dev, NULL) != MLX5_FLOW_TYPE_HW) {
+		rte_flow_error_set(error, ENOTSUP,
+				   RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+				   NULL,
+				   "flow_q create with incorrect steering mode");
+		return NULL;
+	}
+	fops = flow_get_drv_ops(MLX5_FLOW_TYPE_HW);
+	return fops->async_flow_create(dev, queue_id, attr, table,
+				       items, pattern_template_index,
+				       actions, action_template_index,
+				       user_data, error);
+}
+
+/**
+ * Enqueue flow destruction.
+ *
+ * @param[in] dev
+ *   Pointer to the rte_eth_dev structure.
+ * @param[in] queue
+ *   The queue to destroy the flow.
+ * @param[in] attr
+ *   Pointer to the flow operation attributes.
+ * @param[in] flow
+ *   Pointer to the flow to be destroyed.
+ * @param[in] user_data
+ *   Pointer to the user_data.
+ * @param[out] error
+ *   Pointer to error structure.
+ *
+ * @return
+ *   0 on success, negative value otherwise and rte_errno is set.
+ */
+static int
+mlx5_flow_async_flow_destroy(struct rte_eth_dev *dev,
+			     uint32_t queue,
+			     const struct rte_flow_op_attr *attr,
+			     struct rte_flow *flow,
+			     void *user_data,
+			     struct rte_flow_error *error)
+{
+	const struct mlx5_flow_driver_ops *fops;
+
+	if (flow_get_drv_type(dev, NULL) != MLX5_FLOW_TYPE_HW)
+		return rte_flow_error_set(error, ENOTSUP,
+					  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+					  NULL,
+					  "flow_q destroy with incorrect steering mode");
+	fops = flow_get_drv_ops(MLX5_FLOW_TYPE_HW);
+	return fops->async_flow_destroy(dev, queue, attr, flow,
+					user_data, error);
+}
+
+/**
+ * Pull the enqueued flows.
+ *
+ * @param[in] dev
+ *   Pointer to the rte_eth_dev structure.
+ * @param[in] queue
+ *   The queue to pull the result.
+ * @param[in,out] res
+ *   Array to save the results.
+ * @param[in] n_res
+ *   Available result slots in the array.
+ * @param[out] error
+ *   Pointer to error structure.
+ *
+ * @return
+ *   Result number on success, negative value otherwise and rte_errno is set.
+ */
+static int
+mlx5_flow_pull(struct rte_eth_dev *dev,
+	       uint32_t queue,
+	       struct rte_flow_op_result res[],
+	       uint16_t n_res,
+	       struct rte_flow_error *error)
+{
+	const struct mlx5_flow_driver_ops *fops;
+
+	if (flow_get_drv_type(dev, NULL) != MLX5_FLOW_TYPE_HW)
+		return rte_flow_error_set(error, ENOTSUP,
+					  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+					  NULL,
+					  "flow_q pull with incorrect steering mode");
+	fops = flow_get_drv_ops(MLX5_FLOW_TYPE_HW);
+	return fops->pull(dev, queue, res, n_res, error);
+}
+
+/**
+ * Push the enqueued flows.
+ *
+ * @param[in] dev
+ *   Pointer to the rte_eth_dev structure.
+ * @param[in] queue
+ *   The queue to push the flows.
+ * @param[out] error
+ *   Pointer to error structure.
+ *
+ * @return
+ *   0 on success, negative value otherwise and rte_errno is set.
+ */
+static int
+mlx5_flow_push(struct rte_eth_dev *dev,
+	       uint32_t queue,
+	       struct rte_flow_error *error)
+{
+	const struct mlx5_flow_driver_ops *fops;
+
+	if (flow_get_drv_type(dev, NULL) != MLX5_FLOW_TYPE_HW)
+		return rte_flow_error_set(error, ENOTSUP,
+					  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+					  NULL,
+					  "flow_q push with incorrect steering mode");
+	fops = flow_get_drv_ops(MLX5_FLOW_TYPE_HW);
+	return fops->push(dev, queue, error);
+}
+
 /**
  * Allocate a new memory for the counter values wrapped by all the needed
  * management.
diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h
index 1579036f58..3add4c4a81 100644
--- a/drivers/net/mlx5/mlx5_flow.h
+++ b/drivers/net/mlx5/mlx5_flow.h
@@ -1017,6 +1017,13 @@ struct rte_flow {
 
 #if defined(HAVE_IBV_FLOW_DV_SUPPORT) || !defined(HAVE_INFINIBAND_VERBS_H)
 
+/* HWS flow struct. */
+struct rte_flow_hw {
+	uint32_t idx; /* Flow index from indexed pool. */
+	struct rte_flow_template_table *table; /* The table flow is allocated from. */
+	struct mlx5dr_rule rule; /* HWS layer data struct. */
+} __rte_packed;
+
 /* Flow item template struct. */
 struct rte_flow_pattern_template {
 	LIST_ENTRY(rte_flow_pattern_template) next;
@@ -1371,6 +1378,34 @@ typedef int (*mlx5_flow_table_destroy_t)
 			(struct rte_eth_dev *dev,
 			 struct rte_flow_template_table *table,
 			 struct rte_flow_error *error);
+typedef struct rte_flow *(*mlx5_flow_async_flow_create_t)
+			(struct rte_eth_dev *dev,
+			 uint32_t queue,
+			 const struct rte_flow_op_attr *attr,
+			 struct rte_flow_template_table *table,
+			 const struct rte_flow_item items[],
+			 uint8_t pattern_template_index,
+			 const struct rte_flow_action actions[],
+			 uint8_t action_template_index,
+			 void *user_data,
+			 struct rte_flow_error *error);
+typedef int (*mlx5_flow_async_flow_destroy_t)
+			(struct rte_eth_dev *dev,
+			 uint32_t queue,
+			 const struct rte_flow_op_attr *attr,
+			 struct rte_flow *flow,
+			 void *user_data,
+			 struct rte_flow_error *error);
+typedef int (*mlx5_flow_pull_t)
+			(struct rte_eth_dev *dev,
+			 uint32_t queue,
+			 struct rte_flow_op_result res[],
+			 uint16_t n_res,
+			 struct rte_flow_error *error);
+typedef int (*mlx5_flow_push_t)
+			(struct rte_eth_dev *dev,
+			 uint32_t queue,
+			 struct rte_flow_error *error);
 
 struct mlx5_flow_driver_ops {
 	mlx5_flow_validate_t validate;
@@ -1417,6 +1452,10 @@ struct mlx5_flow_driver_ops {
 	mlx5_flow_actions_template_destroy_t actions_template_destroy;
 	mlx5_flow_table_create_t template_table_create;
 	mlx5_flow_table_destroy_t template_table_destroy;
+	mlx5_flow_async_flow_create_t async_flow_create;
+	mlx5_flow_async_flow_destroy_t async_flow_destroy;
+	mlx5_flow_pull_t pull;
+	mlx5_flow_push_t push;
 };
 
 /* mlx5_flow.c */
@@ -1587,6 +1626,8 @@ mlx5_translate_tunnel_etypes(uint64_t pattern_flags)
 	return 0;
 }
 
+int flow_hw_q_flow_flush(struct rte_eth_dev *dev,
+			 struct rte_flow_error *error);
 int mlx5_flow_group_to_table(struct rte_eth_dev *dev,
 			     const struct mlx5_flow_tunnel *tunnel,
 			     uint32_t group, uint32_t *table,
diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c
index 8cb1ef842a..accc3a96d9 100644
--- a/drivers/net/mlx5/mlx5_flow_hw.c
+++ b/drivers/net/mlx5/mlx5_flow_hw.c
@@ -10,6 +10,9 @@
 
 #if defined(HAVE_IBV_FLOW_DV_SUPPORT) || !defined(HAVE_INFINIBAND_VERBS_H)
 
+/* The maximum number of actions supported in a flow. */
+#define MLX5_HW_MAX_ACTS 16
+
 const struct mlx5_flow_driver_ops mlx5_flow_hw_drv_ops;
 
 /* DR action flags with different table. */
@@ -105,6 +108,289 @@ flow_hw_actions_translate(struct rte_eth_dev *dev,
 	return 0;
 }
 
+/**
+ * Construct the flow action array.
+ *
+ * For action templates that contain dynamic actions, these actions need
+ * to be updated according to the rte_flow actions during flow creation.
+ *
+ * @param[in] hw_acts
+ *   Pointer to translated actions from template.
+ * @param[in] actions
+ *   Array of rte_flow actions to be checked.
+ * @param[out] rule_acts
+ *   Array of DR rule actions to be used during flow creation.
+ * @param[out] acts_num
+ *   Pointer to the number of actions actually constructed.
+ *
+ * @return
+ *   0 on success, negative value otherwise and rte_errno is set.
+ */
+static __rte_always_inline int
+flow_hw_actions_construct(struct mlx5_hw_actions *hw_acts,
+			  const struct rte_flow_action actions[],
+			  struct mlx5dr_rule_action *rule_acts,
+			  uint32_t *acts_num)
+{
+	bool actions_end = false;
+	uint32_t i;
+
+	for (i = 0; !actions_end && i < MLX5_HW_MAX_ACTS; actions++) {
+		switch (actions->type) {
+		case RTE_FLOW_ACTION_TYPE_INDIRECT:
+			break;
+		case RTE_FLOW_ACTION_TYPE_VOID:
+			break;
+		case RTE_FLOW_ACTION_TYPE_DROP:
+			rule_acts[i++].action = hw_acts->drop;
+			break;
+		case RTE_FLOW_ACTION_TYPE_END:
+			actions_end = true;
+			break;
+		default:
+			break;
+		}
+	}
+	*acts_num = i;
+	return 0;
+}
+
+/**
+ * Enqueue HW steering flow creation.
+ *
+ * The flow will be applied to the HW only if the postpone bit is not set or
+ * the extra push function is called.
+ * The flow creation status should be checked from the dequeue result.
+ *
+ * @param[in] dev
+ *   Pointer to the rte_eth_dev structure.
+ * @param[in] queue
+ *   The queue to create the flow.
+ * @param[in] attr
+ *   Pointer to the flow operation attributes.
+ * @param[in] items
+ *   Items with flow spec value.
+ * @param[in] pattern_template_index
+ *   The item pattern flow follows from the table.
+ * @param[in] actions
+ *   Action with flow spec value.
+ * @param[in] action_template_index
+ *   The action pattern flow follows from the table.
+ * @param[in] user_data
+ *   Pointer to the user_data.
+ * @param[out] error
+ *   Pointer to error structure.
+ *
+ * @return
+ *   Flow pointer on success, NULL otherwise and rte_errno is set.
+ */
+static struct rte_flow *
+flow_hw_async_flow_create(struct rte_eth_dev *dev,
+			  uint32_t queue,
+			  const struct rte_flow_op_attr *attr,
+			  struct rte_flow_template_table *table,
+			  const struct rte_flow_item items[],
+			  uint8_t pattern_template_index,
+			  const struct rte_flow_action actions[],
+			  uint8_t action_template_index,
+			  void *user_data,
+			  struct rte_flow_error *error)
+{
+	struct mlx5_priv *priv = dev->data->dev_private;
+	struct mlx5dr_rule_attr rule_attr = {
+		.queue_id = queue,
+		.user_data = user_data,
+		.burst = attr->postpone,
+	};
+	struct mlx5dr_rule_action rule_acts[MLX5_HW_MAX_ACTS];
+	struct mlx5_hw_actions *hw_acts;
+	struct rte_flow_hw *flow;
+	struct mlx5_hw_q_job *job;
+	uint32_t acts_num, flow_idx;
+	int ret;
+
+	if (unlikely(!priv->hw_q[queue].job_idx)) {
+		rte_errno = ENOMEM;
+		goto error;
+	}
+	flow = mlx5_ipool_zmalloc(table->flow, &flow_idx);
+	if (!flow)
+		goto error;
+	/*
+	 * Set the table here in order to know the destination table
+	 * when freeing the flow afterwards.
+	 */
+	flow->table = table;
+	flow->idx = flow_idx;
+	job = priv->hw_q[queue].job[--priv->hw_q[queue].job_idx];
+	/*
+	 * Set the job type here in order to know if the flow memory
+	 * should be freed or not when getting the result from dequeue.
+	 */
+	job->type = MLX5_HW_Q_JOB_TYPE_CREATE;
+	job->flow = flow;
+	job->user_data = user_data;
+	rule_attr.user_data = job;
+	hw_acts = &table->ats[action_template_index].acts;
+	/* Construct the flow action array based on the input actions. */
+	flow_hw_actions_construct(hw_acts, actions, rule_acts, &acts_num);
+	ret = mlx5dr_rule_create(table->matcher,
+				 pattern_template_index, items,
+				 rule_acts, acts_num,
+				 &rule_attr, &flow->rule);
+	if (likely(!ret))
+		return (struct rte_flow *)flow;
+	/* Flow creation failed, return the job descriptor and flow memory. */
+	mlx5_ipool_free(table->flow, flow_idx);
+	priv->hw_q[queue].job_idx++;
+error:
+	rte_flow_error_set(error, rte_errno,
+			   RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
+			   "fail to create rte flow");
+	return NULL;
+}
+
+/**
+ * Enqueue HW steering flow destruction.
+ *
+ * The flow will be applied to the HW only if the postpone bit is not set or
+ * the extra push function is called.
+ * The flow destruction status should be checked from the dequeue result.
+ *
+ * @param[in] dev
+ *   Pointer to the rte_eth_dev structure.
+ * @param[in] queue
+ *   The queue to destroy the flow.
+ * @param[in] attr
+ *   Pointer to the flow operation attributes.
+ * @param[in] flow
+ *   Pointer to the flow to be destroyed.
+ * @param[in] user_data
+ *   Pointer to the user_data.
+ * @param[out] error
+ *   Pointer to error structure.
+ *
+ * @return
+ *   0 on success, negative value otherwise and rte_errno is set.
+ */
+static int
+flow_hw_async_flow_destroy(struct rte_eth_dev *dev,
+			   uint32_t queue,
+			   const struct rte_flow_op_attr *attr,
+			   struct rte_flow *flow,
+			   void *user_data,
+			   struct rte_flow_error *error)
+{
+	struct mlx5_priv *priv = dev->data->dev_private;
+	struct mlx5dr_rule_attr rule_attr = {
+		.queue_id = queue,
+		.user_data = user_data,
+		.burst = attr->postpone,
+	};
+	struct rte_flow_hw *fh = (struct rte_flow_hw *)flow;
+	struct mlx5_hw_q_job *job;
+	int ret;
+
+	if (unlikely(!priv->hw_q[queue].job_idx)) {
+		rte_errno = ENOMEM;
+		goto error;
+	}
+	job = priv->hw_q[queue].job[--priv->hw_q[queue].job_idx];
+	job->type = MLX5_HW_Q_JOB_TYPE_DESTROY;
+	job->user_data = user_data;
+	job->flow = fh;
+	rule_attr.user_data = job;
+	ret = mlx5dr_rule_destroy(&fh->rule, &rule_attr);
+	if (likely(!ret))
+		return 0;
+	priv->hw_q[queue].job_idx++;
+error:
+	return rte_flow_error_set(error, rte_errno,
+				  RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
+				  "fail to destroy rte flow");
+}
+
+/**
+ * Pull the enqueued flows.
+ *
+ * For flows enqueued for creation/destruction, the status should be
+ * checked from the dequeue result.
+ *
+ * @param[in] dev
+ *   Pointer to the rte_eth_dev structure.
+ * @param[in] queue
+ *   The queue to pull the result.
+ * @param[in,out] res
+ *   Array to save the results.
+ * @param[in] n_res
+ *   Available result slots in the array.
+ * @param[out] error
+ *   Pointer to error structure.
+ *
+ * @return
+ *   Result number on success, negative value otherwise and rte_errno is set.
+ */
+static int
+flow_hw_pull(struct rte_eth_dev *dev,
+	     uint32_t queue,
+	     struct rte_flow_op_result res[],
+	     uint16_t n_res,
+	     struct rte_flow_error *error)
+{
+	struct mlx5_priv *priv = dev->data->dev_private;
+	struct mlx5_hw_q_job *job;
+	int ret, i;
+
+	ret = mlx5dr_send_queue_poll(priv->dr_ctx, queue, res, n_res);
+	if (ret < 0)
+		return rte_flow_error_set(error, rte_errno,
+					  RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
+					  "fail to query flow queue");
+	for (i = 0; i < ret; i++) {
+		job = (struct mlx5_hw_q_job *)res[i].user_data;
+		/* Restore user data. */
+		res[i].user_data = job->user_data;
+		if (job->type == MLX5_HW_Q_JOB_TYPE_DESTROY)
+			mlx5_ipool_free(job->flow->table->flow, job->flow->idx);
+		priv->hw_q[queue].job[priv->hw_q[queue].job_idx++] = job;
+	}
+	return ret;
+}
+
+/**
+ * Push the enqueued flows to HW.
+ *
+ * Force apply all the enqueued flows to the HW.
+ *
+ * @param[in] dev
+ *   Pointer to the rte_eth_dev structure.
+ * @param[in] queue
+ *   The queue to push the flows.
+ * @param[out] error
+ *   Pointer to error structure.
+ *
+ * @return
+ *   0 on success, negative value otherwise and rte_errno is set.
+ */
+static int
+flow_hw_push(struct rte_eth_dev *dev,
+	     uint32_t queue,
+	     struct rte_flow_error *error)
+{
+	struct mlx5_priv *priv = dev->data->dev_private;
+	int ret;
+
+	ret = mlx5dr_send_queue_action(priv->dr_ctx, queue,
+				       MLX5DR_SEND_QUEUE_ACTION_DRAIN);
+	if (ret) {
+		rte_flow_error_set(error, rte_errno,
+				   RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
+				   "fail to push flows");
+		return ret;
+	}
+	return 0;
+}
+
 /**
  * Create flow table.
  *
@@ -152,7 +438,7 @@ flow_hw_table_create(struct rte_eth_dev *dev,
 		.data = &flow_attr,
 	};
 	struct mlx5_indexed_pool_config cfg = {
-		.size = sizeof(struct rte_flow),
+		.size = sizeof(struct rte_flow_hw),
 		.trunk_size = 1 << 12,
 		.per_core_cache = 1 << 13,
 		.need_lock = 1,
@@ -894,6 +1180,10 @@ const struct mlx5_flow_driver_ops mlx5_flow_hw_drv_ops = {
 	.actions_template_destroy = flow_hw_actions_template_destroy,
 	.template_table_create = flow_hw_table_create,
 	.template_table_destroy = flow_hw_table_destroy,
+	.async_flow_create = flow_hw_async_flow_create,
+	.async_flow_destroy = flow_hw_async_flow_destroy,
+	.pull = flow_hw_pull,
+	.push = flow_hw_push,
 };
 
 #endif
-- 
2.25.1
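
As a schematic model (the names mirror the patch, but this is an
illustration with simplified stand-in types, not the driver code), the
per-queue job descriptors behave like a fixed-size LIFO free list:
enqueue operations pop a descriptor and fail with ENOMEM when none is
left, and flow_hw_pull() pushes it back once the corresponding CQE has
been polled, which is also the only point where a destroyed flow's
memory may be returned to the indexed pool.

#include <stdint.h>
#include <stddef.h>

/* Simplified stand-ins for the driver types. */
struct mlx5_hw_q_job { uint32_t type; void *flow; void *user_data; };

struct mlx5_hw_q {
	uint32_t job_idx;           /* Free descriptors remaining. */
	struct mlx5_hw_q_job **job; /* LIFO stack of free descriptors. */
};

/* Pop a free job descriptor; NULL means the queue is full, where the
 * enqueue path reports ENOMEM, as flow_hw_async_flow_create() does. */
static struct mlx5_hw_q_job *
job_pop(struct mlx5_hw_q *q)
{
	return q->job_idx ? q->job[--q->job_idx] : NULL;
}

/* Return a descriptor after its CQE was polled, as flow_hw_pull()
 * does; only now is it safe to free a destroyed flow's memory. */
static void
job_push(struct mlx5_hw_q *q, struct mlx5_hw_q_job *job)
{
	q->job[q->job_idx++] = job;
}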