From mboxrd@z Thu Jan 1 00:00:00 1970
From: Alexander Kozyrev
Date: Wed, 6 Oct 2021 07:48:35 +0300
Message-ID: <20211006044835.3936226-4-akozyrev@nvidia.com>
In-Reply-To: <20211006044835.3936226-1-akozyrev@nvidia.com>
References: <20211006044835.3936226-1-akozyrev@nvidia.com>
Subject: [dpdk-dev] [PATCH 3/3] ethdev: add async queue-based flow rules operations
List-Id: DPDK patches and discussions

A new, faster, queue-based flow rules management mechanism is needed
for applications offloading rules inside the datapath. This asynchronous
and lockless mechanism frees the CPU for further packet processing and
reduces the performance impact of flow rule creation/destruction on the
datapath. Note that queues are not thread-safe, so queue-based operations
can be safely invoked without locks only from a single thread.

The rte_flow_q_flow_create() function enqueues a flow creation operation
to the requested queue. It benefits from already configured resources and
sets unique values on top of item and action templates.
A flow rule is enqueued on the specified flow queue and offloaded
asynchronously to the hardware. The function returns immediately to
spare CPU for further packet processing. The application must invoke
the rte_flow_q_dequeue() function to complete the flow rule operation
offloading, to clear the queue, and to receive the operation status.
The rte_flow_q_flow_destroy() function enqueues a flow destruction to
the requested queue.

Signed-off-by: Alexander Kozyrev
Suggested-by: Ori Kam
---
 lib/ethdev/rte_flow.h | 288 ++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 288 insertions(+)

diff --git a/lib/ethdev/rte_flow.h b/lib/ethdev/rte_flow.h
index ba3204b17e..8cdffd8d2e 100644
--- a/lib/ethdev/rte_flow.h
+++ b/lib/ethdev/rte_flow.h
@@ -4298,6 +4298,13 @@ struct rte_flow_port_attr {
 	 * Version of the struct layout, should be 0.
 	 */
 	uint32_t version;
+	/**
+	 * Number of flow queues to be configured.
+	 * Flow queues are used for asynchronous flow rule creation/destruction.
+	 * The order of operations is not guaranteed inside a queue.
+	 * Flow queues are not thread-safe.
+	 */
+	uint16_t nb_queues;
 	/**
 	 * Memory size allocated for the flow rules management.
 	 * If set to 0, memory is allocated dynamically.
@@ -4330,6 +4337,21 @@ struct rte_flow_port_attr {
 	uint32_t fixed_resource_size:1;
 };
 
+/**
+ * Flow engine queue configuration.
+ */
+__extension__
+struct rte_flow_queue_attr {
+	/**
+	 * Version of the struct layout, should be 0.
+	 */
+	uint32_t version;
+	/**
+	 * Number of flow rule operations a queue can hold.
+	 */
+	uint32_t size;
+};
+
 /**
  * @warning
  * @b EXPERIMENTAL: this API may change without prior notice.
@@ -4346,6 +4368,8 @@ struct rte_flow_port_attr {
  *   Port identifier of Ethernet device.
  * @param[in] port_attr
  *   Port configuration attributes.
+ * @param[in] queue_attr
+ *   Array that holds attributes for each queue.
  * @param[out] error
  *   Perform verbose error reporting if not NULL.
  *   PMDs initialize this structure in case of error only.
@@ -4357,6 +4381,7 @@ __rte_experimental
 int
 rte_flow_configure(uint16_t port_id,
 		   const struct rte_flow_port_attr *port_attr,
+		   const struct rte_flow_queue_attr *queue_attr[],
 		   struct rte_flow_error *error);
 
 /**
@@ -4626,6 +4651,269 @@ __rte_experimental
 int
 rte_flow_table_destroy(uint16_t port_id,
 		       struct rte_flow_table *table,
 		       struct rte_flow_error *error);
+
+/**
+ * Queue operation attributes.
+ */
+__extension__
+struct rte_flow_q_ops_attr {
+	/**
+	 * Version of the struct layout, should be 0.
+	 */
+	uint32_t version;
+	/**
+	 * The user data that will be returned on the completion.
+	 */
+	void *user_data;
+	/**
+	 * When set, the requested operation must be sent to the HW without
+	 * any delay. Any prior requests must also be sent to the HW.
+	 * If this bit is cleared, the application must call the
+	 * rte_flow_q_flush() API to actually send the requests to the HW.
+	 */
+	uint32_t flush:1;
+};
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Enqueue rule creation operation.
+ *
+ * @param port_id
+ *   Port identifier of Ethernet device.
+ * @param queue
+ *   Flow queue used to insert the rule.
+ * @param[in] attr
+ *   Rule creation operation attributes.
+ * @param[in] table
+ *   Table to select templates from.
+ * @param[in] items
+ *   List of pattern items to be used.
+ *   The list order should match the order in the item template.
+ *   The spec is the only relevant member of the item that is being used.
+ * @param[in] item_template_index
+ *   Item template index in the table.
+ * @param[in] actions
+ *   List of actions to be used.
+ *   The list order should match the order in the action template.
+ * @param[in] action_template_index
+ *   Action template index in the table.
+ * @param[out] error
+ *   Perform verbose error reporting if not NULL.
+ *   PMDs initialize this structure in case of error only.
+ *
+ * @return
+ *   Handle on success, NULL otherwise and rte_errno is set.
+ *   The rule handle does not mean that the rule was offloaded;
+ *   only the completion result indicates that the rule was offloaded.
+ */
+__rte_experimental
+struct rte_flow *
+rte_flow_q_flow_create(uint16_t port_id, uint32_t queue,
+		       const struct rte_flow_q_ops_attr *attr,
+		       const struct rte_flow_table *table,
+		       const struct rte_flow_item items[],
+		       uint8_t item_template_index,
+		       const struct rte_flow_action actions[],
+		       uint8_t action_template_index,
+		       struct rte_flow_error *error);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Enqueue rule destruction operation.
+ *
+ * This function enqueues a destruction operation on the queue.
+ * The application should assume that after calling this function
+ * the rule handle is not valid anymore.
+ * Completion indicates the full removal of the rule from the HW.
+ *
+ * @param port_id
+ *   Port identifier of Ethernet device.
+ * @param queue
+ *   Flow queue which is used to destroy the rule.
+ *   This must match the queue on which the rule was created.
+ * @param[in] attr
+ *   Rule destroy operation attributes.
+ * @param[in] flow
+ *   Flow handle to be destroyed.
+ * @param[out] error
+ *   Perform verbose error reporting if not NULL.
+ *   PMDs initialize this structure in case of error only.
+ *
+ * @return
+ *   0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+__rte_experimental
+int
+rte_flow_q_flow_destroy(uint16_t port_id, uint32_t queue,
+			struct rte_flow_q_ops_attr *attr,
+			struct rte_flow *flow,
+			struct rte_flow_error *error);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Enqueue indirect action object creation operation.
+ * @see rte_flow_action_handle_create
+ *
+ * @param[in] port_id
+ *   Port identifier of Ethernet device.
+ * @param[in] queue
+ *   Flow queue which is used to create the object.
+ * @param[in] attr
+ *   Queue operation attributes.
+ * @param[in] conf
+ *   Action configuration for the indirect action object creation.
+ * @param[in] action
+ *   Specific configuration of the indirect action object.
+ * @param[out] error
+ *   Perform verbose error reporting if not NULL.
+ *   PMDs initialize this structure in case of error only.
+ *
+ * @return
+ *   - (0) if success.
+ *   - (-ENODEV) if *port_id* invalid.
+ *   - (-ENOSYS) if underlying device does not support this functionality.
+ *   - (-EIO) if underlying device is removed.
+ *   - (-ENOENT) if action pointed by *action* handle was not found.
+ *   - (-EBUSY) if action pointed by *action* handle still used by some rules.
+ *   rte_errno is also set.
+ */
+__rte_experimental
+struct rte_flow_action_handle *
+rte_flow_q_action_handle_create(uint16_t port_id, uint32_t queue,
+		const struct rte_flow_q_ops_attr *attr,
+		const struct rte_flow_indir_action_conf *conf,
+		const struct rte_flow_action *action,
+		struct rte_flow_error *error);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Enqueue indirect action object destruction operation.
+ * The destroy queue must be the same
+ * as the queue on which the object was created.
+ *
+ * @param[in] port_id
+ *   Port identifier of Ethernet device.
+ * @param[in] queue
+ *   Flow queue which is used to destroy the object.
+ * @param[in] attr
+ *   Queue operation attributes.
+ * @param[in] handle
+ *   Handle for the indirect action object to be destroyed.
+ * @param[out] error
+ *   Perform verbose error reporting if not NULL.
+ *   PMDs initialize this structure in case of error only.
+ *
+ * @return
+ *   - (0) if success.
+ *   - (-ENODEV) if *port_id* invalid.
+ *   - (-ENOSYS) if underlying device does not support this functionality.
+ *   - (-EIO) if underlying device is removed.
+ *   - (-ENOENT) if action pointed by *handle* was not found.
+ *   - (-EBUSY) if action pointed by *handle* still used by some rules.
+ *   rte_errno is also set.
+ */
+__rte_experimental
+int
+rte_flow_q_action_handle_destroy(uint16_t port_id, uint32_t queue,
+		struct rte_flow_q_ops_attr *attr,
+		struct rte_flow_action_handle *handle,
+		struct rte_flow_error *error);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Flush all internally stored rules to the HW.
+ * Non-flushed rules are rules that were inserted without the flush flag set.
+ * Can be used to notify the HW about a batch of rules prepared by the SW
+ * to reduce the number of communications between the HW and SW.
+ *
+ * @param port_id
+ *   Port identifier of Ethernet device.
+ * @param queue
+ *   Flow queue to be flushed.
+ * @param[out] error
+ *   Perform verbose error reporting if not NULL.
+ *   PMDs initialize this structure in case of error only.
+ *
+ * @return
+ *   0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+__rte_experimental
+int
+rte_flow_q_flush(uint16_t port_id, uint32_t queue,
+		 struct rte_flow_error *error);
+
+/**
+ * Dequeue operation status.
+ */
+enum rte_flow_q_op_status {
+	/**
+	 * The operation was completed successfully.
+	 */
+	RTE_FLOW_Q_OP_SUCCESS,
+	/**
+	 * The operation was not completed successfully.
+	 */
+	RTE_FLOW_Q_OP_ERROR,
+};
+
+/**
+ * Dequeue operation result.
+ */
+struct rte_flow_q_op_res {
+	/**
+	 * Version of the struct layout, should be 0.
+	 */
+	uint32_t version;
+	/**
+	 * Returns the status of the operation that this completion signals.
+	 */
+	enum rte_flow_q_op_status status;
+	/**
+	 * User data that was supplied during operation submission.
+	 */
+	void *user_data;
+};
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Dequeue rte flow operation results.
+ * The application must invoke this function in order to complete
+ * the flow rule offloading and to receive the flow rule operation status.
+ *
+ * @param port_id
+ *   Port identifier of Ethernet device.
+ * @param queue
+ *   Flow queue which is used to dequeue the operation.
+ * @param[out] res
+ *   Array of results that will be set.
+ * @param[in] n_res
+ *   Maximum number of results that can be returned.
+ *   This value is equal to the size of the res array.
+ * @param[out] error
+ *   Perform verbose error reporting if not NULL.
+ *   PMDs initialize this structure in case of error only.
+ *
+ * @return
+ *   Number of results that were dequeued,
+ *   a negative errno value otherwise and rte_errno is set.
+ */
+__rte_experimental
+int
+rte_flow_q_dequeue(uint16_t port_id, uint32_t queue,
+		   struct rte_flow_q_op_res res[], uint16_t n_res,
+		   struct rte_flow_error *error);
 
 #ifdef __cplusplus
 }
 #endif
-- 
2.18.2