From: Gregory Etelson <getelson@nvidia.com>
To: dev@dpdk.org
CC: Viacheslav Ovsiienko, Matan Azrad
Subject: [PATCH v3 2/5] net/mlx5: remove code duplication
Date: Sun, 7 May 2023 10:39:49 +0300
Message-ID: <20230507073952.4061-3-getelson@nvidia.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20230507073952.4061-1-getelson@nvidia.com>
References: <20230118125556.23622-1-getelson@nvidia.com>
 <20230507073952.4061-1-getelson@nvidia.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain

Replace duplicated code with dedicated functions.

The job allocation and completion handling that was open-coded in each
indirect action handler is moved into the new flow_hw_action_job_init()
and flow_hw_action_finalize() helpers.

Signed-off-by: Gregory Etelson
Acked-by: Viacheslav Ovsiienko
---
 drivers/net/mlx5/mlx5.h         |   6 +-
 drivers/net/mlx5/mlx5_flow_hw.c | 182 ++++++++++++++++----------
 2 files changed, 95 insertions(+), 93 deletions(-)

diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index 18ac90dfe2..c12149b7e7 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -346,11 +346,11 @@ struct mlx5_lb_ctx {
 };
 
 /* HW steering queue job descriptor type. */
-enum {
+enum mlx5_hw_job_type {
 	MLX5_HW_Q_JOB_TYPE_CREATE, /* Flow create job type. */
 	MLX5_HW_Q_JOB_TYPE_DESTROY, /* Flow destroy job type. */
-	MLX5_HW_Q_JOB_TYPE_UPDATE,
-	MLX5_HW_Q_JOB_TYPE_QUERY,
+	MLX5_HW_Q_JOB_TYPE_UPDATE, /* Flow update job type. */
+	MLX5_HW_Q_JOB_TYPE_QUERY, /* Flow query job type. */
 };
 
 #define MLX5_HW_MAX_ITEMS (16)
diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c
index 2a51d3ee19..350b4d99cf 100644
--- a/drivers/net/mlx5/mlx5_flow_hw.c
+++ b/drivers/net/mlx5/mlx5_flow_hw.c
@@ -7970,6 +7970,67 @@ flow_hw_action_handle_validate(struct rte_eth_dev *dev, uint32_t queue,
 	return 0;
 }
 
+static __rte_always_inline bool
+flow_hw_action_push(const struct rte_flow_op_attr *attr)
+{
+	return attr ? !attr->postpone : true;
+}
+
+static __rte_always_inline struct mlx5_hw_q_job *
+flow_hw_job_get(struct mlx5_priv *priv, uint32_t queue)
+{
+	return priv->hw_q[queue].job[--priv->hw_q[queue].job_idx];
+}
+
+static __rte_always_inline void
+flow_hw_job_put(struct mlx5_priv *priv, uint32_t queue)
+{
+	priv->hw_q[queue].job_idx++;
+}
+
+static __rte_always_inline struct mlx5_hw_q_job *
+flow_hw_action_job_init(struct mlx5_priv *priv, uint32_t queue,
+			const struct rte_flow_action_handle *handle,
+			void *user_data, void *query_data,
+			enum mlx5_hw_job_type type,
+			struct rte_flow_error *error)
+{
+	struct mlx5_hw_q_job *job;
+
+	MLX5_ASSERT(queue != MLX5_HW_INV_QUEUE);
+	if (unlikely(!priv->hw_q[queue].job_idx)) {
+		rte_flow_error_set(error, ENOMEM,
+				   RTE_FLOW_ERROR_TYPE_ACTION_NUM, NULL,
+				   "Action destroy failed due to queue full.");
+		return NULL;
+	}
+	job = flow_hw_job_get(priv, queue);
+	job->type = type;
+	job->action = handle;
+	job->user_data = user_data;
+	job->query.user = query_data;
+	return job;
+}
+
+static __rte_always_inline void
+flow_hw_action_finalize(struct rte_eth_dev *dev, uint32_t queue,
+			struct mlx5_hw_q_job *job,
+			bool push, bool aso, bool status)
+{
+	struct mlx5_priv *priv = dev->data->dev_private;
+	if (likely(status)) {
+		if (push)
+			__flow_hw_push_action(dev, queue);
+		if (!aso)
+			rte_ring_enqueue(push ?
+					 priv->hw_q[queue].indir_cq :
+					 priv->hw_q[queue].indir_iq,
+					 job);
+	} else {
+		flow_hw_job_put(priv, queue);
+	}
+}
+
 /**
  * Create shared action.
  *
@@ -8007,21 +8068,15 @@ flow_hw_action_handle_create(struct rte_eth_dev *dev, uint32_t queue,
 	cnt_id_t cnt_id;
 	uint32_t mtr_id;
 	uint32_t age_idx;
-	bool push = true;
+	bool push = flow_hw_action_push(attr);
 	bool aso = false;
 
 	if (attr) {
-		MLX5_ASSERT(queue != MLX5_HW_INV_QUEUE);
-		if (unlikely(!priv->hw_q[queue].job_idx)) {
-			rte_flow_error_set(error, ENOMEM,
-					   RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
-					   "Flow queue full.");
+		job = flow_hw_action_job_init(priv, queue, NULL, user_data,
+					      NULL, MLX5_HW_Q_JOB_TYPE_CREATE,
+					      error);
+		if (!job)
 			return NULL;
-		}
-		job = priv->hw_q[queue].job[--priv->hw_q[queue].job_idx];
-		job->type = MLX5_HW_Q_JOB_TYPE_CREATE;
-		job->user_data = user_data;
-		push = !attr->postpone;
 	}
 	switch (action->type) {
 	case RTE_FLOW_ACTION_TYPE_AGE:
@@ -8084,17 +8139,9 @@ flow_hw_action_handle_create(struct rte_eth_dev *dev, uint32_t queue,
 		break;
 	}
 	if (job) {
-		if (!handle) {
-			priv->hw_q[queue].job_idx++;
-			return NULL;
-		}
 		job->action = handle;
-		if (push)
-			__flow_hw_push_action(dev, queue);
-		if (aso)
-			return handle;
-		rte_ring_enqueue(push ? priv->hw_q[queue].indir_cq :
-				 priv->hw_q[queue].indir_iq, job);
+		flow_hw_action_finalize(dev, queue, job, push, aso,
+					handle != NULL);
 	}
 	return handle;
 }
@@ -8142,19 +8189,15 @@ flow_hw_action_handle_update(struct rte_eth_dev *dev, uint32_t queue,
 	uint32_t type = act_idx >> MLX5_INDIRECT_ACTION_TYPE_OFFSET;
 	uint32_t idx = act_idx & ((1u << MLX5_INDIRECT_ACTION_TYPE_OFFSET) - 1);
 	int ret = 0;
-	bool push = true;
+	bool push = flow_hw_action_push(attr);
 	bool aso = false;
 
 	if (attr) {
-		MLX5_ASSERT(queue != MLX5_HW_INV_QUEUE);
-		if (unlikely(!priv->hw_q[queue].job_idx))
-			return rte_flow_error_set(error, ENOMEM,
-				RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
-				"Action update failed due to queue full.");
-		job = priv->hw_q[queue].job[--priv->hw_q[queue].job_idx];
-		job->type = MLX5_HW_Q_JOB_TYPE_UPDATE;
-		job->user_data = user_data;
-		push = !attr->postpone;
+		job = flow_hw_action_job_init(priv, queue, handle, user_data,
+					      NULL, MLX5_HW_Q_JOB_TYPE_UPDATE,
+					      error);
+		if (!job)
+			return -rte_errno;
 	}
 	switch (type) {
 	case MLX5_INDIRECT_ACTION_TYPE_AGE:
@@ -8217,19 +8260,8 @@ flow_hw_action_handle_update(struct rte_eth_dev *dev, uint32_t queue,
 			"action type not supported");
 		break;
 	}
-	if (job) {
-		if (ret) {
-			priv->hw_q[queue].job_idx++;
-			return ret;
-		}
-		job->action = handle;
-		if (push)
-			__flow_hw_push_action(dev, queue);
-		if (aso)
-			return 0;
-		rte_ring_enqueue(push ? priv->hw_q[queue].indir_cq :
-				 priv->hw_q[queue].indir_iq, job);
-	}
+	if (job)
+		flow_hw_action_finalize(dev, queue, job, push, aso, ret == 0);
 	return ret;
 }
 
@@ -8268,20 +8300,16 @@ flow_hw_action_handle_destroy(struct rte_eth_dev *dev, uint32_t queue,
 	struct mlx5_hw_q_job *job = NULL;
 	struct mlx5_aso_mtr *aso_mtr;
 	struct mlx5_flow_meter_info *fm;
-	bool push = true;
+	bool push = flow_hw_action_push(attr);
 	bool aso = false;
 	int ret = 0;
 
 	if (attr) {
-		MLX5_ASSERT(queue != MLX5_HW_INV_QUEUE);
-		if (unlikely(!priv->hw_q[queue].job_idx))
-			return rte_flow_error_set(error, ENOMEM,
-				RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
-				"Action destroy failed due to queue full.");
-		job = priv->hw_q[queue].job[--priv->hw_q[queue].job_idx];
-		job->type = MLX5_HW_Q_JOB_TYPE_DESTROY;
-		job->user_data = user_data;
-		push = !attr->postpone;
+		job = flow_hw_action_job_init(priv, queue, handle, user_data,
+					      NULL, MLX5_HW_Q_JOB_TYPE_DESTROY,
+					      error);
+		if (!job)
+			return -rte_errno;
 	}
 	switch (type) {
 	case MLX5_INDIRECT_ACTION_TYPE_AGE:
@@ -8344,19 +8372,8 @@ flow_hw_action_handle_destroy(struct rte_eth_dev *dev, uint32_t queue,
 			"action type not supported");
 		break;
 	}
-	if (job) {
-		if (ret) {
-			priv->hw_q[queue].job_idx++;
-			return ret;
-		}
-		job->action = handle;
-		if (push)
-			__flow_hw_push_action(dev, queue);
-		if (aso)
-			return ret;
-		rte_ring_enqueue(push ? priv->hw_q[queue].indir_cq :
-				 priv->hw_q[queue].indir_iq, job);
-	}
+	if (job)
+		flow_hw_action_finalize(dev, queue, job, push, aso, ret == 0);
 	return ret;
 }
 
@@ -8595,19 +8612,15 @@ flow_hw_action_handle_query(struct rte_eth_dev *dev, uint32_t queue,
 	uint32_t type = act_idx >> MLX5_INDIRECT_ACTION_TYPE_OFFSET;
 	uint32_t age_idx = act_idx & MLX5_HWS_AGE_IDX_MASK;
 	int ret;
-	bool push = true;
+	bool push = flow_hw_action_push(attr);
 	bool aso = false;
 
 	if (attr) {
-		MLX5_ASSERT(queue != MLX5_HW_INV_QUEUE);
-		if (unlikely(!priv->hw_q[queue].job_idx))
-			return rte_flow_error_set(error, ENOMEM,
-				RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
-				"Action destroy failed due to queue full.");
-		job = priv->hw_q[queue].job[--priv->hw_q[queue].job_idx];
-		job->type = MLX5_HW_Q_JOB_TYPE_QUERY;
-		job->user_data = user_data;
-		push = !attr->postpone;
+		job = flow_hw_action_job_init(priv, queue, handle, user_data,
+					      data, MLX5_HW_Q_JOB_TYPE_QUERY,
+					      error);
+		if (!job)
+			return -rte_errno;
 	}
 	switch (type) {
 	case MLX5_INDIRECT_ACTION_TYPE_AGE:
@@ -8630,19 +8643,8 @@ flow_hw_action_handle_query(struct rte_eth_dev *dev, uint32_t queue,
 			"action type not supported");
 		break;
 	}
-	if (job) {
-		if (ret) {
-			priv->hw_q[queue].job_idx++;
-			return ret;
-		}
-		job->action = handle;
-		if (push)
-			__flow_hw_push_action(dev, queue);
-		if (aso)
-			return ret;
-		rte_ring_enqueue(push ? priv->hw_q[queue].indir_cq :
-				 priv->hw_q[queue].indir_iq, job);
-	}
+	if (job)
+		flow_hw_action_finalize(dev, queue, job, push, aso, ret == 0);
 	return 0;
 }
-- 
2.34.1
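
For reference, a small standalone C sketch of the job life cycle that
flow_hw_action_job_init() and flow_hw_action_finalize() consolidate:
reserve a job slot from the per-queue pool, fill it, then on completion
either push/enqueue the job or return the slot on failure. It is
illustration only; the queue model, the type names, and the main()
driver are simplified stand-ins, not the mlx5 structures or the
driver API.

/*
 * sketch.c - standalone illustration only, NOT driver code.
 * Models the job life cycle behind the new helpers.
 */
#include <stdbool.h>
#include <stdio.h>

struct op_attr { bool postpone; };          /* stand-in for struct rte_flow_op_attr */
struct job { int type; void *user_data; };  /* stand-in for struct mlx5_hw_q_job */

struct queue {
	struct job pool[4];
	int free_slots;     /* plays the role of hw_q[queue].job_idx */
	int enqueued;       /* plays the role of the indir_cq/indir_iq rings */
};

/* Same decision as flow_hw_action_push(): no attr means push immediately. */
static bool action_push(const struct op_attr *attr)
{
	return attr ? !attr->postpone : true;
}

/* Like flow_hw_action_job_init(): reserve and fill a job, or fail cleanly. */
static struct job *job_init(struct queue *q, int type, void *user_data)
{
	if (q->free_slots == 0)
		return NULL;                    /* queue full */
	struct job *j = &q->pool[--q->free_slots];
	j->type = type;
	j->user_data = user_data;
	return j;
}

/* Like flow_hw_action_finalize(): enqueue on success, release the slot on failure. */
static void job_finalize(struct queue *q, struct job *j, bool push, bool ok)
{
	(void)j;
	(void)push;          /* real code picks indir_cq vs. indir_iq based on push */
	if (ok)
		q->enqueued++;
	else
		q->free_slots++; /* what flow_hw_job_put() does */
}

int main(void)
{
	struct queue q = { .free_slots = 4 };
	struct op_attr attr = { .postpone = false };
	bool push = action_push(&attr);
	struct job *j = job_init(&q, 1, NULL);

	if (j != NULL)
		job_finalize(&q, j, push, true);
	printf("free slots: %d, queued jobs: %d\n", q.free_slots, q.enqueued);
	return 0;
}

With the helpers in place, each indirect-action handler in the patch
follows the same shape: job_init, the type-specific switch, then
finalize, instead of repeating the slot bookkeeping and ring selection
inline.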