From mboxrd@z Thu Jan 1 00:00:00 1970
From: Li Zhang
Date: Tue, 27 Apr 2021 13:43:53 +0300
Message-ID: <20210427104354.4112-4-lizh@nvidia.com>
X-Mailer: git-send-email 2.21.0
In-Reply-To: <20210427104354.4112-1-lizh@nvidia.com>
References: <20210401081624.1482490-1-lizh@nvidia.com>
 <20210427104354.4112-1-lizh@nvidia.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain
Subject: [dpdk-dev] [PATCH v8 3/4] net/mlx5: prepare sub-policy for a flow with meter
List-Id: DPDK patches and discussions

When a flow has an RSS action, the driver splits it into sub-flows, and each
sub-flow is finally configured with a different HW TIR action. An RSS action
configured in the meter policy may cause the same split in the flow
configuration.

To keep performance, each TIR action is configured in a different flow table,
so the policy can be split into sub-policies, one per TIR, at flow creation
time.

Create a function to prepare the policy and its sub-policies for a flow
configured with a meter.
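Note on the mechanism (illustrative only): the new DV helper decides whether
an already-created sub-policy can be reused by comparing, per color, the hrxq
(TIR) index requested by the flow against the one stored in each existing
sub-policy. The standalone sketch below mirrors only that lookup; COLORS and
the two structs are simplified stand-ins, not the mlx5 driver definitions.

/*
 * Standalone sketch (not driver code): how the per-color hrxq/TIR match
 * decides whether an already-created sub-policy can be reused.
 */
#include <stdint.h>
#include <stdio.h>

#define COLORS 2                       /* stand-in for MLX5_MTR_RTE_COLORS */

struct sub_policy {
	uint32_t rix_hrxq[COLORS];     /* hrxq (TIR) index per color */
};

struct policy {
	uint16_t sub_policy_num;       /* sub-policies created so far */
	struct sub_policy *sub_policys[8];
};

/*
 * Return an existing sub-policy whose hrxq matches for every color that
 * carries an RSS descriptor, or NULL when a new one has to be created.
 */
static struct sub_policy *
find_sub_policy(struct policy *p, const uint32_t hrxq_idx[COLORS],
		const int has_rss[COLORS])
{
	uint16_t i;
	int c;

	for (i = 0; i < p->sub_policy_num; i++) {
		for (c = 0; c < COLORS; c++)
			if (has_rss[c] &&
			    hrxq_idx[c] != p->sub_policys[i]->rix_hrxq[c])
				break;
		if (c >= COLORS)       /* all requested colors matched */
			return p->sub_policys[i];
	}
	return NULL;
}

int
main(void)
{
	struct sub_policy s0 = { .rix_hrxq = { 5, 7 } };
	struct policy p = { .sub_policy_num = 1, .sub_policys = { &s0 } };
	const uint32_t want[COLORS] = { 5, 7 };
	const int has_rss[COLORS] = { 1, 1 };

	printf("reuse existing sub-policy: %s\n",
	       find_sub_policy(&p, want, has_rss) ? "yes" : "no");
	return 0;
}

In the patch itself the comparison walks mtr_policy->sub_policys[domain][],
and the per-domain sub-policy count is decoded with the
MLX5_MTR_SUB_POLICY_NUM_SHIFT/MASK macros, as seen in the diff below.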
Signed-off-by: Li Zhang
Acked-by: Matan Azrad
---
 drivers/net/mlx5/mlx5_flow.h    |  10 +++
 drivers/net/mlx5/mlx5_flow_dv.c | 145 ++++++++++++++++++++++++++++++++
 2 files changed, 155 insertions(+)

diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h
index 98f6132332..a80c7903a2 100644
--- a/drivers/net/mlx5/mlx5_flow.h
+++ b/drivers/net/mlx5/mlx5_flow.h
@@ -1094,6 +1094,11 @@ typedef int (*mlx5_flow_create_mtr_tbls_t)(struct rte_eth_dev *dev,
 typedef void (*mlx5_flow_destroy_mtr_tbls_t)(struct rte_eth_dev *dev,
 					     struct mlx5_flow_meter_info *fm);
 typedef void (*mlx5_flow_destroy_mtr_drop_tbls_t)(struct rte_eth_dev *dev);
+typedef struct mlx5_flow_meter_sub_policy *
+	(*mlx5_flow_meter_sub_policy_rss_prepare_t)
+		(struct rte_eth_dev *dev,
+		struct mlx5_flow_meter_policy *mtr_policy,
+		struct mlx5_flow_rss_desc *rss_desc[MLX5_MTR_RTE_COLORS]);
 typedef uint32_t (*mlx5_flow_mtr_alloc_t)
 					    (struct rte_eth_dev *dev);
 typedef void (*mlx5_flow_mtr_free_t)(struct rte_eth_dev *dev,
@@ -1186,6 +1191,7 @@ struct mlx5_flow_driver_ops {
 	mlx5_flow_destroy_policy_rules_t destroy_policy_rules;
 	mlx5_flow_create_def_policy_t create_def_policy;
 	mlx5_flow_destroy_def_policy_t destroy_def_policy;
+	mlx5_flow_meter_sub_policy_rss_prepare_t meter_sub_policy_rss_prepare;
 	mlx5_flow_counter_alloc_t counter_alloc;
 	mlx5_flow_counter_free_t counter_free;
 	mlx5_flow_counter_query_t counter_query;
@@ -1417,6 +1423,10 @@ int mlx5_flow_create_mtr_tbls(struct rte_eth_dev *dev,
 void mlx5_flow_destroy_mtr_tbls(struct rte_eth_dev *dev,
 				struct mlx5_flow_meter_info *fm);
 void mlx5_flow_destroy_mtr_drop_tbls(struct rte_eth_dev *dev);
+struct mlx5_flow_meter_sub_policy *mlx5_flow_meter_sub_policy_rss_prepare
+		(struct rte_eth_dev *dev,
+		struct mlx5_flow_meter_policy *mtr_policy,
+		struct mlx5_flow_rss_desc *rss_desc[MLX5_MTR_RTE_COLORS]);
 int mlx5_flow_dv_discover_counter_offset_support(struct rte_eth_dev *dev);
 int mlx5_action_handle_flush(struct rte_eth_dev *dev);
 void mlx5_release_tunnel_hub(struct mlx5_dev_ctx_shared *sh, uint16_t port_id);
diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c
index 6e2a3e85f7..6bccdf5b16 100644
--- a/drivers/net/mlx5/mlx5_flow_dv.c
+++ b/drivers/net/mlx5/mlx5_flow_dv.c
@@ -14874,6 +14874,150 @@ flow_dv_create_mtr_tbls(struct rte_eth_dev *dev,
 	return -1;
 }
 
+/**
+ * Find the policy table for a prefix table with RSS.
+ *
+ * @param[in] dev
+ *   Pointer to Ethernet device.
+ * @param[in] mtr_policy
+ *   Pointer to meter policy table.
+ * @param[in] rss_desc
+ *   Pointer to the RSS descriptors, one per color.
+ * @return
+ *   Pointer to table set on success, NULL otherwise and rte_errno is set.
+ */
+static struct mlx5_flow_meter_sub_policy *
+flow_dv_meter_sub_policy_rss_prepare(struct rte_eth_dev *dev,
+		struct mlx5_flow_meter_policy *mtr_policy,
+		struct mlx5_flow_rss_desc *rss_desc[MLX5_MTR_RTE_COLORS])
+{
+	struct mlx5_priv *priv = dev->data->dev_private;
+	struct mlx5_flow_meter_sub_policy *sub_policy = NULL;
+	uint32_t sub_policy_idx = 0;
+	uint32_t hrxq_idx[MLX5_MTR_RTE_COLORS] = {0};
+	uint32_t i, j;
+	struct mlx5_hrxq *hrxq;
+	struct mlx5_flow_handle dh;
+	struct mlx5_meter_policy_action_container *act_cnt;
+	uint32_t domain = MLX5_MTR_DOMAIN_INGRESS;
+	uint16_t sub_policy_num;
+
+	rte_spinlock_lock(&mtr_policy->sl);
+	for (i = 0; i < MLX5_MTR_RTE_COLORS; i++) {
+		if (!rss_desc[i])
+			continue;
+		hrxq_idx[i] = mlx5_hrxq_get(dev, rss_desc[i]);
+		if (!hrxq_idx[i]) {
+			rte_spinlock_unlock(&mtr_policy->sl);
+			return NULL;
+		}
+	}
+	sub_policy_num = (mtr_policy->sub_policy_num >>
+			(MLX5_MTR_SUB_POLICY_NUM_SHIFT * domain)) &
+			MLX5_MTR_SUB_POLICY_NUM_MASK;
+	for (i = 0; i < sub_policy_num; i++) {
+		for (j = 0; j < MLX5_MTR_RTE_COLORS; j++) {
+			if (rss_desc[j] &&
+			    hrxq_idx[j] !=
+			    mtr_policy->sub_policys[domain][i]->rix_hrxq[j])
+				break;
+		}
+		if (j >= MLX5_MTR_RTE_COLORS) {
+			/*
+			 * Found the sub policy table with
+			 * the same queue per color.
+			 */
+			rte_spinlock_unlock(&mtr_policy->sl);
+			for (j = 0; j < MLX5_MTR_RTE_COLORS; j++)
+				mlx5_hrxq_release(dev, hrxq_idx[j]);
+			return mtr_policy->sub_policys[domain][i];
+		}
+	}
+	/* Create sub policy. */
+	if (!mtr_policy->sub_policys[domain][0]->rix_hrxq[0]) {
+		/* Reuse the first dummy sub_policy. */
+		sub_policy = mtr_policy->sub_policys[domain][0];
+		sub_policy_idx = sub_policy->idx;
+	} else {
+		sub_policy = mlx5_ipool_zmalloc
+				(priv->sh->ipool[MLX5_IPOOL_MTR_POLICY],
+				&sub_policy_idx);
+		if (!sub_policy ||
+		    sub_policy_idx > MLX5_MAX_SUB_POLICY_TBL_NUM) {
+			for (i = 0; i < MLX5_MTR_RTE_COLORS; i++)
+				mlx5_hrxq_release(dev, hrxq_idx[i]);
+			goto rss_sub_policy_error;
+		}
+		sub_policy->idx = sub_policy_idx;
+		sub_policy->main_policy = mtr_policy;
+	}
+	for (i = 0; i < MLX5_MTR_RTE_COLORS; i++) {
+		if (!rss_desc[i])
+			continue;
+		sub_policy->rix_hrxq[i] = hrxq_idx[i];
+		/*
+		 * Overwrite the last action from
+		 * RSS action to Queue action.
+		 */
+		hrxq = mlx5_ipool_get(priv->sh->ipool[MLX5_IPOOL_HRXQ],
+				hrxq_idx[i]);
+		if (!hrxq) {
+			DRV_LOG(ERR, "Failed to create policy hrxq");
+			goto rss_sub_policy_error;
+		}
+		act_cnt = &mtr_policy->act_cnt[i];
+		if (act_cnt->rix_mark || act_cnt->modify_hdr) {
+			memset(&dh, 0, sizeof(struct mlx5_flow_handle));
+			if (act_cnt->rix_mark)
+				dh.mark = 1;
+			dh.fate_action = MLX5_FLOW_FATE_QUEUE;
+			dh.rix_hrxq = hrxq_idx[i];
+			flow_drv_rxq_flags_set(dev, &dh);
+		}
+	}
+	if (__flow_dv_create_policy_acts_rules(dev, mtr_policy,
+					sub_policy, domain)) {
+		DRV_LOG(ERR, "Failed to create policy "
+			"rules per domain.");
+		goto rss_sub_policy_error;
+	}
+	if (sub_policy != mtr_policy->sub_policys[domain][0]) {
+		i = (mtr_policy->sub_policy_num >>
+			(MLX5_MTR_SUB_POLICY_NUM_SHIFT * domain)) &
+			MLX5_MTR_SUB_POLICY_NUM_MASK;
+		mtr_policy->sub_policys[domain][i] = sub_policy;
+		i++;
+		if (i > MLX5_MTR_RSS_MAX_SUB_POLICY)
+			goto rss_sub_policy_error;
+		mtr_policy->sub_policy_num &=
+			~(MLX5_MTR_SUB_POLICY_NUM_MASK <<
+			(MLX5_MTR_SUB_POLICY_NUM_SHIFT * domain));
+		mtr_policy->sub_policy_num |=
+			(i & MLX5_MTR_SUB_POLICY_NUM_MASK) <<
+			(MLX5_MTR_SUB_POLICY_NUM_SHIFT * domain);
+	}
+	rte_spinlock_unlock(&mtr_policy->sl);
+	return sub_policy;
+rss_sub_policy_error:
+	if (sub_policy) {
+		__flow_dv_destroy_sub_policy_rules(dev, sub_policy);
+		if (sub_policy != mtr_policy->sub_policys[domain][0]) {
+			i = (mtr_policy->sub_policy_num >>
+				(MLX5_MTR_SUB_POLICY_NUM_SHIFT * domain)) &
+				MLX5_MTR_SUB_POLICY_NUM_MASK;
+			mtr_policy->sub_policys[domain][i] = NULL;
+			mlx5_ipool_free
+				(priv->sh->ipool[MLX5_IPOOL_MTR_POLICY],
+				sub_policy->idx);
+		}
+	}
+	if (sub_policy_idx)
+		mlx5_ipool_free(priv->sh->ipool[MLX5_IPOOL_MTR_POLICY],
+				sub_policy_idx);
+	rte_spinlock_unlock(&mtr_policy->sl);
+	return NULL;
+}
+
 /**
  * Validate the batch counter support in root table.
  *
@@ -15464,6 +15608,7 @@ const struct mlx5_flow_driver_ops mlx5_flow_dv_drv_ops = {
 	.destroy_policy_rules = flow_dv_destroy_policy_rules,
 	.create_def_policy = flow_dv_create_def_policy,
 	.destroy_def_policy = flow_dv_destroy_def_policy,
+	.meter_sub_policy_rss_prepare = flow_dv_meter_sub_policy_rss_prepare,
 	.counter_alloc = flow_dv_counter_allocate,
 	.counter_free = flow_dv_counter_free,
 	.counter_query = flow_dv_counter_query,
-- 
2.27.0
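
A side note on the sub_policy_num bookkeeping at the tail of
flow_dv_meter_sub_policy_rss_prepare(): the count of sub-policies per meter
domain is kept as a small bit-field inside a single 32-bit word, cleared and
re-set with a mask and a per-domain shift. The minimal standalone sketch below
shows that round trip; SHIFT and MASK are assumed values for illustration, not
the real MLX5_MTR_SUB_POLICY_NUM_* macros.

/* Minimal sketch of a packed per-domain sub-policy counter (assumed widths). */
#include <stdint.h>
#include <stdio.h>

#define SHIFT 3      /* assumed bits per domain slot */
#define MASK  0x7

static uint32_t
set_count(uint32_t word, unsigned int domain, uint32_t count)
{
	word &= ~((uint32_t)MASK << (SHIFT * domain));   /* clear the slot */
	word |= (count & MASK) << (SHIFT * domain);      /* store new count */
	return word;
}

static uint32_t
get_count(uint32_t word, unsigned int domain)
{
	return (word >> (SHIFT * domain)) & MASK;
}

int
main(void)
{
	uint32_t word = 0;

	word = set_count(word, 0, 2);   /* e.g. ingress domain */
	word = set_count(word, 1, 1);   /* e.g. egress domain */
	printf("domain 0: %u, domain 1: %u\n",
	       get_count(word, 0), get_count(word, 1));
	return 0;
}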