From: Suanming Mou
To: Matan Azrad, Viacheslav Ovsiienko, Ray Kinsella
Cc: Dariusz Sosnowski, Xueming Li
Subject: [PATCH v6 16/18] net/mlx5: support device control for E-Switch default rule
Date: Thu, 20 Oct 2022 18:41:50 +0300
Message-ID: <20221020154152.28228-17-suanmingm@nvidia.com>
X-Mailer: git-send-email 2.18.1
In-Reply-To: <20221020154152.28228-1-suanmingm@nvidia.com>
References: <20220923144334.27736-1-suanmingm@nvidia.com> <20221020154152.28228-1-suanmingm@nvidia.com>
List-Id: DPDK patches and discussions

From: Dariusz Sosnowski

This patch adds support for the fdb_def_rule_en device argument in HW
Steering, which controls:

- creation of the default FDB jump flow rule;
- the ability of the user to create transfer flow rules in the root table.

Signed-off-by: Dariusz Sosnowski
Signed-off-by: Xueming Li
---
 doc/guides/nics/features/mlx5.ini |   1 +
 drivers/net/mlx5/linux/mlx5_os.c  |  14 ++
 drivers/net/mlx5/mlx5.h           |   4 +-
 drivers/net/mlx5/mlx5_flow.c      |  20 +--
 drivers/net/mlx5/mlx5_flow.h      |   5 +-
 drivers/net/mlx5/mlx5_flow_dv.c   |  62 ++++---
 drivers/net/mlx5/mlx5_flow_hw.c   | 273 +++++++++++++++---------------
 drivers/net/mlx5/mlx5_trigger.c   |  31 ++--
 drivers/net/mlx5/mlx5_tx.h        |   1 +
 drivers/net/mlx5/mlx5_txq.c       |  47 +++++
 drivers/net/mlx5/rte_pmd_mlx5.h   |  17 ++
 drivers/net/mlx5/version.map      |   1 +
 12 files changed, 288 insertions(+), 188 deletions(-)
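Note (not part of the patch): a minimal sketch of how the new device argument
might be supplied at probe time. The PCI address and the surrounding
application code are illustrative assumptions only:

  #include <rte_eal.h>
  #include <rte_common.h>

  /*
   * Illustrative only: probe an mlx5 device with HW Steering (dv_flow_en=2)
   * and the default FDB jump rule disabled (fdb_def_rule_en=0).
   * The PCI address 0000:08:00.0 is a placeholder.
   */
  int
  main(int argc, char **argv)
  {
      char *eal_argv[] = {
          argv[0],
          "-a", "0000:08:00.0,dv_flow_en=2,fdb_def_rule_en=0",
      };

      (void)argc;
      if (rte_eal_init(RTE_DIM(eal_argv), eal_argv) < 0)
          return -1;
      /* ... configure and start ports as usual ... */
      return rte_eal_cleanup();
  }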
diff --git a/doc/guides/nics/features/mlx5.ini b/doc/guides/nics/features/mlx5.ini
index de4b109c31..0ac0fa9663 100644
--- a/doc/guides/nics/features/mlx5.ini
+++ b/doc/guides/nics/features/mlx5.ini
@@ -85,6 +85,7 @@ vxlan = Y
 vxlan_gpe = Y
 represented_port = Y
 meter_color = Y
+port_representor = Y
 
 [rte_flow actions]
 age = I
diff --git a/drivers/net/mlx5/linux/mlx5_os.c b/drivers/net/mlx5/linux/mlx5_os.c
index 5f1fd9b4e7..a6cb802500 100644
--- a/drivers/net/mlx5/linux/mlx5_os.c
+++ b/drivers/net/mlx5/linux/mlx5_os.c
@@ -1567,6 +1567,20 @@ mlx5_dev_spawn(struct rte_device *dpdk_dev,
     rte_rwlock_init(&priv->ind_tbls_lock);
     if (priv->sh->config.dv_flow_en == 2) {
 #ifdef HAVE_MLX5_HWS_SUPPORT
+        if (priv->sh->config.dv_esw_en) {
+            if (priv->sh->dv_regc0_mask == UINT32_MAX) {
+                DRV_LOG(ERR, "E-Switch port metadata is required when using HWS "
+                             "but it is disabled (configure it through devlink)");
+                err = ENOTSUP;
+                goto error;
+            }
+            if (priv->sh->dv_regc0_mask == 0) {
+                DRV_LOG(ERR, "E-Switch with HWS is not supported "
+                             "(no available bits in reg_c[0])");
+                err = ENOTSUP;
+                goto error;
+            }
+        }
         if (priv->vport_meta_mask)
             flow_hw_set_port_info(eth_dev);
         if (priv->sh->config.dv_esw_en &&
diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index 42a1e206c0..a715df693e 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -2028,7 +2028,7 @@ int mlx5_flow_ops_get(struct rte_eth_dev *dev, const struct rte_flow_ops **ops);
 int mlx5_flow_start_default(struct rte_eth_dev *dev);
 void mlx5_flow_stop_default(struct rte_eth_dev *dev);
 int mlx5_flow_verify(struct rte_eth_dev *dev);
-int mlx5_ctrl_flow_source_queue(struct rte_eth_dev *dev, uint32_t queue);
+int mlx5_ctrl_flow_source_queue(struct rte_eth_dev *dev, uint32_t sq_num);
 int mlx5_ctrl_flow_vlan(struct rte_eth_dev *dev,
                         struct rte_flow_item_eth *eth_spec,
                         struct rte_flow_item_eth *eth_mask,
@@ -2040,7 +2040,7 @@ int mlx5_ctrl_flow(struct rte_eth_dev *dev,
 int mlx5_flow_lacp_miss(struct rte_eth_dev *dev);
 struct rte_flow *mlx5_flow_create_esw_table_zero_flow(struct rte_eth_dev *dev);
 uint32_t mlx5_flow_create_devx_sq_miss_flow(struct rte_eth_dev *dev,
-                                            uint32_t txq);
+                                            uint32_t sq_num);
 void mlx5_flow_async_pool_query_handle(struct mlx5_dev_ctx_shared *sh,
                                        uint64_t async_id, int status);
 void mlx5_set_query_alarm(struct mlx5_dev_ctx_shared *sh);
diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c
index 9121b90b4e..01ad1f774b 100644
--- a/drivers/net/mlx5/mlx5_flow.c
+++ b/drivers/net/mlx5/mlx5_flow.c
@@ -7159,14 +7159,14 @@ mlx5_flow_create_esw_table_zero_flow(struct rte_eth_dev *dev)
 *
 * @param dev
 *   Pointer to Ethernet device.
- * @param txq
- *   Txq index.
+ * @param sq_num
+ *   SQ number.
 *
 * @return
 *   Flow ID on success, 0 otherwise and rte_errno is set.
 */
 uint32_t
-mlx5_flow_create_devx_sq_miss_flow(struct rte_eth_dev *dev, uint32_t txq)
+mlx5_flow_create_devx_sq_miss_flow(struct rte_eth_dev *dev, uint32_t sq_num)
 {
     struct rte_flow_attr attr = {
         .group = 0,
@@ -7178,8 +7178,8 @@ mlx5_flow_create_devx_sq_miss_flow(struct rte_eth_dev *dev, uint32_t sq_num)
     struct rte_flow_item_port_id port_spec = {
         .id = MLX5_PORT_ESW_MGR,
     };
-    struct mlx5_rte_flow_item_sq txq_spec = {
-        .queue = txq,
+    struct mlx5_rte_flow_item_sq sq_spec = {
+        .queue = sq_num,
     };
     struct rte_flow_item pattern[] = {
         {
@@ -7189,7 +7189,7 @@ mlx5_flow_create_devx_sq_miss_flow(struct rte_eth_dev *dev, uint32_t sq_num)
         {
             .type = (enum rte_flow_item_type)
                     MLX5_RTE_FLOW_ITEM_TYPE_SQ,
-            .spec = &txq_spec,
+            .spec = &sq_spec,
         },
         {
             .type = RTE_FLOW_ITEM_TYPE_END,
@@ -7560,22 +7560,22 @@ mlx5_flow_verify(struct rte_eth_dev *dev __rte_unused)
 *
 * @param dev
 *   Pointer to Ethernet device.
- * @param queue
- *   The queue index.
+ * @param sq_num
+ *   The SQ hw number.
 *
 * @return
 *   0 on success, a negative errno value otherwise and rte_errno is set.
 */
 int
 mlx5_ctrl_flow_source_queue(struct rte_eth_dev *dev,
-                            uint32_t queue)
+                            uint32_t sq_num)
 {
     const struct rte_flow_attr attr = {
         .egress = 1,
         .priority = 0,
     };
     struct mlx5_rte_flow_item_sq queue_spec = {
-        .queue = queue,
+        .queue = sq_num,
     };
     struct mlx5_rte_flow_item_sq queue_mask = {
         .queue = UINT32_MAX,
diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h
index 8ba3c2ddb1..1a4b33d592 100644
--- a/drivers/net/mlx5/mlx5_flow.h
+++ b/drivers/net/mlx5/mlx5_flow.h
@@ -116,7 +116,7 @@ struct mlx5_flow_action_copy_mreg {
 
 /* Matches on source queue. */
 struct mlx5_rte_flow_item_sq {
-    uint32_t queue;
+    uint32_t queue; /* DevX SQ number */
 };
 
 /* Feature name to allocate metadata register. */
@@ -2491,9 +2491,8 @@ int mlx5_flow_pick_transfer_proxy(struct rte_eth_dev *dev,
 
 int mlx5_flow_hw_flush_ctrl_flows(struct rte_eth_dev *dev);
 
-int mlx5_flow_hw_esw_create_mgr_sq_miss_flow(struct rte_eth_dev *dev);
 int mlx5_flow_hw_esw_create_sq_miss_flow(struct rte_eth_dev *dev,
-                                         uint32_t txq);
+                                         uint32_t sqn);
 int mlx5_flow_hw_esw_create_default_jump_flow(struct rte_eth_dev *dev);
 int mlx5_flow_hw_create_tx_default_mreg_copy_flow(struct rte_eth_dev *dev);
 int mlx5_flow_actions_validate(struct rte_eth_dev *dev,
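Note (not part of the patch): the mlx5_flow_dv.c changes below translate the
PORT_REPRESENTOR item into a source-vport match on the E-Switch manager. A
hypothetical sketch of a pattern using this item through the public rte_flow
API, where the port id value is a placeholder:

  #include <rte_flow.h>

  /* Match traffic entering the E-Switch from ethdev port 0 (transfer rule). */
  struct rte_flow_item_ethdev port_spec = { .port_id = 0 };
  struct rte_flow_item pattern[] = {
      { .type = RTE_FLOW_ITEM_TYPE_PORT_REPRESENTOR, .spec = &port_spec },
      { .type = RTE_FLOW_ITEM_TYPE_END },
  };
  struct rte_flow_attr attr = { .transfer = 1 };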
diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c
index 5c6ecc4a1a..dbe55a5103 100644
--- a/drivers/net/mlx5/mlx5_flow_dv.c
+++ b/drivers/net/mlx5/mlx5_flow_dv.c
@@ -10125,6 +10125,29 @@ flow_dv_translate_item_port_id(struct rte_eth_dev *dev, void *key,
     return 0;
 }
 
+/**
+ * Translate port representor item to eswitch match on port id.
+ *
+ * @param[in] dev
+ *   The device to configure through.
+ * @param[in, out] key
+ *   Flow matcher value.
+ * @param[in] key_type
+ *   Set flow matcher mask or value.
+ *
+ * @return
+ *   0 on success, a negative errno value otherwise.
+ */
+static int
+flow_dv_translate_item_port_representor(struct rte_eth_dev *dev, void *key,
+                                        uint32_t key_type)
+{
+    flow_dv_translate_item_source_vport(key,
+            key_type & MLX5_SET_MATCHER_V ?
+            mlx5_flow_get_esw_manager_vport_id(dev) : 0xffff);
+    return 0;
+}
+
 /**
  * Translate represented port item to eswitch match on port id.
  *
@@ -11404,10 +11427,10 @@ flow_dv_translate_create_counter(struct rte_eth_dev *dev,
 }
 
 /**
- * Add Tx queue matcher
+ * Add SQ matcher
 *
- * @param[in] dev
- *   Pointer to the dev struct.
+ * @param[in, out] matcher
+ *   Flow matcher.
 * @param[in, out] key
 *   Flow matcher value.
 * @param[in] item
@@ -11416,40 +11439,29 @@ flow_dv_translate_create_counter(struct rte_eth_dev *dev,
 *   Set flow matcher mask or value.
 */
 static void
-flow_dv_translate_item_tx_queue(struct rte_eth_dev *dev,
-                                void *key,
-                                const struct rte_flow_item *item,
-                                uint32_t key_type)
+flow_dv_translate_item_sq(void *key,
+                          const struct rte_flow_item *item,
+                          uint32_t key_type)
 {
     const struct mlx5_rte_flow_item_sq *queue_m;
     const struct mlx5_rte_flow_item_sq *queue_v;
     const struct mlx5_rte_flow_item_sq queue_mask = {
         .queue = UINT32_MAX,
     };
-    void *misc_v =
-        MLX5_ADDR_OF(fte_match_param, key, misc_parameters);
-    struct mlx5_txq_ctrl *txq = NULL;
+    void *misc_v = MLX5_ADDR_OF(fte_match_param, key, misc_parameters);
     uint32_t queue;
 
     MLX5_ITEM_UPDATE(item, key_type, queue_v, queue_m, &queue_mask);
     if (!queue_m || !queue_v)
         return;
     if (key_type & MLX5_SET_MATCHER_V) {
-        txq = mlx5_txq_get(dev, queue_v->queue);
-        if (!txq)
-            return;
-        if (txq->is_hairpin)
-            queue = txq->obj->sq->id;
-        else
-            queue = txq->obj->sq_obj.sq->id;
+        queue = queue_v->queue;
         if (key_type == MLX5_SET_MATCHER_SW_V)
             queue &= queue_m->queue;
     } else {
         queue = queue_m->queue;
     }
     MLX5_SET(fte_match_set_misc, misc_v, source_sqn, queue);
-    if (txq)
-        mlx5_txq_release(dev, queue_v->queue);
 }
 
 /**
@@ -13195,6 +13207,11 @@ flow_dv_translate_items(struct rte_eth_dev *dev,
                 (dev, key, items, wks->attr, key_type);
             last_item = MLX5_FLOW_ITEM_PORT_ID;
             break;
+        case RTE_FLOW_ITEM_TYPE_PORT_REPRESENTOR:
+            flow_dv_translate_item_port_representor
+                (dev, key, key_type);
+            last_item = MLX5_FLOW_ITEM_PORT_REPRESENTOR;
+            break;
         case RTE_FLOW_ITEM_TYPE_REPRESENTED_PORT:
             flow_dv_translate_item_represented_port
                 (dev, key, items, wks->attr, key_type);
@@ -13401,7 +13418,7 @@ flow_dv_translate_items(struct rte_eth_dev *dev,
             last_item = MLX5_FLOW_ITEM_TAG;
             break;
         case MLX5_RTE_FLOW_ITEM_TYPE_SQ:
-            flow_dv_translate_item_tx_queue(dev, key, items, key_type);
+            flow_dv_translate_item_sq(key, items, key_type);
             last_item = MLX5_FLOW_ITEM_SQ;
             break;
         case RTE_FLOW_ITEM_TYPE_GTP:
@@ -13611,7 +13628,6 @@ flow_dv_translate_items_sws(struct rte_eth_dev *dev,
             wks.last_item = tunnel ? MLX5_FLOW_ITEM_INNER_FLEX :
                                      MLX5_FLOW_ITEM_OUTER_FLEX;
             break;
-
         default:
             ret = flow_dv_translate_items(dev, items, &wks_m,
                 match_mask, MLX5_SET_MATCHER_SW_M, error);
@@ -13634,7 +13650,9 @@ flow_dv_translate_items_sws(struct rte_eth_dev *dev,
     * in use.
     */
    if (!(wks.item_flags & MLX5_FLOW_ITEM_PORT_ID) &&
-       !(wks.item_flags & MLX5_FLOW_ITEM_REPRESENTED_PORT) && priv->sh->esw_mode &&
+       !(wks.item_flags & MLX5_FLOW_ITEM_REPRESENTED_PORT) &&
+       !(wks.item_flags & MLX5_FLOW_ITEM_PORT_REPRESENTOR) &&
+       priv->sh->esw_mode &&
        !(attr->egress && !attr->transfer) &&
        attr->group != MLX5_FLOW_MREG_CP_TABLE_GROUP) {
        if (flow_dv_translate_item_port_id_all(dev, match_mask,
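Note (not part of the patch): the mlx5_flow_hw.c helpers below derive the
E-Switch manager marker from the reg_c[0] mask validated at probe time. A
worked sketch with an assumed mask value:

  #include <rte_common.h>
  #include <rte_bitops.h>

  /* Assume dv_regc0_mask = 0x00ff0000 (REG_C_0 bits available to the PMD). */
  uint32_t mask = 0x00ff0000;
  uint32_t marker = RTE_BIT32(rte_bsf32(mask));  /* 0x00010000: lowest available bit */
  unsigned int width = __builtin_popcount(mask); /* 8: width of the modify-field copy */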
diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c
index 07b58db044..1516ee9e25 100644
--- a/drivers/net/mlx5/mlx5_flow_hw.c
+++ b/drivers/net/mlx5/mlx5_flow_hw.c
@@ -3176,7 +3176,10 @@ flow_hw_translate_group(struct rte_eth_dev *dev,
     struct mlx5_priv *priv = dev->data->dev_private;
     const struct rte_flow_attr *flow_attr = &cfg->attr.flow_attr;
 
-    if (priv->sh->config.dv_esw_en && cfg->external && flow_attr->transfer) {
+    if (priv->sh->config.dv_esw_en &&
+        priv->fdb_def_rule &&
+        cfg->external &&
+        flow_attr->transfer) {
         if (group > MLX5_HW_MAX_TRANSFER_GROUP)
             return rte_flow_error_set(error, EINVAL,
                                       RTE_FLOW_ERROR_TYPE_ATTR_GROUP,
@@ -5140,14 +5143,23 @@ flow_hw_free_vport_actions(struct mlx5_priv *priv)
 }
 
 static uint32_t
-flow_hw_usable_lsb_vport_mask(struct mlx5_priv *priv)
+flow_hw_esw_mgr_regc_marker_mask(struct rte_eth_dev *dev)
 {
-    uint32_t usable_mask = ~priv->vport_meta_mask;
+    uint32_t mask = MLX5_SH(dev)->dv_regc0_mask;
 
-    if (usable_mask)
-        return (1 << rte_bsf32(usable_mask));
-    else
-        return 0;
+    /* Mask is verified during device initialization. */
+    MLX5_ASSERT(mask != 0);
+    return mask;
+}
+
+static uint32_t
+flow_hw_esw_mgr_regc_marker(struct rte_eth_dev *dev)
+{
+    uint32_t mask = MLX5_SH(dev)->dv_regc0_mask;
+
+    /* Mask is verified during device initialization. */
+    MLX5_ASSERT(mask != 0);
+    return RTE_BIT32(rte_bsf32(mask));
 }
 
 /**
@@ -5173,12 +5185,19 @@ flow_hw_create_ctrl_esw_mgr_pattern_template(struct rte_eth_dev *dev)
     struct rte_flow_item_ethdev port_mask = {
         .port_id = UINT16_MAX,
     };
+    struct mlx5_rte_flow_item_sq sq_mask = {
+        .queue = UINT32_MAX,
+    };
     struct rte_flow_item items[] = {
         {
             .type = RTE_FLOW_ITEM_TYPE_REPRESENTED_PORT,
             .spec = &port_spec,
             .mask = &port_mask,
         },
+        {
+            .type = (enum rte_flow_item_type)MLX5_RTE_FLOW_ITEM_TYPE_SQ,
+            .mask = &sq_mask,
+        },
         {
             .type = RTE_FLOW_ITEM_TYPE_END,
         },
@@ -5188,9 +5207,10 @@ flow_hw_create_ctrl_esw_mgr_pattern_template(struct rte_eth_dev *dev)
 }
 
 /**
- * Creates a flow pattern template used to match REG_C_0 and a TX queue.
- * Matching on REG_C_0 is set up to match on least significant bit usable
- * by user-space, which is set when packet was originated from E-Switch Manager.
+ * Creates a flow pattern template used to match REG_C_0 and a SQ.
+ * Matching on REG_C_0 is set up to match on all bits usable by user-space.
+ * If traffic was sent from E-Switch Manager, then all usable bits will be set to 0,
+ * except the least significant bit, which will be set to 1.
 *
 * This template is used to set up a table for SQ miss default flow.
 *
 * @param dev
 *   Pointer to Ethernet device.
 *
 * @return
 *   Pointer to flow pattern template on success. NULL otherwise.
 */
@@ -5203,8 +5223,6 @@ flow_hw_create_ctrl_esw_mgr_pattern_template(struct rte_eth_dev *dev)
 static struct rte_flow_pattern_template *
 flow_hw_create_ctrl_regc_sq_pattern_template(struct rte_eth_dev *dev)
 {
-    struct mlx5_priv *priv = dev->data->dev_private;
-    uint32_t marker_bit = flow_hw_usable_lsb_vport_mask(priv);
     struct rte_flow_pattern_template_attr attr = {
         .relaxed_matching = 0,
         .transfer = 1,
@@ -5214,6 +5232,7 @@ flow_hw_create_ctrl_regc_sq_pattern_template(struct rte_eth_dev *dev)
     };
     struct rte_flow_item_tag reg_c0_mask = {
         .index = 0xff,
+        .data = flow_hw_esw_mgr_regc_marker_mask(dev),
     };
     struct mlx5_rte_flow_item_sq queue_mask = {
         .queue = UINT32_MAX,
@@ -5235,12 +5254,6 @@ flow_hw_create_ctrl_regc_sq_pattern_template(struct rte_eth_dev *dev)
         },
     };
 
-    if (!marker_bit) {
-        DRV_LOG(ERR, "Unable to set up pattern template for SQ miss table");
-        return NULL;
-    }
-    reg_c0_spec.data = marker_bit;
-    reg_c0_mask.data = marker_bit;
     return flow_hw_pattern_template_create(dev, &attr, items, NULL);
 }
 
@@ -5332,9 +5345,8 @@ flow_hw_create_tx_default_mreg_copy_pattern_template(struct rte_eth_dev *dev)
 static struct rte_flow_actions_template *
 flow_hw_create_ctrl_regc_jump_actions_template(struct rte_eth_dev *dev)
 {
-    struct mlx5_priv *priv = dev->data->dev_private;
-    uint32_t marker_bit = flow_hw_usable_lsb_vport_mask(priv);
-    uint32_t marker_bit_mask = UINT32_MAX;
+    uint32_t marker_mask = flow_hw_esw_mgr_regc_marker_mask(dev);
+    uint32_t marker_bits = flow_hw_esw_mgr_regc_marker(dev);
     struct rte_flow_actions_template_attr attr = {
         .transfer = 1,
     };
@@ -5347,7 +5359,7 @@ flow_hw_create_ctrl_regc_jump_actions_template(struct rte_eth_dev *dev)
         .src = {
             .field = RTE_FLOW_FIELD_VALUE,
         },
-        .width = 1,
+        .width = __builtin_popcount(marker_mask),
     };
     struct rte_flow_action_modify_field set_reg_m = {
         .operation = RTE_FLOW_MODIFY_SET,
@@ -5394,13 +5406,9 @@ flow_hw_create_ctrl_regc_jump_actions_template(struct rte_eth_dev *dev)
         }
     };
 
-    if (!marker_bit) {
-        DRV_LOG(ERR, "Unable to set up actions template for SQ miss table");
-        return NULL;
-    }
-    set_reg_v.dst.offset = rte_bsf32(marker_bit);
-    rte_memcpy(set_reg_v.src.value, &marker_bit, sizeof(marker_bit));
-    rte_memcpy(set_reg_m.src.value, &marker_bit_mask, sizeof(marker_bit_mask));
+    set_reg_v.dst.offset = rte_bsf32(marker_mask);
+    rte_memcpy(set_reg_v.src.value, &marker_bits, sizeof(marker_bits));
+    rte_memcpy(set_reg_m.src.value, &marker_mask, sizeof(marker_mask));
     return flow_hw_actions_template_create(dev, &attr, actions_v,
                                            actions_m, NULL);
 }
@@ -5587,7 +5595,7 @@ flow_hw_create_ctrl_sq_miss_root_table(struct rte_eth_dev *dev,
     struct rte_flow_template_table_attr attr = {
         .flow_attr = {
             .group = 0,
-            .priority = 0,
+            .priority = MLX5_HW_LOWEST_PRIO_ROOT,
             .ingress = 0,
             .egress = 0,
             .transfer = 1,
@@ -5702,7 +5710,7 @@ flow_hw_create_ctrl_jump_table(struct rte_eth_dev *dev,
     struct rte_flow_template_table_attr attr = {
         .flow_attr = {
             .group = 0,
-            .priority = MLX5_HW_LOWEST_PRIO_ROOT,
+            .priority = 0,
             .ingress = 0,
             .egress = 0,
             .transfer = 1,
@@ -7800,141 +7808,123 @@ flow_hw_flush_all_ctrl_flows(struct rte_eth_dev *dev)
 }
 
 int
-mlx5_flow_hw_esw_create_mgr_sq_miss_flow(struct rte_eth_dev *dev)
+mlx5_flow_hw_esw_create_sq_miss_flow(struct rte_eth_dev *dev, uint32_t sqn)
 {
-    struct mlx5_priv *priv = dev->data->dev_private;
-    struct rte_flow_item_ethdev port_spec = {
+    uint16_t port_id = dev->data->port_id;
+    struct rte_flow_item_ethdev esw_mgr_spec = {
         .port_id = MLX5_REPRESENTED_PORT_ESW_MGR,
     };
-    struct rte_flow_item_ethdev port_mask = {
+    struct rte_flow_item_ethdev esw_mgr_mask = {
         .port_id = MLX5_REPRESENTED_PORT_ESW_MGR,
     };
-    struct rte_flow_item items[] = {
-        {
-            .type = RTE_FLOW_ITEM_TYPE_REPRESENTED_PORT,
-            .spec = &port_spec,
-            .mask = &port_mask,
-        },
-        {
-            .type = RTE_FLOW_ITEM_TYPE_END,
-        },
-    };
-    struct rte_flow_action_modify_field modify_field = {
-        .operation = RTE_FLOW_MODIFY_SET,
-        .dst = {
-            .field = (enum rte_flow_field_id)MLX5_RTE_FLOW_FIELD_META_REG,
-        },
-        .src = {
-            .field = RTE_FLOW_FIELD_VALUE,
-        },
-        .width = 1,
-    };
-    struct rte_flow_action_jump jump = {
-        .group = 1,
-    };
-    struct rte_flow_action actions[] = {
-        {
-            .type = RTE_FLOW_ACTION_TYPE_MODIFY_FIELD,
-            .conf = &modify_field,
-        },
-        {
-            .type = RTE_FLOW_ACTION_TYPE_JUMP,
-            .conf = &jump,
-        },
-        {
-            .type = RTE_FLOW_ACTION_TYPE_END,
-        },
-    };
-
-    MLX5_ASSERT(priv->master);
-    if (!priv->dr_ctx ||
-        !priv->hw_esw_sq_miss_root_tbl)
-        return 0;
-    return flow_hw_create_ctrl_flow(dev, dev,
-                                    priv->hw_esw_sq_miss_root_tbl,
-                                    items, 0, actions, 0);
-}
-
-int
-mlx5_flow_hw_esw_create_sq_miss_flow(struct rte_eth_dev *dev, uint32_t txq)
-{
-    uint16_t port_id = dev->data->port_id;
     struct rte_flow_item_tag reg_c0_spec = {
         .index = (uint8_t)REG_C_0,
+        .data = flow_hw_esw_mgr_regc_marker(dev),
     };
     struct rte_flow_item_tag reg_c0_mask = {
         .index = 0xff,
+        .data = flow_hw_esw_mgr_regc_marker_mask(dev),
     };
-    struct mlx5_rte_flow_item_sq queue_spec = {
-        .queue = txq,
-    };
-    struct mlx5_rte_flow_item_sq queue_mask = {
-        .queue = UINT32_MAX,
-    };
-    struct rte_flow_item items[] = {
-        {
-            .type = (enum rte_flow_item_type)
-                    MLX5_RTE_FLOW_ITEM_TYPE_TAG,
-            .spec = &reg_c0_spec,
-            .mask = &reg_c0_mask,
-        },
-        {
-            .type = (enum rte_flow_item_type)
-                    MLX5_RTE_FLOW_ITEM_TYPE_SQ,
-            .spec = &queue_spec,
-            .mask = &queue_mask,
-        },
-        {
-            .type = RTE_FLOW_ITEM_TYPE_END,
-        },
+    struct mlx5_rte_flow_item_sq sq_spec = {
+        .queue = sqn,
     };
     struct rte_flow_action_ethdev port = {
         .port_id = port_id,
     };
-    struct rte_flow_action actions[] = {
-        {
-            .type = RTE_FLOW_ACTION_TYPE_REPRESENTED_PORT,
-            .conf = &port,
-        },
-        {
-            .type = RTE_FLOW_ACTION_TYPE_END,
-        },
-    };
+    struct rte_flow_item items[3] = { { 0 } };
+    struct rte_flow_action actions[3] = { { 0 } };
     struct rte_eth_dev *proxy_dev;
     struct mlx5_priv *proxy_priv;
     uint16_t proxy_port_id = dev->data->port_id;
-    uint32_t marker_bit;
     int ret;
 
-    RTE_SET_USED(txq);
     ret = rte_flow_pick_transfer_proxy(port_id, &proxy_port_id, NULL);
     if (ret) {
-        DRV_LOG(ERR, "Unable to pick proxy port for port %u", port_id);
+        DRV_LOG(ERR, "Unable to pick transfer proxy port for port %u. Transfer proxy "
+                     "port must be present to create default SQ miss flows.",
+                     port_id);
         return ret;
     }
     proxy_dev = &rte_eth_devices[proxy_port_id];
     proxy_priv = proxy_dev->data->dev_private;
-    if (!proxy_priv->dr_ctx)
+    if (!proxy_priv->dr_ctx) {
+        DRV_LOG(DEBUG, "Transfer proxy port (port %u) of port %u must be configured "
+                       "for HWS to create default SQ miss flows. Default flows will "
+                       "not be created.",
+                       proxy_port_id, port_id);
         return 0;
+    }
     if (!proxy_priv->hw_esw_sq_miss_root_tbl ||
         !proxy_priv->hw_esw_sq_miss_tbl) {
-        DRV_LOG(ERR, "port %u proxy port %u was configured but default"
-                " flow tables are not created",
-                port_id, proxy_port_id);
+        DRV_LOG(ERR, "Transfer proxy port (port %u) of port %u was configured, but "
+                     "default flow tables were not created.",
+                     proxy_port_id, port_id);
         rte_errno = ENOMEM;
         return -rte_errno;
     }
-    marker_bit = flow_hw_usable_lsb_vport_mask(proxy_priv);
-    if (!marker_bit) {
-        DRV_LOG(ERR, "Unable to set up control flow in SQ miss table");
-        rte_errno = EINVAL;
-        return -rte_errno;
+    /*
+     * Create a root SQ miss flow rule - match E-Switch Manager and SQ,
+     * and jump to group 1.
+     */
+    items[0] = (struct rte_flow_item){
+        .type = RTE_FLOW_ITEM_TYPE_REPRESENTED_PORT,
+        .spec = &esw_mgr_spec,
+        .mask = &esw_mgr_mask,
+    };
+    items[1] = (struct rte_flow_item){
+        .type = (enum rte_flow_item_type)MLX5_RTE_FLOW_ITEM_TYPE_SQ,
+        .spec = &sq_spec,
+    };
+    items[2] = (struct rte_flow_item){
+        .type = RTE_FLOW_ITEM_TYPE_END,
+    };
+    actions[0] = (struct rte_flow_action){
+        .type = RTE_FLOW_ACTION_TYPE_MODIFY_FIELD,
+    };
+    actions[1] = (struct rte_flow_action){
+        .type = RTE_FLOW_ACTION_TYPE_JUMP,
+    };
+    actions[2] = (struct rte_flow_action) {
+        .type = RTE_FLOW_ACTION_TYPE_END,
+    };
+    ret = flow_hw_create_ctrl_flow(dev, proxy_dev, proxy_priv->hw_esw_sq_miss_root_tbl,
+                                   items, 0, actions, 0);
+    if (ret) {
+        DRV_LOG(ERR, "Port %u failed to create root SQ miss flow rule for SQ %u, ret %d",
+                port_id, sqn, ret);
+        return ret;
     }
-    reg_c0_spec.data = marker_bit;
-    reg_c0_mask.data = marker_bit;
-    return flow_hw_create_ctrl_flow(dev, proxy_dev,
-                                    proxy_priv->hw_esw_sq_miss_tbl,
-                                    items, 0, actions, 0);
+    /*
+     * Create a non-root SQ miss flow rule - match REG_C_0 marker and SQ,
+     * and forward to port.
+     */
+    items[0] = (struct rte_flow_item){
+        .type = (enum rte_flow_item_type)MLX5_RTE_FLOW_ITEM_TYPE_TAG,
+        .spec = &reg_c0_spec,
+        .mask = &reg_c0_mask,
+    };
+    items[1] = (struct rte_flow_item){
+        .type = (enum rte_flow_item_type)MLX5_RTE_FLOW_ITEM_TYPE_SQ,
+        .spec = &sq_spec,
+    };
+    items[2] = (struct rte_flow_item){
+        .type = RTE_FLOW_ITEM_TYPE_END,
+    };
+    actions[0] = (struct rte_flow_action){
+        .type = RTE_FLOW_ACTION_TYPE_REPRESENTED_PORT,
+        .conf = &port,
+    };
+    actions[1] = (struct rte_flow_action){
+        .type = RTE_FLOW_ACTION_TYPE_END,
+    };
+    ret = flow_hw_create_ctrl_flow(dev, proxy_dev, proxy_priv->hw_esw_sq_miss_tbl,
+                                   items, 0, actions, 0);
+    if (ret) {
+        DRV_LOG(ERR, "Port %u failed to create HWS SQ miss flow rule for SQ %u, ret %d",
+                port_id, sqn, ret);
+        return ret;
+    }
+    return 0;
 }
 
 int
@@ -7972,17 +7962,24 @@ mlx5_flow_hw_esw_create_default_jump_flow(struct rte_eth_dev *dev)
 
     ret = rte_flow_pick_transfer_proxy(port_id, &proxy_port_id, NULL);
     if (ret) {
-        DRV_LOG(ERR, "Unable to pick proxy port for port %u", port_id);
+        DRV_LOG(ERR, "Unable to pick transfer proxy port for port %u. Transfer proxy "
+                     "port must be present to create default FDB jump rule.",
+                     port_id);
         return ret;
     }
     proxy_dev = &rte_eth_devices[proxy_port_id];
     proxy_priv = proxy_dev->data->dev_private;
-    if (!proxy_priv->dr_ctx)
+    if (!proxy_priv->dr_ctx) {
+        DRV_LOG(DEBUG, "Transfer proxy port (port %u) of port %u must be configured "
+                       "for HWS to create default FDB jump rule. Default rule will "
+                       "not be created.",
+                       proxy_port_id, port_id);
         return 0;
+    }
     if (!proxy_priv->hw_esw_zero_tbl) {
-        DRV_LOG(ERR, "port %u proxy port %u was configured but default"
-                " flow tables are not created",
-                port_id, proxy_port_id);
+        DRV_LOG(ERR, "Transfer proxy port (port %u) of port %u was configured, but "
+                     "default flow tables were not created.",
+                     proxy_port_id, port_id);
         rte_errno = EINVAL;
         return -rte_errno;
     }
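Note (not part of the patch): both control-flow creators above resolve the
E-Switch transfer proxy first. A sketch of the same lookup through the public
rte_flow API, with error handling shortened:

  #include <rte_flow.h>

  uint16_t proxy_port_id;
  struct rte_flow_error error;

  /* Transfer flow rules for port_id must be created on the proxy port. */
  if (rte_flow_pick_transfer_proxy(port_id, &proxy_port_id, &error) != 0)
      return; /* no transfer proxy available for this port */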
diff --git a/drivers/net/mlx5/mlx5_trigger.c b/drivers/net/mlx5/mlx5_trigger.c
index c260c81e57..715f2891cf 100644
--- a/drivers/net/mlx5/mlx5_trigger.c
+++ b/drivers/net/mlx5/mlx5_trigger.c
@@ -426,7 +426,7 @@ mlx5_hairpin_queue_peer_update(struct rte_eth_dev *dev, uint16_t peer_queue,
         mlx5_txq_release(dev, peer_queue);
         return -rte_errno;
     }
-    peer_info->qp_id = txq_ctrl->obj->sq->id;
+    peer_info->qp_id = mlx5_txq_get_sqn(txq_ctrl);
     peer_info->vhca_id = priv->sh->cdev->config.hca_attr.vhca_id;
     /* 1-to-1 mapping, only the first one is used. */
     peer_info->peer_q = txq_ctrl->hairpin_conf.peers[0].queue;
@@ -818,7 +818,7 @@ mlx5_hairpin_bind_single_port(struct rte_eth_dev *dev, uint16_t rx_port)
     }
     /* Pass TxQ's information to peer RxQ and try binding. */
     cur.peer_q = rx_queue;
-    cur.qp_id = txq_ctrl->obj->sq->id;
+    cur.qp_id = mlx5_txq_get_sqn(txq_ctrl);
     cur.vhca_id = priv->sh->cdev->config.hca_attr.vhca_id;
     cur.tx_explicit = txq_ctrl->hairpin_conf.tx_explicit;
     cur.manual_bind = txq_ctrl->hairpin_conf.manual_bind;
@@ -1300,8 +1300,6 @@ mlx5_traffic_enable_hws(struct rte_eth_dev *dev)
     int ret;
 
     if (priv->sh->config.dv_esw_en && priv->master) {
-        if (mlx5_flow_hw_esw_create_mgr_sq_miss_flow(dev))
-            goto error;
         if (priv->sh->config.dv_xmeta_en == MLX5_XMETA_MODE_META32_HWS)
             if (mlx5_flow_hw_create_tx_default_mreg_copy_flow(dev))
                 goto error;
@@ -1312,10 +1310,7 @@ mlx5_traffic_enable_hws(struct rte_eth_dev *dev)
 
         if (!txq)
             continue;
-        if (txq->is_hairpin)
-            queue = txq->obj->sq->id;
-        else
-            queue = txq->obj->sq_obj.sq->id;
+        queue = mlx5_txq_get_sqn(txq);
         if ((priv->representor || priv->master) &&
             priv->sh->config.dv_esw_en) {
             if (mlx5_flow_hw_esw_create_sq_miss_flow(dev, queue)) {
@@ -1325,9 +1320,15 @@ mlx5_traffic_enable_hws(struct rte_eth_dev *dev)
         }
         mlx5_txq_release(dev, i);
     }
-    if ((priv->master || priv->representor) && priv->sh->config.dv_esw_en) {
-        if (mlx5_flow_hw_esw_create_default_jump_flow(dev))
-            goto error;
+    if (priv->sh->config.fdb_def_rule) {
+        if ((priv->master || priv->representor) && priv->sh->config.dv_esw_en) {
+            if (!mlx5_flow_hw_esw_create_default_jump_flow(dev))
+                priv->fdb_def_rule = 1;
+            else
+                goto error;
+        }
+    } else {
+        DRV_LOG(INFO, "port %u FDB default rule is disabled", dev->data->port_id);
     }
     return 0;
 error:
@@ -1393,14 +1394,18 @@ mlx5_traffic_enable(struct rte_eth_dev *dev)
             txq_ctrl->hairpin_conf.tx_explicit == 0 &&
             txq_ctrl->hairpin_conf.peers[0].port ==
             priv->dev_data->port_id) {
-            ret = mlx5_ctrl_flow_source_queue(dev, i);
+            ret = mlx5_ctrl_flow_source_queue(dev,
+                    mlx5_txq_get_sqn(txq_ctrl));
             if (ret) {
                 mlx5_txq_release(dev, i);
                 goto error;
             }
         }
         if (priv->sh->config.dv_esw_en) {
-            if (mlx5_flow_create_devx_sq_miss_flow(dev, i) == 0) {
+            uint32_t q = mlx5_txq_get_sqn(txq_ctrl);
+
+            if (mlx5_flow_create_devx_sq_miss_flow(dev, q) == 0) {
+                mlx5_txq_release(dev, i);
                 DRV_LOG(ERR,
                     "Port %u Tx queue %u SQ create representor devx default miss rule failed.",
                     dev->data->port_id, i);
diff --git a/drivers/net/mlx5/mlx5_tx.h b/drivers/net/mlx5/mlx5_tx.h
index e0fc1872fe..6471ebf59f 100644
--- a/drivers/net/mlx5/mlx5_tx.h
+++ b/drivers/net/mlx5/mlx5_tx.h
@@ -213,6 +213,7 @@ struct mlx5_txq_ctrl *mlx5_txq_get(struct rte_eth_dev *dev, uint16_t idx);
 int mlx5_txq_release(struct rte_eth_dev *dev, uint16_t idx);
 int mlx5_txq_releasable(struct rte_eth_dev *dev, uint16_t idx);
 int mlx5_txq_verify(struct rte_eth_dev *dev);
+int mlx5_txq_get_sqn(struct mlx5_txq_ctrl *txq);
 void txq_alloc_elts(struct mlx5_txq_ctrl *txq_ctrl);
 void txq_free_elts(struct mlx5_txq_ctrl *txq_ctrl);
 uint64_t mlx5_get_tx_port_offloads(struct rte_eth_dev *dev);
diff --git a/drivers/net/mlx5/mlx5_txq.c b/drivers/net/mlx5/mlx5_txq.c
index 9150ced72d..5543f2c570 100644
--- a/drivers/net/mlx5/mlx5_txq.c
+++ b/drivers/net/mlx5/mlx5_txq.c
@@ -27,6 +27,8 @@
 #include "mlx5_tx.h"
 #include "mlx5_rxtx.h"
 #include "mlx5_autoconf.h"
+#include "rte_pmd_mlx5.h"
+#include "mlx5_flow.h"
 
 /**
  * Allocate TX queue elements.
@@ -1274,6 +1276,51 @@ mlx5_txq_verify(struct rte_eth_dev *dev)
     return ret;
 }
 
+int
+mlx5_txq_get_sqn(struct mlx5_txq_ctrl *txq)
+{
+    return txq->is_hairpin ? txq->obj->sq->id : txq->obj->sq_obj.sq->id;
+}
+
+int
+rte_pmd_mlx5_external_sq_enable(uint16_t port_id, uint32_t sq_num)
+{
+    struct rte_eth_dev *dev;
+    struct mlx5_priv *priv;
+    uint32_t flow;
+
+    if (rte_eth_dev_is_valid_port(port_id) < 0) {
+        DRV_LOG(ERR, "There is no Ethernet device for port %u.",
+                port_id);
+        rte_errno = ENODEV;
+        return -rte_errno;
+    }
+    dev = &rte_eth_devices[port_id];
+    priv = dev->data->dev_private;
+    if ((!priv->representor && !priv->master) ||
+        !priv->sh->config.dv_esw_en) {
+        DRV_LOG(ERR, "Port %u must be a representor or master port in E-Switch mode.",
+                port_id);
+        rte_errno = EINVAL;
+        return -rte_errno;
+    }
+    if (sq_num == 0) {
+        DRV_LOG(ERR, "Invalid SQ number.");
+        rte_errno = EINVAL;
+        return -rte_errno;
+    }
+#ifdef HAVE_MLX5_HWS_SUPPORT
+    if (priv->sh->config.dv_flow_en == 2)
+        return mlx5_flow_hw_esw_create_sq_miss_flow(dev, sq_num);
+#endif
+    flow = mlx5_flow_create_devx_sq_miss_flow(dev, sq_num);
+    if (flow > 0)
+        return 0;
+    DRV_LOG(ERR, "Port %u failed to create default miss flow for SQ %u.",
+            port_id, sq_num);
+    return -rte_errno;
+}
+
 /**
  * Set the Tx queue dynamic timestamp (mask and offset)
  *
diff --git a/drivers/net/mlx5/rte_pmd_mlx5.h b/drivers/net/mlx5/rte_pmd_mlx5.h
index fbfdd9737b..d4caea5b20 100644
--- a/drivers/net/mlx5/rte_pmd_mlx5.h
+++ b/drivers/net/mlx5/rte_pmd_mlx5.h
@@ -139,6 +139,23 @@ int rte_pmd_mlx5_external_rx_queue_id_unmap(uint16_t port_id,
 __rte_experimental
 int rte_pmd_mlx5_host_shaper_config(int port_id, uint8_t rate, uint32_t flags);
 
+/**
+ * Enable traffic for external SQ.
+ *
+ * @param[in] port_id
+ *   The port identifier of the Ethernet device.
+ * @param[in] sq_num
+ *   SQ HW number.
+ *
+ * @return
+ *   0 on success, a negative errno value otherwise and rte_errno is set.
+ *   Possible values for rte_errno:
+ *   - EINVAL - invalid sq_number or port type.
+ *   - ENODEV - there is no Ethernet device for this port id.
+ */
+__rte_experimental
+int rte_pmd_mlx5_external_sq_enable(uint16_t port_id, uint32_t sq_num);
+
 #ifdef __cplusplus
 }
 #endif
diff --git a/drivers/net/mlx5/version.map b/drivers/net/mlx5/version.map
index 9942de5079..848270da13 100644
--- a/drivers/net/mlx5/version.map
+++ b/drivers/net/mlx5/version.map
@@ -14,4 +14,5 @@ EXPERIMENTAL {
     rte_pmd_mlx5_external_rx_queue_id_unmap;
     # added in 22.07
     rte_pmd_mlx5_host_shaper_config;
+    rte_pmd_mlx5_external_sq_enable;
 };
-- 
2.25.1
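Note (not part of the patch): a usage sketch for the API introduced above.
The port id and SQ number are placeholders, and the SQ is assumed to have
been created outside of the PMD (e.g. through DevX):

  #include <stdio.h>
  #include <rte_errno.h>
  #include <rte_pmd_mlx5.h>

  uint16_t port_id = 0;    /* E-Switch master or representor port */
  uint32_t sq_num = 0x123; /* hypothetical HW SQ number */

  if (rte_pmd_mlx5_external_sq_enable(port_id, sq_num) != 0)
      printf("external SQ enable failed: %s\n", rte_strerror(rte_errno));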