From mboxrd@z Thu Jan 1 00:00:00 1970
From: Rongwei Liu <rongweil@nvidia.com>
Date: Thu, 21 Oct 2021 11:56:36 +0300
Message-ID: <20211021085637.3627922-3-rongweil@nvidia.com>
X-Mailer: git-send-email 2.27.0
In-Reply-To: <20211021085637.3627922-1-rongweil@nvidia.com>
References: <20211021085637.3627922-1-rongweil@nvidia.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain
Subject: [dpdk-dev] [PATCH v1 2/2] net/mlx5: set txq affinity in round-robin
List-Id: DPDK patches and discussions

Previously, we set the txq affinity to 0 and let firmware perform
round-robin when bonding. Firmware uses a global counter to assign
txq affinity to the different physical ports according to the
remainder after division.

There are three disadvantages:

1. The global counter is shared between the kernel and DPDK.
2. After restarting the PMD or the port, the previous counter value
   is reused, so the new affinity is unpredictable.
3. There is no way to query which affinity firmware has assigned.

In this update, we create multiple TISs, up to the number of bonding
ports, and bind each TIS to one PF port. Each port starts picking its
TIS from its own port index, so an upper-layer application can
calculate every txq's affinity without querying.

At the DPDK layer, when creating txqs with 2 bonding ports, the
affinity is assigned like:
port 0: 1-->2-->1-->2
port 1: 2-->1-->2-->1
port 2: 1-->2-->1-->2

Note: this is only applicable to the DevX API, and the affinity is
subject to the HW hash once HW hash LAG mode is enabled.

Signed-off-by: Rongwei Liu <rongweil@nvidia.com>
Acked-by: Matan Azrad
---
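
For reference, here is a minimal standalone sketch (not part of the
patch) of how an application could precompute the per-queue affinity
described above. txq_affinity() is an illustrative helper that mirrors
the modulo selection done in mlx5_get_txq_tis_num(), assuming the
port's LAG index equals its spawn order:

#include <stdio.h>

/*
 * Each port starts at its own LAG index and walks the bonding
 * ports round-robin per Tx queue; firmware PF ports are 1-based.
 */
static unsigned int
txq_affinity(unsigned int lag_affinity_idx, unsigned int queue_idx,
	     unsigned int n_bond_ports)
{
	return (lag_affinity_idx + queue_idx) % n_bond_ports + 1;
}

int
main(void)
{
	unsigned int port, txq;

	/* Reproduces the mapping quoted above for 2 bonding ports. */
	for (port = 0; port < 3; port++) {
		printf("port %u:", port);
		for (txq = 0; txq < 4; txq++)
			printf(" %u", txq_affinity(port, txq, 2));
		printf("\n");
	}
	return 0;
}

Compiling and running it prints the same 1/2 sequences as the table in
the commit log.
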
 doc/guides/nics/mlx5.rst         |  4 ++
 drivers/net/mlx5/linux/mlx5_os.c |  2 +-
 drivers/net/mlx5/mlx5.c          | 81 ++++++++++++++++++++++++++++----
 drivers/net/mlx5/mlx5.h          | 10 +++-
 drivers/net/mlx5/mlx5_devx.c     | 37 ++++++++++++++-
 drivers/net/mlx5/mlx5_txpp.c     |  4 +-
 6 files changed, 124 insertions(+), 14 deletions(-)

diff --git a/doc/guides/nics/mlx5.rst b/doc/guides/nics/mlx5.rst
index 7b540504f9..dd059b227d 100644
--- a/doc/guides/nics/mlx5.rst
+++ b/doc/guides/nics/mlx5.rst
@@ -464,6 +464,10 @@ Limitations
   - In order to achieve best insertion rate, application should manage the flows per lcore.
   - Better to disable memory reclaim by setting ``reclaim_mem_mode`` to 0 to accelerate the flow object allocation and release with cache.
 
+- HW hashed bonding
+
+  - TXQ affinity is subject to HW hash once enabled.
+
 Statistics
 ----------
 
diff --git a/drivers/net/mlx5/linux/mlx5_os.c b/drivers/net/mlx5/linux/mlx5_os.c
index 8a25ec8730..7356c91c92 100644
--- a/drivers/net/mlx5/linux/mlx5_os.c
+++ b/drivers/net/mlx5/linux/mlx5_os.c
@@ -878,7 +878,6 @@ mlx5_representor_match(struct mlx5_dev_spawn_data *spawn,
 	return false;
 }
 
-
 /**
  * Spawn an Ethernet device from Verbs information.
  *
@@ -1668,6 +1667,7 @@ mlx5_dev_spawn(struct rte_device *dpdk_dev,
 	 */
 	MLX5_ASSERT(spawn->ifindex);
 	priv->if_index = spawn->ifindex;
+	priv->lag_affinity_idx = sh->refcnt - 1;
 	eth_dev->data->dev_private = priv;
 	priv->dev_data = eth_dev->data;
 	eth_dev->data->mac_addrs = priv->mac;
diff --git a/drivers/net/mlx5/mlx5.c b/drivers/net/mlx5/mlx5.c
index c712fc3465..ae54b18ad5 100644
--- a/drivers/net/mlx5/mlx5.c
+++ b/drivers/net/mlx5/mlx5.c
@@ -1256,6 +1256,68 @@ mlx5_dev_ctx_shared_mempool_subscribe(struct rte_eth_dev *dev)
 	return 0;
 }
 
+/**
+ * Set up multiple TISs with different affinities according to
+ * number of bonding ports
+ *
+ * @param sh
+ *   Pointer of shared context.
+ *
+ * @return
+ *   Zero on success, -1 otherwise.
+ */
+static int
+mlx5_setup_tis(struct mlx5_dev_ctx_shared *sh)
+{
+	int i;
+	struct mlx5_devx_lag_context lag_ctx = { 0 };
+	struct mlx5_devx_tis_attr tis_attr = { 0 };
+
+	tis_attr.transport_domain = sh->td->id;
+	if (sh->bond.n_port) {
+		if (!mlx5_devx_cmd_query_lag(sh->ctx, &lag_ctx)) {
+			sh->lag.tx_remap_affinity[0] =
+				lag_ctx.tx_remap_affinity_1;
+			sh->lag.tx_remap_affinity[1] =
+				lag_ctx.tx_remap_affinity_2;
+			sh->lag.affinity_mode = lag_ctx.port_select_mode;
+		} else {
+			DRV_LOG(ERR, "Failed to query lag affinity.");
+			return -1;
+		}
+		if (sh->lag.affinity_mode == MLX5_LAG_MODE_TIS) {
+			for (i = 0; i < sh->bond.n_port; i++) {
+				tis_attr.lag_tx_port_affinity =
+					MLX5_IFC_LAG_MAP_TIS_AFFINITY(i,
+							sh->bond.n_port);
+				sh->tis[i] = mlx5_devx_cmd_create_tis(sh->ctx,
+						&tis_attr);
+				if (!sh->tis[i]) {
+					DRV_LOG(ERR, "Failed to create TIS %d/%d for bonding device"
+						" %s.", i, sh->bond.n_port,
+						sh->ibdev_name);
+					return -1;
+				}
+			}
+			DRV_LOG(DEBUG, "LAG number of ports : %d, affinity_1 & 2 : pf%d & %d.\n",
+				sh->bond.n_port, lag_ctx.tx_remap_affinity_1,
+				lag_ctx.tx_remap_affinity_2);
+			return 0;
+		}
+		if (sh->lag.affinity_mode == MLX5_LAG_MODE_HASH)
+			DRV_LOG(INFO, "Device %s enabled HW hash based LAG.",
+					sh->ibdev_name);
+	}
+	tis_attr.lag_tx_port_affinity = 0;
+	sh->tis[0] = mlx5_devx_cmd_create_tis(sh->ctx, &tis_attr);
+	if (!sh->tis[0]) {
+		DRV_LOG(ERR, "Failed to create TIS 0 for bonding device"
+			" %s.", sh->ibdev_name);
+		return -1;
+	}
+	return 0;
+}
+
 /**
  * Allocate shared device context. If there is multiport device the
  * master and representors will share this context, if there is single
@@ -1283,7 +1345,6 @@ mlx5_alloc_shared_dev_ctx(const struct mlx5_dev_spawn_data *spawn,
 	struct mlx5_dev_ctx_shared *sh;
 	int err = 0;
 	uint32_t i;
-	struct mlx5_devx_tis_attr tis_attr = { 0 };
 
 	MLX5_ASSERT(spawn);
 	/* Secondary process should not create the shared context. */
@@ -1354,9 +1415,7 @@ mlx5_alloc_shared_dev_ctx(const struct mlx5_dev_spawn_data *spawn,
 			err = ENOMEM;
 			goto error;
 		}
-		tis_attr.transport_domain = sh->td->id;
-		sh->tis = mlx5_devx_cmd_create_tis(sh->ctx, &tis_attr);
-		if (!sh->tis) {
+		if (mlx5_setup_tis(sh)) {
 			DRV_LOG(ERR, "TIS allocation failure");
 			err = ENOMEM;
 			goto error;
@@ -1420,10 +1479,13 @@ mlx5_alloc_shared_dev_ctx(const struct mlx5_dev_spawn_data *spawn,
 	MLX5_ASSERT(sh);
 	if (sh->share_cache.cache.table)
 		mlx5_mr_btree_free(&sh->share_cache.cache);
-	if (sh->tis)
-		claim_zero(mlx5_devx_cmd_destroy(sh->tis));
 	if (sh->td)
 		claim_zero(mlx5_devx_cmd_destroy(sh->td));
+	i = 0;
+	do {
+		if (sh->tis[i])
+			claim_zero(mlx5_devx_cmd_destroy(sh->tis[i]));
+	} while (++i < (uint32_t)sh->bond.n_port);
 	if (sh->devx_rx_uar)
 		mlx5_glue->devx_free_uar(sh->devx_rx_uar);
 	if (sh->tx_uar)
@@ -1449,6 +1511,7 @@ void
 mlx5_free_shared_dev_ctx(struct mlx5_dev_ctx_shared *sh)
 {
 	int ret;
+	int i = 0;
 
 	pthread_mutex_lock(&mlx5_dev_ctx_list_mutex);
 #ifdef RTE_LIBRTE_MLX5_DEBUG
@@ -1510,8 +1573,10 @@ mlx5_free_shared_dev_ctx(struct mlx5_dev_ctx_shared *sh)
 	}
 	if (sh->pd)
 		claim_zero(mlx5_os_dealloc_pd(sh->pd));
-	if (sh->tis)
-		claim_zero(mlx5_devx_cmd_destroy(sh->tis));
+	do {
+		if (sh->tis[i])
+			claim_zero(mlx5_devx_cmd_destroy(sh->tis[i]));
+	} while (++i < sh->bond.n_port);
 	if (sh->td)
 		claim_zero(mlx5_devx_cmd_destroy(sh->td));
 	if (sh->devx_rx_uar)
diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index adab9dc052..dc385a8cbb 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -1120,6 +1120,12 @@ struct mlx5_aso_ct_pools_mng {
 	struct mlx5_aso_sq aso_sq; /* ASO queue objects. */
 };
 
+/* LAG attr. */
+struct mlx5_lag {
+	uint8_t tx_remap_affinity[16]; /* The PF port number of affinity */
+	uint8_t affinity_mode; /* TIS or hash based affinity */
+};
+
 /*
  * Shared Infiniband device context for Master/Representors
  * which belong to same IB device with multiple IB ports.
@@ -1187,8 +1193,9 @@ struct mlx5_dev_ctx_shared {
 	struct rte_intr_handle intr_handle; /* Interrupt handler for device. */
 	struct rte_intr_handle intr_handle_devx; /* DEVX interrupt handler. */
 	void *devx_comp; /* DEVX async comp obj. */
-	struct mlx5_devx_obj *tis; /* TIS object. */
+	struct mlx5_devx_obj *tis[16]; /* TIS object. */
 	struct mlx5_devx_obj *td; /* Transport domain. */
+	struct mlx5_lag lag; /* LAG attributes */
 	void *tx_uar; /* Tx/packet pacing shared UAR. */
 	struct mlx5_flex_parser_profiles fp[MLX5_FLEX_PARSER_MAX];
 	/* Flex parser profiles information. */
@@ -1454,6 +1461,7 @@ struct mlx5_priv {
 	uint32_t rss_shared_actions; /* RSS shared actions. */
 	struct mlx5_devx_obj *q_counters; /* DevX queue counter object. */
 	uint32_t counter_set_id; /* Queue counter ID to set in DevX objects. */
+	uint32_t lag_affinity_idx; /* LAG mode queue 0 affinity starting. */
 };
 
 #define PORT_ID(priv) ((priv)->dev_data->port_id)
diff --git a/drivers/net/mlx5/mlx5_devx.c b/drivers/net/mlx5/mlx5_devx.c
index a49602cb95..a24b1b897d 100644
--- a/drivers/net/mlx5/mlx5_devx.c
+++ b/drivers/net/mlx5/mlx5_devx.c
@@ -888,6 +888,37 @@ mlx5_devx_drop_action_destroy(struct rte_eth_dev *dev)
 	rte_errno = ENOTSUP;
 }
 
+/**
+ * Select TXQ TIS number.
+ *
+ * @param dev
+ *   Pointer to Ethernet device.
+ * @param queue_idx
+ *   Queue index in DPDK Tx queue array.
+ *
+ * @return
+ *   > 0 on success, a negative errno value otherwise.
+ */
+static uint32_t
+mlx5_get_txq_tis_num(struct rte_eth_dev *dev, uint16_t queue_idx)
+{
+	struct mlx5_priv *priv = dev->data->dev_private;
+	int tis_idx;
+
+	if (priv->sh->bond.n_port && priv->sh->lag.affinity_mode ==
+			MLX5_LAG_MODE_TIS) {
+		tis_idx = (priv->lag_affinity_idx + queue_idx) %
+			priv->sh->bond.n_port;
+		DRV_LOG(INFO, "port %d txq %d gets affinity %d and maps to PF %d.",
+			dev->data->port_id, queue_idx, tis_idx + 1,
+			priv->sh->lag.tx_remap_affinity[tis_idx]);
+	} else {
+		tis_idx = 0;
+	}
+	MLX5_ASSERT(priv->sh->tis[tis_idx]);
+	return priv->sh->tis[tis_idx]->id;
+}
+
 /**
  * Create the Tx hairpin queue object.
  *
@@ -935,7 +966,8 @@ mlx5_txq_obj_hairpin_new(struct rte_eth_dev *dev, uint16_t idx)
 	attr.wq_attr.log_hairpin_num_packets =
 			attr.wq_attr.log_hairpin_data_sz -
 			MLX5_HAIRPIN_QUEUE_STRIDE;
-	attr.tis_num = priv->sh->tis->id;
+
+	attr.tis_num = mlx5_get_txq_tis_num(dev, idx);
 	tmpl->sq = mlx5_devx_cmd_create_sq(priv->sh->ctx, &attr);
 	if (!tmpl->sq) {
 		DRV_LOG(ERR,
@@ -992,14 +1024,15 @@ mlx5_txq_create_devx_sq_resources(struct rte_eth_dev *dev, uint16_t idx,
 		.allow_swp = !!priv->config.swp,
 		.cqn = txq_obj->cq_obj.cq->id,
 		.tis_lst_sz = 1,
-		.tis_num = priv->sh->tis->id,
 		.wq_attr = (struct mlx5_devx_wq_attr){
 			.pd = priv->sh->pdn,
 			.uar_page =
 				mlx5_os_get_devx_uar_page_id(priv->sh->tx_uar),
 		},
 		.ts_format = mlx5_ts_format_conv(priv->sh->sq_ts_format),
+		.tis_num = mlx5_get_txq_tis_num(dev, idx),
 	};
+
 	/* Create Send Queue object with DevX. */
 	return mlx5_devx_sq_create(priv->sh->ctx, &txq_obj->sq_obj, log_desc_n,
 				   &sq_attr, priv->sh->numa_node);
diff --git a/drivers/net/mlx5/mlx5_txpp.c b/drivers/net/mlx5/mlx5_txpp.c
index 2be7e71f89..6e874fa090 100644
--- a/drivers/net/mlx5/mlx5_txpp.c
+++ b/drivers/net/mlx5/mlx5_txpp.c
@@ -230,7 +230,7 @@ mlx5_txpp_create_rearm_queue(struct mlx5_dev_ctx_shared *sh)
 		.cd_master = 1,
 		.state = MLX5_SQC_STATE_RST,
 		.tis_lst_sz = 1,
-		.tis_num = sh->tis->id,
+		.tis_num = sh->tis[0]->id,
 		.wq_attr = (struct mlx5_devx_wq_attr){
 			.pd = sh->pdn,
 			.uar_page = mlx5_os_get_devx_uar_page_id(sh->tx_uar),
@@ -433,7 +433,7 @@ mlx5_txpp_create_clock_queue(struct mlx5_dev_ctx_shared *sh)
 	/* Create send queue object for Clock Queue. */
 	if (sh->txpp.test) {
 		sq_attr.tis_lst_sz = 1;
-		sq_attr.tis_num = sh->tis->id;
+		sq_attr.tis_num = sh->tis[0]->id;
 		sq_attr.non_wire = 0;
 		sq_attr.static_sq_wq = 1;
 	} else {
-- 
2.27.0