From: Xueming Li <xuemingl@nvidia.com>
To: dev@dpdk.org
Cc: Lior Margalit, Slava Ovsiienko, Matan Azrad
Date: Thu, 4 Nov 2021 20:33:18 +0800
Message-ID: <20211104123320.1638915-13-xuemingl@nvidia.com>
In-Reply-To: <20211104123320.1638915-1-xuemingl@nvidia.com>
References: <20210727034204.20649-1-xuemingl@nvidia.com>
 <20211104123320.1638915-1-xuemingl@nvidia.com>
Subject: [dpdk-dev] [PATCH v4 12/14] net/mlx5: remove Rx queue data list from device

The Rx queue data list (priv->rxqs) can be replaced by the Rx queue
private list (priv->rxq_privs). Remove priv->rxqs and replace all
remaining accesses with the universal wrapper API.

Signed-off-by: Xueming Li <xuemingl@nvidia.com>
Acked-by: Slava Ovsiienko
---
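Note for reviewers: the wrapper API referenced above was introduced
earlier in this series. As a rough sketch of its shape — only
mlx5_rxq_get() below matches a hunk in this patch verbatim; the
mlx5_rxq_ctrl_get() and mlx5_rxq_data_get() bodies are inferred from
how the call sites use them and may differ from the actual code:

/* Sketch of the per-queue accessors; mlx5_rxq_get() mirrors the
 * mlx5_rxq.c hunk below, the other two bodies are illustrative.
 */
struct mlx5_rxq_priv *
mlx5_rxq_get(struct rte_eth_dev *dev, uint16_t idx)
{
	struct mlx5_priv *priv = dev->data->dev_private;

	MLX5_ASSERT(priv->rxq_privs != NULL);
	return (*priv->rxq_privs)[idx]; /* Queue private (non-shared) data. */
}

struct mlx5_rxq_ctrl *
mlx5_rxq_ctrl_get(struct rte_eth_dev *dev, uint16_t idx)
{
	struct mlx5_rxq_priv *rxq = mlx5_rxq_get(dev, idx);

	/* NULL-safe: a queue that was never set up has no control block. */
	return rxq == NULL ? NULL : rxq->ctrl;
}

struct mlx5_rxq_data *
mlx5_rxq_data_get(struct rte_eth_dev *dev, uint16_t idx)
{
	struct mlx5_rxq_ctrl *rxq_ctrl = mlx5_rxq_ctrl_get(dev, idx);

	/* Datapath descriptor embedded in the control block. */
	return rxq_ctrl == NULL ? NULL : &rxq_ctrl->rxq;
}

Every former (*priv->rxqs)[i] dereference in the diff becomes one of
these NULL-safe lookups, which is what allows dropping the priv->rxqs
array while keeping dev->data->rx_queues as the only datapath-visible
queue list.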
 drivers/net/mlx5/linux/mlx5_verbs.c |  7 ++---
 drivers/net/mlx5/mlx5.c             | 10 +-----
 drivers/net/mlx5/mlx5.h             |  1 -
 drivers/net/mlx5/mlx5_devx.c        | 12 +++++---
 drivers/net/mlx5/mlx5_ethdev.c      |  6 +---
 drivers/net/mlx5/mlx5_flow.c        | 47 +++++++++++++++--------------
 drivers/net/mlx5/mlx5_rss.c         |  6 ++--
 drivers/net/mlx5/mlx5_rx.c          | 15 +++++----
 drivers/net/mlx5/mlx5_rx.h          |  9 +++---
 drivers/net/mlx5/mlx5_rxq.c         | 43 ++++++++++++--------------
 drivers/net/mlx5/mlx5_rxtx_vec.c    |  6 ++--
 drivers/net/mlx5/mlx5_stats.c       |  9 +++---
 drivers/net/mlx5/mlx5_trigger.c     |  2 +-
 13 files changed, 79 insertions(+), 94 deletions(-)

diff --git a/drivers/net/mlx5/linux/mlx5_verbs.c b/drivers/net/mlx5/linux/mlx5_verbs.c
index 5d4ae3ea752..f78916c868f 100644
--- a/drivers/net/mlx5/linux/mlx5_verbs.c
+++ b/drivers/net/mlx5/linux/mlx5_verbs.c
@@ -486,11 +486,10 @@ mlx5_ibv_ind_table_new(struct rte_eth_dev *dev, const unsigned int log_n,

 	MLX5_ASSERT(ind_tbl);
 	for (i = 0; i != ind_tbl->queues_n; ++i) {
-		struct mlx5_rxq_data *rxq = (*priv->rxqs)[ind_tbl->queues[i]];
-		struct mlx5_rxq_ctrl *rxq_ctrl =
-				container_of(rxq, struct mlx5_rxq_ctrl, rxq);
+		struct mlx5_rxq_priv *rxq = mlx5_rxq_get(dev,
+							 ind_tbl->queues[i]);

-		wq[i] = rxq_ctrl->obj->wq;
+		wq[i] = rxq->ctrl->obj->wq;
 	}
 	MLX5_ASSERT(i > 0);
 	/* Finalise indirection table. */
diff --git a/drivers/net/mlx5/mlx5.c b/drivers/net/mlx5/mlx5.c
index 374cc9757aa..8614b8ffddd 100644
--- a/drivers/net/mlx5/mlx5.c
+++ b/drivers/net/mlx5/mlx5.c
@@ -1687,20 +1687,12 @@ mlx5_dev_close(struct rte_eth_dev *dev)
 	/* Free the eCPRI flex parser resource. */
 	mlx5_flex_parser_ecpri_release(dev);
 	mlx5_flex_item_port_cleanup(dev);
-	if (priv->rxqs != NULL) {
+	if (priv->rxq_privs != NULL) {
 		/* XXX race condition if mlx5_rx_burst() is still running. */
 		rte_delay_us_sleep(1000);
 		for (i = 0; (i != priv->rxqs_n); ++i)
 			mlx5_rxq_release(dev, i);
 		priv->rxqs_n = 0;
-		priv->rxqs = NULL;
-	}
-	if (priv->representor) {
-		/* Each representor has a dedicated interrupts handler */
-		mlx5_free(dev->intr_handle);
-		dev->intr_handle = NULL;
-	}
-	if (priv->rxq_privs != NULL) {
 		mlx5_free(priv->rxq_privs);
 		priv->rxq_privs = NULL;
 	}
diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index 967d92b4ad6..a037a33debf 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -1410,7 +1410,6 @@ struct mlx5_priv {
 	unsigned int rxqs_n; /* RX queues array size. */
 	unsigned int txqs_n; /* TX queues array size. */
 	struct mlx5_rxq_priv *(*rxq_privs)[]; /* RX queue non-shared data. */
-	struct mlx5_rxq_data *(*rxqs)[]; /* (Shared) RX queues. */
 	struct mlx5_txq_data *(*txqs)[]; /* TX queues. */
 	struct rte_mempool *mprq_mp; /* Mempool for Multi-Packet RQ. */
 	struct rte_eth_rss_conf rss_conf; /* RSS configuration. */
diff --git a/drivers/net/mlx5/mlx5_devx.c b/drivers/net/mlx5/mlx5_devx.c
index b90a5d82458..668d47025e8 100644
--- a/drivers/net/mlx5/mlx5_devx.c
+++ b/drivers/net/mlx5/mlx5_devx.c
@@ -684,15 +684,17 @@ mlx5_devx_tir_attr_set(struct rte_eth_dev *dev, const uint8_t *rss_key,

 	/* NULL queues designate drop queue. */
 	if (ind_tbl->queues != NULL) {
-		struct mlx5_rxq_data *rxq_data =
-					(*priv->rxqs)[ind_tbl->queues[0]];
 		struct mlx5_rxq_ctrl *rxq_ctrl =
-			container_of(rxq_data, struct mlx5_rxq_ctrl, rxq);
-		rxq_obj_type = rxq_ctrl->type;
+				mlx5_rxq_ctrl_get(dev, ind_tbl->queues[0]);
+		rxq_obj_type = rxq_ctrl != NULL ? rxq_ctrl->type :
+						  MLX5_RXQ_TYPE_STANDARD;

 		/* Enable TIR LRO only if all the queues were configured for. */
 		for (i = 0; i < ind_tbl->queues_n; ++i) {
-			if (!(*priv->rxqs)[ind_tbl->queues[i]]->lro) {
+			struct mlx5_rxq_data *rxq_i =
+				mlx5_rxq_data_get(dev, ind_tbl->queues[i]);
+
+			if (rxq_i != NULL && !rxq_i->lro) {
 				lro = false;
 				break;
 			}
diff --git a/drivers/net/mlx5/mlx5_ethdev.c b/drivers/net/mlx5/mlx5_ethdev.c
index cde505955df..bb38d5d2ade 100644
--- a/drivers/net/mlx5/mlx5_ethdev.c
+++ b/drivers/net/mlx5/mlx5_ethdev.c
@@ -114,7 +114,6 @@ mlx5_dev_configure(struct rte_eth_dev *dev)
 		rte_errno = ENOMEM;
 		return -rte_errno;
 	}
-	priv->rxqs = (void *)dev->data->rx_queues;
 	priv->txqs = (void *)dev->data->tx_queues;
 	if (txqs_n != priv->txqs_n) {
 		DRV_LOG(INFO, "port %u Tx queues number update: %u -> %u",
@@ -171,11 +170,8 @@ mlx5_dev_configure_rss_reta(struct rte_eth_dev *dev)
 		return -rte_errno;
 	}
 	for (i = 0, j = 0; i < rxqs_n; i++) {
-		struct mlx5_rxq_data *rxq_data;
-		struct mlx5_rxq_ctrl *rxq_ctrl;
+		struct mlx5_rxq_ctrl *rxq_ctrl = mlx5_rxq_ctrl_get(dev, i);

-		rxq_data = (*priv->rxqs)[i];
-		rxq_ctrl = container_of(rxq_data, struct mlx5_rxq_ctrl, rxq);
 		if (rxq_ctrl && rxq_ctrl->type == MLX5_RXQ_TYPE_STANDARD)
 			rss_queue_arr[j++] = i;
 	}
diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c
index 5435660a2dd..2f30a355258 100644
--- a/drivers/net/mlx5/mlx5_flow.c
+++ b/drivers/net/mlx5/mlx5_flow.c
@@ -1210,10 +1210,11 @@ flow_drv_rxq_flags_set(struct rte_eth_dev *dev,
 		return;
 	for (i = 0; i != ind_tbl->queues_n; ++i) {
 		int idx = ind_tbl->queues[i];
-		struct mlx5_rxq_ctrl *rxq_ctrl =
-			container_of((*priv->rxqs)[idx],
-				     struct mlx5_rxq_ctrl, rxq);
+		struct mlx5_rxq_ctrl *rxq_ctrl = mlx5_rxq_ctrl_get(dev, idx);

+		MLX5_ASSERT(rxq_ctrl != NULL);
+		if (rxq_ctrl == NULL)
+			continue;
 		/*
 		 * To support metadata register copy on Tx loopback,
 		 * this must be always enabled (metadata may arive
@@ -1305,10 +1306,11 @@ flow_drv_rxq_flags_trim(struct rte_eth_dev *dev,
 	MLX5_ASSERT(dev->data->dev_started);
 	for (i = 0; i != ind_tbl->queues_n; ++i) {
 		int idx = ind_tbl->queues[i];
-		struct mlx5_rxq_ctrl *rxq_ctrl =
-			container_of((*priv->rxqs)[idx],
-				     struct mlx5_rxq_ctrl, rxq);
+		struct mlx5_rxq_ctrl *rxq_ctrl = mlx5_rxq_ctrl_get(dev, idx);

+		MLX5_ASSERT(rxq_ctrl != NULL);
+		if (rxq_ctrl == NULL)
+			continue;
 		if (priv->config.dv_flow_en &&
 		    priv->config.dv_xmeta_en != MLX5_XMETA_MODE_LEGACY &&
 		    mlx5_flow_ext_mreg_supported(dev)) {
@@ -1369,18 +1371,16 @@ flow_rxq_flags_clear(struct rte_eth_dev *dev)
 	unsigned int i;

 	for (i = 0; i != priv->rxqs_n; ++i) {
-		struct mlx5_rxq_ctrl *rxq_ctrl;
+		struct mlx5_rxq_priv *rxq = mlx5_rxq_get(dev, i);
 		unsigned int j;

-		if (!(*priv->rxqs)[i])
+		if (rxq == NULL || rxq->ctrl == NULL)
 			continue;
-		rxq_ctrl = container_of((*priv->rxqs)[i],
-					struct mlx5_rxq_ctrl, rxq);
-		rxq_ctrl->flow_mark_n = 0;
-		rxq_ctrl->rxq.mark = 0;
+		rxq->ctrl->flow_mark_n = 0;
+		rxq->ctrl->rxq.mark = 0;
 		for (j = 0; j != MLX5_FLOW_TUNNEL; ++j)
-			rxq_ctrl->flow_tunnels_n[j] = 0;
-		rxq_ctrl->rxq.tunnel = 0;
+			rxq->ctrl->flow_tunnels_n[j] = 0;
+		rxq->ctrl->rxq.tunnel = 0;
 	}
 }

@@ -1394,13 +1394,15 @@ void
 mlx5_flow_rxq_dynf_metadata_set(struct rte_eth_dev *dev)
 {
 	struct mlx5_priv *priv = dev->data->dev_private;
-	struct mlx5_rxq_data *data;
 	unsigned int i;

 	for (i = 0; i != priv->rxqs_n; ++i) {
-		if (!(*priv->rxqs)[i])
+		struct mlx5_rxq_priv *rxq = mlx5_rxq_get(dev, i);
+		struct mlx5_rxq_data *data;
+
+		if (rxq == NULL || rxq->ctrl == NULL)
 			continue;
-		data = (*priv->rxqs)[i];
+		data = &rxq->ctrl->rxq;
 		if (!rte_flow_dynf_metadata_avail()) {
 			data->dynf_meta = 0;
 			data->flow_meta_mask = 0;
@@ -1591,7 +1593,7 @@ mlx5_flow_validate_action_queue(const struct rte_flow_action *action,
 					  RTE_FLOW_ERROR_TYPE_ACTION_CONF,
 					  &queue->index,
 					  "queue index out of range");
-	if (!(*priv->rxqs)[queue->index])
+	if (mlx5_rxq_get(dev, queue->index) == NULL)
 		return rte_flow_error_set(error, EINVAL,
 					  RTE_FLOW_ERROR_TYPE_ACTION_CONF,
 					  &queue->index,
@@ -1622,7 +1624,7 @@ mlx5_flow_validate_action_queue(const struct rte_flow_action *action,
  *   0 on success, a negative errno code on error.
  */
 static int
-mlx5_validate_rss_queues(const struct rte_eth_dev *dev,
+mlx5_validate_rss_queues(struct rte_eth_dev *dev,
 			 const uint16_t *queues, uint32_t queues_n,
 			 const char **error, uint32_t *queue_idx)
 {
@@ -1631,20 +1633,19 @@ mlx5_validate_rss_queues(const struct rte_eth_dev *dev,
 	uint32_t i;

 	for (i = 0; i != queues_n; ++i) {
-		struct mlx5_rxq_ctrl *rxq_ctrl;
+		struct mlx5_rxq_ctrl *rxq_ctrl = mlx5_rxq_ctrl_get(dev,
+								   queues[i]);

 		if (queues[i] >= priv->rxqs_n) {
 			*error = "queue index out of range";
 			*queue_idx = i;
 			return -EINVAL;
 		}
-		if (!(*priv->rxqs)[queues[i]]) {
+		if (rxq_ctrl == NULL) {
 			*error = "queue is not configured";
 			*queue_idx = i;
 			return -EINVAL;
 		}
-		rxq_ctrl = container_of((*priv->rxqs)[queues[i]],
-					struct mlx5_rxq_ctrl, rxq);
 		if (i == 0)
 			rxq_type = rxq_ctrl->type;
 		if (rxq_type != rxq_ctrl->type) {
diff --git a/drivers/net/mlx5/mlx5_rss.c b/drivers/net/mlx5/mlx5_rss.c
index a04e22398db..75af05b7b02 100644
--- a/drivers/net/mlx5/mlx5_rss.c
+++ b/drivers/net/mlx5/mlx5_rss.c
@@ -65,9 +65,11 @@ mlx5_rss_hash_update(struct rte_eth_dev *dev,
 	priv->rss_conf.rss_hf = rss_conf->rss_hf;
 	/* Enable the RSS hash in all Rx queues. */
 	for (i = 0, idx = 0; idx != priv->rxqs_n; ++i) {
-		if (!(*priv->rxqs)[i])
+		struct mlx5_rxq_priv *rxq = mlx5_rxq_get(dev, i);
+
+		if (rxq == NULL || rxq->ctrl == NULL)
 			continue;
-		(*priv->rxqs)[i]->rss_hash = !!rss_conf->rss_hf &&
+		rxq->ctrl->rxq.rss_hash = !!rss_conf->rss_hf &&
 			!!(dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS);
 		++idx;
 	}
diff --git a/drivers/net/mlx5/mlx5_rx.c b/drivers/net/mlx5/mlx5_rx.c
index d41905a2a04..1ffa1b95b88 100644
--- a/drivers/net/mlx5/mlx5_rx.c
+++ b/drivers/net/mlx5/mlx5_rx.c
@@ -148,10 +148,8 @@ void
 mlx5_rxq_info_get(struct rte_eth_dev *dev, uint16_t rx_queue_id,
 		  struct rte_eth_rxq_info *qinfo)
 {
-	struct mlx5_priv *priv = dev->data->dev_private;
-	struct mlx5_rxq_data *rxq = (*priv->rxqs)[rx_queue_id];
-	struct mlx5_rxq_ctrl *rxq_ctrl =
-		container_of(rxq, struct mlx5_rxq_ctrl, rxq);
+	struct mlx5_rxq_ctrl *rxq_ctrl = mlx5_rxq_ctrl_get(dev, rx_queue_id);
+	struct mlx5_rxq_data *rxq = mlx5_rxq_data_get(dev, rx_queue_id);

 	if (!rxq)
 		return;
@@ -162,7 +160,10 @@ mlx5_rxq_info_get(struct rte_eth_dev *dev, uint16_t rx_queue_id,
 	qinfo->conf.rx_thresh.wthresh = 0;
 	qinfo->conf.rx_free_thresh = rxq->rq_repl_thresh;
 	qinfo->conf.rx_drop_en = 1;
-	qinfo->conf.rx_deferred_start = rxq_ctrl ? 0 : 1;
+	if (rxq_ctrl == NULL || rxq_ctrl->obj == NULL)
+		qinfo->conf.rx_deferred_start = 0;
+	else
+		qinfo->conf.rx_deferred_start = 1;
 	qinfo->conf.offloads = dev->data->dev_conf.rxmode.offloads;
 	qinfo->scattered_rx = dev->data->scattered_rx;
 	qinfo->nb_desc = mlx5_rxq_mprq_enabled(rxq) ?
@@ -191,10 +192,8 @@ mlx5_rx_burst_mode_get(struct rte_eth_dev *dev,
 		       struct rte_eth_burst_mode *mode)
 {
 	eth_rx_burst_t pkt_burst = dev->rx_pkt_burst;
-	struct mlx5_priv *priv = dev->data->dev_private;
-	struct mlx5_rxq_data *rxq;
+	struct mlx5_rxq_priv *rxq = mlx5_rxq_get(dev, rx_queue_id);

-	rxq = (*priv->rxqs)[rx_queue_id];
 	if (!rxq) {
 		rte_errno = EINVAL;
 		return -rte_errno;
diff --git a/drivers/net/mlx5/mlx5_rx.h b/drivers/net/mlx5/mlx5_rx.h
index 337dcca59fb..413e36f6d8d 100644
--- a/drivers/net/mlx5/mlx5_rx.h
+++ b/drivers/net/mlx5/mlx5_rx.h
@@ -603,14 +603,13 @@ mlx5_mprq_enabled(struct rte_eth_dev *dev)
 		return 0;
 	/* All the configured queues should be enabled. */
 	for (i = 0; i < priv->rxqs_n; ++i) {
-		struct mlx5_rxq_data *rxq = (*priv->rxqs)[i];
-		struct mlx5_rxq_ctrl *rxq_ctrl = container_of
-			(rxq, struct mlx5_rxq_ctrl, rxq);
+		struct mlx5_rxq_ctrl *rxq_ctrl = mlx5_rxq_ctrl_get(dev, i);

-		if (rxq == NULL || rxq_ctrl->type != MLX5_RXQ_TYPE_STANDARD)
+		if (rxq_ctrl == NULL ||
+		    rxq_ctrl->type != MLX5_RXQ_TYPE_STANDARD)
 			continue;
 		n_ibv++;
-		if (mlx5_rxq_mprq_enabled(rxq))
+		if (mlx5_rxq_mprq_enabled(&rxq_ctrl->rxq))
 			++n;
 	}
 	/* Multi-Packet RQ can't be partially configured. */
diff --git a/drivers/net/mlx5/mlx5_rxq.c b/drivers/net/mlx5/mlx5_rxq.c
index 2850a220399..f3fc618ed2c 100644
--- a/drivers/net/mlx5/mlx5_rxq.c
+++ b/drivers/net/mlx5/mlx5_rxq.c
@@ -748,7 +748,7 @@ mlx5_rx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
 	}
 	DRV_LOG(DEBUG, "port %u adding Rx queue %u to list",
 		dev->data->port_id, idx);
-	(*priv->rxqs)[idx] = &rxq_ctrl->rxq;
+	dev->data->rx_queues[idx] = &rxq_ctrl->rxq;
 	return 0;
 }

@@ -830,7 +830,7 @@ mlx5_rx_hairpin_queue_setup(struct rte_eth_dev *dev, uint16_t idx,
 	}
 	DRV_LOG(DEBUG, "port %u adding hairpin Rx queue %u to list",
 		dev->data->port_id, idx);
-	(*priv->rxqs)[idx] = &rxq_ctrl->rxq;
+	dev->data->rx_queues[idx] = &rxq_ctrl->rxq;
 	return 0;
 }

@@ -1163,7 +1163,7 @@ mlx5_mprq_free_mp(struct rte_eth_dev *dev)
 	rte_mempool_free(mp);
 	/* Unset mempool for each Rx queue. */
 	for (i = 0; i != priv->rxqs_n; ++i) {
-		struct mlx5_rxq_data *rxq = (*priv->rxqs)[i];
+		struct mlx5_rxq_data *rxq = mlx5_rxq_data_get(dev, i);

 		if (rxq == NULL)
 			continue;
@@ -1204,12 +1204,13 @@ mlx5_mprq_alloc_mp(struct rte_eth_dev *dev)
 		return 0;
 	/* Count the total number of descriptors configured. */
 	for (i = 0; i != priv->rxqs_n; ++i) {
-		struct mlx5_rxq_data *rxq = (*priv->rxqs)[i];
-		struct mlx5_rxq_ctrl *rxq_ctrl = container_of
-			(rxq, struct mlx5_rxq_ctrl, rxq);
+		struct mlx5_rxq_ctrl *rxq_ctrl = mlx5_rxq_ctrl_get(dev, i);
+		struct mlx5_rxq_data *rxq;

-		if (rxq == NULL || rxq_ctrl->type != MLX5_RXQ_TYPE_STANDARD)
+		if (rxq_ctrl == NULL ||
+		    rxq_ctrl->type != MLX5_RXQ_TYPE_STANDARD)
 			continue;
+		rxq = &rxq_ctrl->rxq;
 		n_ibv++;
 		desc += 1 << rxq->elts_n;
 		/* Get the max number of strides. */
@@ -1292,13 +1293,12 @@ mlx5_mprq_alloc_mp(struct rte_eth_dev *dev)
 exit:
 	/* Set mempool for each Rx queue. */
 	for (i = 0; i != priv->rxqs_n; ++i) {
-		struct mlx5_rxq_data *rxq = (*priv->rxqs)[i];
-		struct mlx5_rxq_ctrl *rxq_ctrl = container_of
-			(rxq, struct mlx5_rxq_ctrl, rxq);
+		struct mlx5_rxq_ctrl *rxq_ctrl = mlx5_rxq_ctrl_get(dev, i);

-		if (rxq == NULL || rxq_ctrl->type != MLX5_RXQ_TYPE_STANDARD)
+		if (rxq_ctrl == NULL ||
+		    rxq_ctrl->type != MLX5_RXQ_TYPE_STANDARD)
 			continue;
-		rxq->mprq_mp = mp;
+		rxq_ctrl->rxq.mprq_mp = mp;
 	}
 	DRV_LOG(INFO, "port %u Multi-Packet RQ is configured",
 		dev->data->port_id);
@@ -1777,8 +1777,7 @@ mlx5_rxq_get(struct rte_eth_dev *dev, uint16_t idx)
 {
 	struct mlx5_priv *priv = dev->data->dev_private;

-	if (priv->rxq_privs == NULL)
-		return NULL;
+	MLX5_ASSERT(priv->rxq_privs != NULL);
 	return (*priv->rxq_privs)[idx];
 }

@@ -1862,7 +1861,7 @@ mlx5_rxq_release(struct rte_eth_dev *dev, uint16_t idx)
 		LIST_REMOVE(rxq, owner_entry);
 		LIST_REMOVE(rxq_ctrl, next);
 		mlx5_free(rxq_ctrl);
-		(*priv->rxqs)[idx] = NULL;
+		dev->data->rx_queues[idx] = NULL;
 		mlx5_free(rxq);
 		(*priv->rxq_privs)[idx] = NULL;
 	}
@@ -1908,14 +1907,10 @@ enum mlx5_rxq_type
 mlx5_rxq_get_type(struct rte_eth_dev *dev, uint16_t idx)
 {
 	struct mlx5_priv *priv = dev->data->dev_private;
-	struct mlx5_rxq_ctrl *rxq_ctrl = NULL;
+	struct mlx5_rxq_ctrl *rxq_ctrl = mlx5_rxq_ctrl_get(dev, idx);

-	if (idx < priv->rxqs_n && (*priv->rxqs)[idx]) {
-		rxq_ctrl = container_of((*priv->rxqs)[idx],
-					struct mlx5_rxq_ctrl,
-					rxq);
+	if (idx < priv->rxqs_n && rxq_ctrl != NULL)
 		return rxq_ctrl->type;
-	}
 	return MLX5_RXQ_TYPE_UNDEFINED;
 }

@@ -2682,13 +2677,13 @@ mlx5_rxq_timestamp_set(struct rte_eth_dev *dev)
 {
 	struct mlx5_priv *priv = dev->data->dev_private;
 	struct mlx5_dev_ctx_shared *sh = priv->sh;
-	struct mlx5_rxq_data *data;
 	unsigned int i;

 	for (i = 0; i != priv->rxqs_n; ++i) {
-		if (!(*priv->rxqs)[i])
+		struct mlx5_rxq_data *data = mlx5_rxq_data_get(dev, i);
+
+		if (data == NULL)
 			continue;
-		data = (*priv->rxqs)[i];
 		data->sh = sh;
 		data->rt_timestamp = priv->config.rt_timestamp;
 	}
diff --git a/drivers/net/mlx5/mlx5_rxtx_vec.c b/drivers/net/mlx5/mlx5_rxtx_vec.c
index 511681841ca..6212ce8247d 100644
--- a/drivers/net/mlx5/mlx5_rxtx_vec.c
+++ b/drivers/net/mlx5/mlx5_rxtx_vec.c
@@ -578,11 +578,11 @@ mlx5_check_vec_rx_support(struct rte_eth_dev *dev)
 		return -ENOTSUP;
 	/* All the configured queues should support. */
 	for (i = 0; i < priv->rxqs_n; ++i) {
-		struct mlx5_rxq_data *rxq = (*priv->rxqs)[i];
+		struct mlx5_rxq_data *rxq_data = mlx5_rxq_data_get(dev, i);

-		if (!rxq)
+		if (!rxq_data)
 			continue;
-		if (mlx5_rxq_check_vec_support(rxq) < 0)
+		if (mlx5_rxq_check_vec_support(rxq_data) < 0)
 			break;
 	}
 	if (i != priv->rxqs_n)
diff --git a/drivers/net/mlx5/mlx5_stats.c b/drivers/net/mlx5/mlx5_stats.c
index ae2f5668a74..732775954ad 100644
--- a/drivers/net/mlx5/mlx5_stats.c
+++ b/drivers/net/mlx5/mlx5_stats.c
@@ -107,7 +107,7 @@ mlx5_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats)
 	memset(&tmp, 0, sizeof(tmp));
 	/* Add software counters. */
 	for (i = 0; (i != priv->rxqs_n); ++i) {
-		struct mlx5_rxq_data *rxq = (*priv->rxqs)[i];
+		struct mlx5_rxq_data *rxq = mlx5_rxq_data_get(dev, i);

 		if (rxq == NULL)
 			continue;
@@ -181,10 +181,11 @@ mlx5_stats_reset(struct rte_eth_dev *dev)
 	unsigned int i;

 	for (i = 0; (i != priv->rxqs_n); ++i) {
-		if ((*priv->rxqs)[i] == NULL)
+		struct mlx5_rxq_data *rxq_data = mlx5_rxq_data_get(dev, i);
+
+		if (rxq_data == NULL)
 			continue;
-		memset(&(*priv->rxqs)[i]->stats, 0,
-		       sizeof(struct mlx5_rxq_stats));
+		memset(&rxq_data->stats, 0, sizeof(struct mlx5_rxq_stats));
 	}
 	for (i = 0; (i != priv->txqs_n); ++i) {
 		if ((*priv->txqs)[i] == NULL)
diff --git a/drivers/net/mlx5/mlx5_trigger.c b/drivers/net/mlx5/mlx5_trigger.c
index 2cf62a9780d..72475e4b5b5 100644
--- a/drivers/net/mlx5/mlx5_trigger.c
+++ b/drivers/net/mlx5/mlx5_trigger.c
@@ -227,7 +227,7 @@ mlx5_rxq_start(struct rte_eth_dev *dev)
 		if (!rxq_ctrl->obj) {
 			DRV_LOG(ERR,
 				"Port %u Rx queue %u can't allocate resources.",
-				dev->data->port_id, (*priv->rxqs)[i]->idx);
+				dev->data->port_id, i);
 			rte_errno = ENOMEM;
 			goto error;
 		}
-- 
2.33.0