From mboxrd@z Thu Jan 1 00:00:00 1970
From: Xueming Li <xuemingl@nvidia.com>
To: dev@dpdk.org
Cc: Lior Margalit, Matan Azrad, Viacheslav Ovsiienko
Date: Sun, 26 Sep 2021 19:18:59 +0800
Message-ID: <20210926111904.237736-7-xuemingl@nvidia.com>
X-Mailer: git-send-email 2.33.0
In-Reply-To: <20210926111904.237736-1-xuemingl@nvidia.com>
References: <20210926111904.237736-1-xuemingl@nvidia.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain
Subject: [dpdk-dev] [PATCH 06/11] net/mlx5: move Rx queue reference count
List-Id: DPDK patches and discussions

Sender: "dev" Rx queue reference count is counter of RQ, used on RQ table. To prepare for shared Rx queue, move it from rxq_ctrl to Rx queue private data. Signed-off-by: Xueming Li --- drivers/net/mlx5/mlx5_rx.h | 8 +- drivers/net/mlx5/mlx5_rxq.c | 173 +++++++++++++++++++++----------- drivers/net/mlx5/mlx5_trigger.c | 57 +++++------ 3 files changed, 144 insertions(+), 94 deletions(-) diff --git a/drivers/net/mlx5/mlx5_rx.h b/drivers/net/mlx5/mlx5_rx.h index db6252e8e86..fe19414c130 100644 --- a/drivers/net/mlx5/mlx5_rx.h +++ b/drivers/net/mlx5/mlx5_rx.h @@ -160,7 +160,6 @@ enum mlx5_rxq_type { struct mlx5_rxq_ctrl { struct mlx5_rxq_data rxq; /* Data path structure. */ LIST_ENTRY(mlx5_rxq_ctrl) next; /* Pointer to the next element. */ - uint32_t refcnt; /* Reference counter. */ LIST_HEAD(priv, mlx5_rxq_priv) owners; /* Owner rxq list. */ struct mlx5_rxq_obj *obj; /* Verbs/DevX elements. */ struct mlx5_dev_ctx_shared *sh; /* Shared context. */ @@ -179,6 +178,7 @@ struct mlx5_rxq_ctrl { /* RX queue private data. */ struct mlx5_rxq_priv { uint16_t idx; /* Queue index. */ + uint32_t refcnt; /* Reference counter. */ struct mlx5_rxq_ctrl *ctrl; /* Shared Rx Queue. */ LIST_ENTRY(mlx5_rxq_priv) owner_entry; /* Entry in shared rxq_ctrl. */ struct mlx5_priv *priv; /* Back pointer to private data. */ @@ -216,7 +216,11 @@ struct mlx5_rxq_ctrl *mlx5_rxq_new(struct rte_eth_dev *dev, struct mlx5_rxq_ctrl *mlx5_rxq_hairpin_new (struct rte_eth_dev *dev, struct mlx5_rxq_priv *rxq, uint16_t desc, const struct rte_eth_hairpin_conf *hairpin_conf); -struct mlx5_rxq_ctrl *mlx5_rxq_get(struct rte_eth_dev *dev, uint16_t idx); +struct mlx5_rxq_priv *mlx5_rxq_ref(struct rte_eth_dev *dev, uint16_t idx); +uint32_t mlx5_rxq_deref(struct rte_eth_dev *dev, uint16_t idx); +struct mlx5_rxq_priv *mlx5_rxq_get(struct rte_eth_dev *dev, uint16_t idx); +struct mlx5_rxq_ctrl *mlx5_rxq_ctrl_get(struct rte_eth_dev *dev, uint16_t idx); +struct mlx5_rxq_data *mlx5_rxq_data_get(struct rte_eth_dev *dev, uint16_t idx); int mlx5_rxq_release(struct rte_eth_dev *dev, uint16_t idx); int mlx5_rxq_verify(struct rte_eth_dev *dev); int rxq_alloc_elts(struct mlx5_rxq_ctrl *rxq_ctrl); diff --git a/drivers/net/mlx5/mlx5_rxq.c b/drivers/net/mlx5/mlx5_rxq.c index 70e73690aa7..7f28646f55c 100644 --- a/drivers/net/mlx5/mlx5_rxq.c +++ b/drivers/net/mlx5/mlx5_rxq.c @@ -386,15 +386,13 @@ mlx5_get_rx_port_offloads(void) static int mlx5_rxq_releasable(struct rte_eth_dev *dev, uint16_t idx) { - struct mlx5_priv *priv = dev->data->dev_private; - struct mlx5_rxq_ctrl *rxq_ctrl; + struct mlx5_rxq_priv *rxq = mlx5_rxq_get(dev, idx); - if (!(*priv->rxqs)[idx]) { + if (rxq == NULL) { rte_errno = EINVAL; return -rte_errno; } - rxq_ctrl = container_of((*priv->rxqs)[idx], struct mlx5_rxq_ctrl, rxq); - return (__atomic_load_n(&rxq_ctrl->refcnt, __ATOMIC_RELAXED) == 1); + return (__atomic_load_n(&rxq->refcnt, __ATOMIC_RELAXED) == 1); } /* Fetches and drops all SW-owned and error CQEs to synchronize CQ. */ @@ -874,8 +872,8 @@ mlx5_rx_intr_vec_enable(struct rte_eth_dev *dev) intr_handle->type = RTE_INTR_HANDLE_EXT; for (i = 0; i != n; ++i) { /* This rxq obj must not be released in this function. */ - struct mlx5_rxq_ctrl *rxq_ctrl = mlx5_rxq_get(dev, i); - struct mlx5_rxq_obj *rxq_obj = rxq_ctrl ? rxq_ctrl->obj : NULL; + struct mlx5_rxq_priv *rxq = mlx5_rxq_get(dev, i); + struct mlx5_rxq_obj *rxq_obj = rxq ? rxq->ctrl->obj : NULL; int rc; /* Skip queues that cannot request interrupts. 
@@ -885,11 +883,9 @@ mlx5_rx_intr_vec_enable(struct rte_eth_dev *dev)
 			intr_handle->intr_vec[i] =
 				RTE_INTR_VEC_RXTX_OFFSET +
 				RTE_MAX_RXTX_INTR_VEC_ID;
-			/* Decrease the rxq_ctrl's refcnt */
-			if (rxq_ctrl)
-				mlx5_rxq_release(dev, i);
 			continue;
 		}
+		mlx5_rxq_ref(dev, i);
 		if (count >= RTE_MAX_RXTX_INTR_VEC_ID) {
 			DRV_LOG(ERR,
 				"port %u too many Rx queues for interrupt"
@@ -949,7 +945,7 @@ mlx5_rx_intr_vec_disable(struct rte_eth_dev *dev)
 		 * Need to access directly the queue to release the reference
 		 * kept in mlx5_rx_intr_vec_enable().
 		 */
-		mlx5_rxq_release(dev, i);
+		mlx5_rxq_deref(dev, i);
 	}
 free:
 	rte_intr_free_epoll_fd(intr_handle);
@@ -998,19 +994,14 @@ mlx5_arm_cq(struct mlx5_rxq_data *rxq, int sq_n_rxq)
 int
 mlx5_rx_intr_enable(struct rte_eth_dev *dev, uint16_t rx_queue_id)
 {
-	struct mlx5_rxq_ctrl *rxq_ctrl;
-
-	rxq_ctrl = mlx5_rxq_get(dev, rx_queue_id);
-	if (!rxq_ctrl)
+	struct mlx5_rxq_priv *rxq = mlx5_rxq_get(dev, rx_queue_id);
+	if (!rxq)
 		goto error;
-	if (rxq_ctrl->irq) {
-		if (!rxq_ctrl->obj) {
-			mlx5_rxq_release(dev, rx_queue_id);
+	if (rxq->ctrl->irq) {
+		if (!rxq->ctrl->obj)
 			goto error;
-		}
-		mlx5_arm_cq(&rxq_ctrl->rxq, rxq_ctrl->rxq.cq_arm_sn);
+		mlx5_arm_cq(&rxq->ctrl->rxq, rxq->ctrl->rxq.cq_arm_sn);
 	}
-	mlx5_rxq_release(dev, rx_queue_id);
 	return 0;
 error:
 	rte_errno = EINVAL;
@@ -1032,23 +1023,21 @@ int
 mlx5_rx_intr_disable(struct rte_eth_dev *dev, uint16_t rx_queue_id)
 {
 	struct mlx5_priv *priv = dev->data->dev_private;
-	struct mlx5_rxq_ctrl *rxq_ctrl;
+	struct mlx5_rxq_priv *rxq = mlx5_rxq_get(dev, rx_queue_id);
 	int ret = 0;
 
-	rxq_ctrl = mlx5_rxq_get(dev, rx_queue_id);
-	if (!rxq_ctrl) {
+	if (!rxq) {
 		rte_errno = EINVAL;
 		return -rte_errno;
 	}
-	if (!rxq_ctrl->obj)
+	if (!rxq->ctrl->obj)
 		goto error;
-	if (rxq_ctrl->irq) {
-		ret = priv->obj_ops.rxq_event_get(rxq_ctrl->obj);
+	if (rxq->ctrl->irq) {
+		ret = priv->obj_ops.rxq_event_get(rxq->ctrl->obj);
 		if (ret < 0)
 			goto error;
-		rxq_ctrl->rxq.cq_arm_sn++;
+		rxq->ctrl->rxq.cq_arm_sn++;
 	}
-	mlx5_rxq_release(dev, rx_queue_id);
 	return 0;
 error:
 	/**
@@ -1059,12 +1048,9 @@ mlx5_rx_intr_disable(struct rte_eth_dev *dev, uint16_t rx_queue_id)
 		rte_errno = errno;
 	else
 		rte_errno = EINVAL;
-	ret = rte_errno; /* Save rte_errno before cleanup. */
-	mlx5_rxq_release(dev, rx_queue_id);
-	if (ret != EAGAIN)
+	if (rte_errno != EAGAIN)
 		DRV_LOG(WARNING, "port %u unable to disable interrupt on Rx queue %d",
 			dev->data->port_id, rx_queue_id);
-	rte_errno = ret; /* Restore rte_errno. */
 	return -rte_errno;
 }
 
@@ -1611,7 +1597,7 @@ mlx5_rxq_new(struct rte_eth_dev *dev, struct mlx5_rxq_priv *rxq,
 	tmpl->rxq.uar_lock_cq = &priv->sh->uar_lock_cq;
 #endif
 	tmpl->rxq.idx = idx;
-	__atomic_fetch_add(&tmpl->refcnt, 1, __ATOMIC_RELAXED);
+	mlx5_rxq_ref(dev, idx);
 	LIST_INSERT_HEAD(&priv->rxqsctrl, tmpl, next);
 	return tmpl;
 error:
@@ -1665,11 +1651,53 @@ mlx5_rxq_hairpin_new(struct rte_eth_dev *dev, struct mlx5_rxq_priv *rxq,
 	tmpl->rxq.mr_ctrl.cache_bh = (struct mlx5_mr_btree) { 0 };
 	tmpl->hairpin_conf = *hairpin_conf;
 	tmpl->rxq.idx = idx;
-	__atomic_fetch_add(&tmpl->refcnt, 1, __ATOMIC_RELAXED);
+	mlx5_rxq_ref(dev, idx);
 	LIST_INSERT_HEAD(&priv->rxqsctrl, tmpl, next);
 	return tmpl;
 }
 
+/**
+ * Increase Rx queue reference count.
+ *
+ * @param dev
+ *   Pointer to Ethernet device.
+ * @param idx
+ *   RX queue index.
+ *
+ * @return
+ *   A pointer to the queue if it exists, NULL otherwise.
+ */
+inline struct mlx5_rxq_priv *
+mlx5_rxq_ref(struct rte_eth_dev *dev, uint16_t idx)
+{
+	struct mlx5_rxq_priv *rxq = mlx5_rxq_get(dev, idx);
+
+	if (rxq != NULL)
+		__atomic_fetch_add(&rxq->refcnt, 1, __ATOMIC_RELAXED);
+	return rxq;
+}
+
+/**
+ * Dereference a Rx queue.
+ *
+ * @param dev
+ *   Pointer to Ethernet device.
+ * @param idx
+ *   RX queue index.
+ *
+ * @return
+ *   Updated reference count.
+ */
+inline uint32_t
+mlx5_rxq_deref(struct rte_eth_dev *dev, uint16_t idx)
+{
+	struct mlx5_rxq_priv *rxq = mlx5_rxq_get(dev, idx);
+
+	if (rxq == NULL)
+		return 0;
+	return __atomic_sub_fetch(&rxq->refcnt, 1, __ATOMIC_RELAXED);
+}
+
 /**
  * Get a Rx queue.
  *
@@ -1681,18 +1709,52 @@ mlx5_rxq_hairpin_new(struct rte_eth_dev *dev, struct mlx5_rxq_priv *rxq,
  * @return
  *   A pointer to the queue if it exists, NULL otherwise.
  */
-struct mlx5_rxq_ctrl *
+inline struct mlx5_rxq_priv *
 mlx5_rxq_get(struct rte_eth_dev *dev, uint16_t idx)
 {
 	struct mlx5_priv *priv = dev->data->dev_private;
-	struct mlx5_rxq_data *rxq_data = (*priv->rxqs)[idx];
-	struct mlx5_rxq_ctrl *rxq_ctrl = NULL;
 
-	if (rxq_data) {
-		rxq_ctrl = container_of(rxq_data, struct mlx5_rxq_ctrl, rxq);
-		__atomic_fetch_add(&rxq_ctrl->refcnt, 1, __ATOMIC_RELAXED);
-	}
-	return rxq_ctrl;
+	if (priv->rxq_privs == NULL)
+		return NULL;
+	return (*priv->rxq_privs)[idx];
+}
+
+/**
+ * Get Rx queue shareable control.
+ *
+ * @param dev
+ *   Pointer to Ethernet device.
+ * @param idx
+ *   RX queue index.
+ *
+ * @return
+ *   A pointer to the queue control if it exists, NULL otherwise.
+ */
+inline struct mlx5_rxq_ctrl *
+mlx5_rxq_ctrl_get(struct rte_eth_dev *dev, uint16_t idx)
+{
+	struct mlx5_rxq_priv *rxq = mlx5_rxq_get(dev, idx);
+
+	return rxq == NULL ? NULL : rxq->ctrl;
+}
+
+/**
+ * Get Rx queue shareable data.
+ *
+ * @param dev
+ *   Pointer to Ethernet device.
+ * @param idx
+ *   RX queue index.
+ *
+ * @return
+ *   A pointer to the queue data if it exists, NULL otherwise.
+ */
+inline struct mlx5_rxq_data *
+mlx5_rxq_data_get(struct rte_eth_dev *dev, uint16_t idx)
+{
+	struct mlx5_rxq_priv *rxq = mlx5_rxq_get(dev, idx);
+
+	return rxq == NULL ? NULL : &rxq->ctrl->rxq;
 }
 
 /**
@@ -1710,13 +1772,12 @@ int
 mlx5_rxq_release(struct rte_eth_dev *dev, uint16_t idx)
 {
 	struct mlx5_priv *priv = dev->data->dev_private;
-	struct mlx5_rxq_ctrl *rxq_ctrl;
-	struct mlx5_rxq_priv *rxq = (*priv->rxq_privs)[idx];
+	struct mlx5_rxq_priv *rxq = mlx5_rxq_get(dev, idx);
+	struct mlx5_rxq_ctrl *rxq_ctrl = rxq->ctrl;
 
 	if (priv->rxqs == NULL || (*priv->rxqs)[idx] == NULL)
 		return 0;
-	rxq_ctrl = container_of((*priv->rxqs)[idx], struct mlx5_rxq_ctrl, rxq);
-	if (__atomic_sub_fetch(&rxq_ctrl->refcnt, 1, __ATOMIC_RELAXED) > 1)
+	if (mlx5_rxq_deref(dev, idx) > 1)
 		return 1;
 	if (rxq_ctrl->obj) {
 		priv->obj_ops.rxq_obj_release(rxq_ctrl->obj);
@@ -1728,7 +1789,7 @@ mlx5_rxq_release(struct rte_eth_dev *dev, uint16_t idx)
 		rxq_free_elts(rxq_ctrl);
 		dev->data->rx_queue_state[idx] = RTE_ETH_QUEUE_STATE_STOPPED;
 	}
-	if (!__atomic_load_n(&rxq_ctrl->refcnt, __ATOMIC_RELAXED)) {
+	if (!__atomic_load_n(&rxq->refcnt, __ATOMIC_RELAXED)) {
 		if (rxq_ctrl->type == MLX5_RXQ_TYPE_STANDARD) {
 			mlx5_mr_btree_free(&rxq_ctrl->rxq.mr_ctrl.cache_bh);
 			mlx5_mprq_free_mp(dev, rxq_ctrl);
@@ -1908,7 +1969,7 @@ mlx5_ind_table_obj_release(struct rte_eth_dev *dev,
 		return 1;
 	priv->obj_ops.ind_table_destroy(ind_tbl);
 	for (i = 0; i != ind_tbl->queues_n; ++i)
-		claim_nonzero(mlx5_rxq_release(dev, ind_tbl->queues[i]));
+		claim_nonzero(mlx5_rxq_deref(dev, ind_tbl->queues[i]));
 	mlx5_free(ind_tbl);
 	return 0;
 }
@@ -1965,7 +2026,7 @@ mlx5_ind_table_obj_setup(struct rte_eth_dev *dev,
 			       log2above(priv->config.ind_table_max_size);
 
 	for (i = 0; i != queues_n; ++i) {
-		if (!mlx5_rxq_get(dev, queues[i])) {
+		if (mlx5_rxq_ref(dev, queues[i]) == NULL) {
 			ret = -rte_errno;
 			goto error;
 		}
@@ -1978,7 +2039,7 @@ mlx5_ind_table_obj_setup(struct rte_eth_dev *dev,
 error:
 	err = rte_errno;
 	for (j = 0; j < i; j++)
-		mlx5_rxq_release(dev, ind_tbl->queues[j]);
+		mlx5_rxq_deref(dev, ind_tbl->queues[j]);
 	rte_errno = err;
 	DRV_LOG(DEBUG, "Port %u cannot setup indirection table.",
 		dev->data->port_id);
@@ -2074,7 +2135,7 @@ mlx5_ind_table_obj_modify(struct rte_eth_dev *dev,
 			  bool standalone)
 {
 	struct mlx5_priv *priv = dev->data->dev_private;
-	unsigned int i, j;
+	unsigned int i;
 	int ret = 0, err;
 	const unsigned int n = rte_is_power_of_2(queues_n) ?
 			       log2above(queues_n) :
@@ -2094,15 +2155,11 @@ mlx5_ind_table_obj_modify(struct rte_eth_dev *dev,
 	ret = priv->obj_ops.ind_table_modify(dev, n, queues, queues_n, ind_tbl);
 	if (ret)
 		goto error;
-	for (j = 0; j < ind_tbl->queues_n; j++)
-		mlx5_rxq_release(dev, ind_tbl->queues[j]);
 	ind_tbl->queues_n = queues_n;
 	ind_tbl->queues = queues;
 	return 0;
 error:
 	err = rte_errno;
-	for (j = 0; j < i; j++)
-		mlx5_rxq_release(dev, queues[j]);
 	rte_errno = err;
 	DRV_LOG(DEBUG, "Port %u cannot setup indirection table.",
 		dev->data->port_id);
@@ -2135,7 +2192,7 @@ mlx5_ind_table_obj_attach(struct rte_eth_dev *dev,
 		return ret;
 	}
 	for (i = 0; i < ind_tbl->queues_n; i++)
-		mlx5_rxq_get(dev, ind_tbl->queues[i]);
+		mlx5_rxq_ref(dev, ind_tbl->queues[i]);
 	return 0;
 }
 
@@ -2172,7 +2229,7 @@ mlx5_ind_table_obj_detach(struct rte_eth_dev *dev,
 		return ret;
 	}
 	for (i = 0; i < ind_tbl->queues_n; i++)
-		mlx5_rxq_release(dev, ind_tbl->queues[i]);
+		mlx5_rxq_deref(dev, ind_tbl->queues[i]);
 	return ret;
 }
 
diff --git a/drivers/net/mlx5/mlx5_trigger.c b/drivers/net/mlx5/mlx5_trigger.c
index 0753dbad053..a49254c96f6 100644
--- a/drivers/net/mlx5/mlx5_trigger.c
+++ b/drivers/net/mlx5/mlx5_trigger.c
@@ -143,10 +143,12 @@ mlx5_rxq_start(struct rte_eth_dev *dev)
 	DRV_LOG(DEBUG, "Port %u device_attr.max_sge is %d.",
 		dev->data->port_id, priv->sh->device_attr.max_sge);
 	for (i = 0; i != priv->rxqs_n; ++i) {
-		struct mlx5_rxq_ctrl *rxq_ctrl = mlx5_rxq_get(dev, i);
+		struct mlx5_rxq_priv *rxq = mlx5_rxq_ref(dev, i);
+		struct mlx5_rxq_ctrl *rxq_ctrl;
 
-		if (!rxq_ctrl)
+		if (rxq == NULL)
 			continue;
+		rxq_ctrl = rxq->ctrl;
 		if (rxq_ctrl->type == MLX5_RXQ_TYPE_STANDARD) {
 			if (mlx5_rxq_mprq_enabled(&rxq_ctrl->rxq)) {
 				/* Allocate/reuse/resize mempool for MPRQ. */
@@ -215,6 +217,7 @@ mlx5_hairpin_auto_bind(struct rte_eth_dev *dev)
 	struct mlx5_devx_modify_sq_attr sq_attr = { 0 };
 	struct mlx5_devx_modify_rq_attr rq_attr = { 0 };
 	struct mlx5_txq_ctrl *txq_ctrl;
+	struct mlx5_rxq_priv *rxq;
 	struct mlx5_rxq_ctrl *rxq_ctrl;
 	struct mlx5_devx_obj *sq;
 	struct mlx5_devx_obj *rq;
@@ -259,9 +262,8 @@ mlx5_hairpin_auto_bind(struct rte_eth_dev *dev)
 			return -rte_errno;
 		}
 		sq = txq_ctrl->obj->sq;
-		rxq_ctrl = mlx5_rxq_get(dev,
-					txq_ctrl->hairpin_conf.peers[0].queue);
-		if (!rxq_ctrl) {
+		rxq = mlx5_rxq_get(dev, txq_ctrl->hairpin_conf.peers[0].queue);
+		if (rxq == NULL) {
 			mlx5_txq_release(dev, i);
 			rte_errno = EINVAL;
 			DRV_LOG(ERR, "port %u no rxq object found: %d",
@@ -269,6 +271,7 @@ mlx5_hairpin_auto_bind(struct rte_eth_dev *dev)
 				txq_ctrl->hairpin_conf.peers[0].queue);
 			return -rte_errno;
 		}
+		rxq_ctrl = rxq->ctrl;
 		if (rxq_ctrl->type != MLX5_RXQ_TYPE_HAIRPIN ||
 		    rxq_ctrl->hairpin_conf.peers[0].queue != i) {
 			rte_errno = ENOMEM;
@@ -303,12 +306,10 @@ mlx5_hairpin_auto_bind(struct rte_eth_dev *dev)
 		rxq_ctrl->hairpin_status = 1;
 		txq_ctrl->hairpin_status = 1;
 		mlx5_txq_release(dev, i);
-		mlx5_rxq_release(dev, txq_ctrl->hairpin_conf.peers[0].queue);
 	}
 	return 0;
 error:
 	mlx5_txq_release(dev, i);
-	mlx5_rxq_release(dev, txq_ctrl->hairpin_conf.peers[0].queue);
 	return -rte_errno;
 }
 
@@ -381,27 +382,26 @@ mlx5_hairpin_queue_peer_update(struct rte_eth_dev *dev, uint16_t peer_queue,
 		peer_info->manual_bind = txq_ctrl->hairpin_conf.manual_bind;
 		mlx5_txq_release(dev, peer_queue);
 	} else { /* Peer port used as ingress. */
+		struct mlx5_rxq_priv *rxq = mlx5_rxq_get(dev, peer_queue);
 		struct mlx5_rxq_ctrl *rxq_ctrl;
 
-		rxq_ctrl = mlx5_rxq_get(dev, peer_queue);
-		if (rxq_ctrl == NULL) {
+		if (rxq == NULL) {
 			rte_errno = EINVAL;
 			DRV_LOG(ERR, "Failed to get port %u Rx queue %d",
 				dev->data->port_id, peer_queue);
 			return -rte_errno;
 		}
+		rxq_ctrl = rxq->ctrl;
 		if (rxq_ctrl->type != MLX5_RXQ_TYPE_HAIRPIN) {
 			rte_errno = EINVAL;
 			DRV_LOG(ERR, "port %u queue %d is not a hairpin Rxq",
 				dev->data->port_id, peer_queue);
-			mlx5_rxq_release(dev, peer_queue);
 			return -rte_errno;
 		}
 		if (rxq_ctrl->obj == NULL || rxq_ctrl->obj->rq == NULL) {
 			rte_errno = ENOMEM;
 			DRV_LOG(ERR, "port %u no Rxq object found: %d",
 				dev->data->port_id, peer_queue);
-			mlx5_rxq_release(dev, peer_queue);
 			return -rte_errno;
 		}
 		peer_info->qp_id = rxq_ctrl->obj->rq->id;
@@ -409,7 +409,6 @@ mlx5_hairpin_queue_peer_update(struct rte_eth_dev *dev, uint16_t peer_queue,
 		peer_info->peer_q = rxq_ctrl->hairpin_conf.peers[0].queue;
 		peer_info->tx_explicit = rxq_ctrl->hairpin_conf.tx_explicit;
 		peer_info->manual_bind = rxq_ctrl->hairpin_conf.manual_bind;
-		mlx5_rxq_release(dev, peer_queue);
 	}
 	return 0;
 }
@@ -508,34 +507,32 @@ mlx5_hairpin_queue_peer_bind(struct rte_eth_dev *dev, uint16_t cur_queue,
 		txq_ctrl->hairpin_status = 1;
 		mlx5_txq_release(dev, cur_queue);
 	} else {
+		struct mlx5_rxq_priv *rxq = mlx5_rxq_get(dev, cur_queue);
 		struct mlx5_rxq_ctrl *rxq_ctrl;
 		struct mlx5_devx_modify_rq_attr rq_attr = { 0 };
 
-		rxq_ctrl = mlx5_rxq_get(dev, cur_queue);
-		if (rxq_ctrl == NULL) {
+		if (rxq == NULL) {
 			rte_errno = EINVAL;
 			DRV_LOG(ERR, "Failed to get port %u Rx queue %d",
 				dev->data->port_id, cur_queue);
 			return -rte_errno;
 		}
+		rxq_ctrl = rxq->ctrl;
 		if (rxq_ctrl->type != MLX5_RXQ_TYPE_HAIRPIN) {
 			rte_errno = EINVAL;
 			DRV_LOG(ERR, "port %u queue %d not a hairpin Rxq",
 				dev->data->port_id, cur_queue);
-			mlx5_rxq_release(dev, cur_queue);
 			return -rte_errno;
 		}
 		if (rxq_ctrl->obj == NULL || rxq_ctrl->obj->rq == NULL) {
 			rte_errno = ENOMEM;
 			DRV_LOG(ERR, "port %u no Rxq object found: %d",
 				dev->data->port_id, cur_queue);
-			mlx5_rxq_release(dev, cur_queue);
 			return -rte_errno;
 		}
 		if (rxq_ctrl->hairpin_status != 0) {
 			DRV_LOG(DEBUG, "port %u Rx queue %d is already bound",
 				dev->data->port_id, cur_queue);
-			mlx5_rxq_release(dev, cur_queue);
 			return 0;
 		}
 		if (peer_info->tx_explicit !=
@@ -543,7 +540,6 @@ mlx5_hairpin_queue_peer_bind(struct rte_eth_dev *dev, uint16_t cur_queue,
 			rte_errno = EINVAL;
 			DRV_LOG(ERR, "port %u Rx queue %d and peer Tx rule mode"
 				" mismatch", dev->data->port_id, cur_queue);
-			mlx5_rxq_release(dev, cur_queue);
 			return -rte_errno;
 		}
 		if (peer_info->manual_bind !=
@@ -551,7 +547,6 @@ mlx5_hairpin_queue_peer_bind(struct rte_eth_dev *dev, uint16_t cur_queue,
 			rte_errno = EINVAL;
 			DRV_LOG(ERR, "port %u Rx queue %d and peer binding mode"
 				" mismatch", dev->data->port_id, cur_queue);
-			mlx5_rxq_release(dev, cur_queue);
 			return -rte_errno;
 		}
 		rq_attr.state = MLX5_SQC_STATE_RDY;
@@ -561,7 +556,6 @@ mlx5_hairpin_queue_peer_bind(struct rte_eth_dev *dev, uint16_t cur_queue,
 		ret = mlx5_devx_cmd_modify_rq(rxq_ctrl->obj->rq, &rq_attr);
 		if (ret == 0)
 			rxq_ctrl->hairpin_status = 1;
-		mlx5_rxq_release(dev, cur_queue);
 	}
 	return ret;
 }
@@ -626,34 +620,32 @@ mlx5_hairpin_queue_peer_unbind(struct rte_eth_dev *dev, uint16_t cur_queue,
 		txq_ctrl->hairpin_status = 0;
 		mlx5_txq_release(dev, cur_queue);
 	} else {
+		struct mlx5_rxq_priv *rxq = mlx5_rxq_get(dev, cur_queue);
 		struct mlx5_rxq_ctrl *rxq_ctrl;
 		struct mlx5_devx_modify_rq_attr rq_attr = { 0 };
 
-		rxq_ctrl = mlx5_rxq_get(dev, cur_queue);
-		if (rxq_ctrl == NULL) {
+		if (rxq == NULL) {
 			rte_errno = EINVAL;
 			DRV_LOG(ERR, "Failed to get port %u Rx queue %d",
 				dev->data->port_id, cur_queue);
 			return -rte_errno;
 		}
+		rxq_ctrl = rxq->ctrl;
 		if (rxq_ctrl->type != MLX5_RXQ_TYPE_HAIRPIN) {
 			rte_errno = EINVAL;
 			DRV_LOG(ERR, "port %u queue %d not a hairpin Rxq",
 				dev->data->port_id, cur_queue);
-			mlx5_rxq_release(dev, cur_queue);
 			return -rte_errno;
 		}
 		if (rxq_ctrl->hairpin_status == 0) {
 			DRV_LOG(DEBUG, "port %u Rx queue %d is already unbound",
 				dev->data->port_id, cur_queue);
-			mlx5_rxq_release(dev, cur_queue);
 			return 0;
 		}
 		if (rxq_ctrl->obj == NULL || rxq_ctrl->obj->rq == NULL) {
 			rte_errno = ENOMEM;
 			DRV_LOG(ERR, "port %u no Rxq object found: %d",
 				dev->data->port_id, cur_queue);
-			mlx5_rxq_release(dev, cur_queue);
 			return -rte_errno;
 		}
 		rq_attr.state = MLX5_SQC_STATE_RST;
@@ -661,7 +653,6 @@ mlx5_hairpin_queue_peer_unbind(struct rte_eth_dev *dev, uint16_t cur_queue,
 		ret = mlx5_devx_cmd_modify_rq(rxq_ctrl->obj->rq, &rq_attr);
 		if (ret == 0)
 			rxq_ctrl->hairpin_status = 0;
-		mlx5_rxq_release(dev, cur_queue);
 	}
 	return ret;
 }
@@ -963,7 +954,6 @@ mlx5_hairpin_get_peer_ports(struct rte_eth_dev *dev, uint16_t *peer_ports,
 {
 	struct mlx5_priv *priv = dev->data->dev_private;
 	struct mlx5_txq_ctrl *txq_ctrl;
-	struct mlx5_rxq_ctrl *rxq_ctrl;
 	uint32_t i;
 	uint16_t pp;
 	uint32_t bits[(RTE_MAX_ETHPORTS + 31) / 32] = {0};
@@ -992,24 +982,23 @@ mlx5_hairpin_get_peer_ports(struct rte_eth_dev *dev, uint16_t *peer_ports,
 		}
 	} else {
 		for (i = 0; i < priv->rxqs_n; i++) {
-			rxq_ctrl = mlx5_rxq_get(dev, i);
-			if (!rxq_ctrl)
+			struct mlx5_rxq_priv *rxq = mlx5_rxq_get(dev, i);
+			struct mlx5_rxq_ctrl *rxq_ctrl;
+
+			if (rxq == NULL)
 				continue;
-			if (rxq_ctrl->type != MLX5_RXQ_TYPE_HAIRPIN) {
-				mlx5_rxq_release(dev, i);
+			rxq_ctrl = rxq->ctrl;
+			if (rxq_ctrl->type != MLX5_RXQ_TYPE_HAIRPIN)
 				continue;
-			}
 			pp = rxq_ctrl->hairpin_conf.peers[0].port;
 			if (pp >= RTE_MAX_ETHPORTS) {
 				rte_errno = ERANGE;
-				mlx5_rxq_release(dev, i);
 				DRV_LOG(ERR, "port %hu queue %u peer port "
 					"out of range %hu",
 					priv->dev_data->port_id, i, pp);
 				return -rte_errno;
 			}
 			bits[pp / 32] |= 1 << (pp % 32);
-			mlx5_rxq_release(dev, i);
 		}
 	}
 	for (i = 0; i < RTE_MAX_ETHPORTS; i++) {
-- 
2.33.0
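
For readers following the refactoring, below is a minimal usage sketch (not
part of the patch) of the reference-counting helpers introduced above. The
wrapper function example_hold_rxq() is hypothetical and only illustrates the
intended ref/deref pairing on the per-queue private data:

/* Hypothetical caller, for illustration only: pair mlx5_rxq_ref() with
 * mlx5_rxq_deref() around any use of the per-queue private data.
 */
static int
example_hold_rxq(struct rte_eth_dev *dev, uint16_t idx)
{
	/* Take a reference; returns NULL if the queue does not exist. */
	struct mlx5_rxq_priv *rxq = mlx5_rxq_ref(dev, idx);

	if (rxq == NULL)
		return -EINVAL;
	/* rxq->ctrl is the shared control structure and rxq->ctrl->rxq the
	 * data path structure; they are also reachable through
	 * mlx5_rxq_ctrl_get() and mlx5_rxq_data_get().
	 */
	mlx5_rxq_deref(dev, idx); /* Drop the reference taken above. */
	return 0;
}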