From: Xueming Li <xuemingl@nvidia.com>
To: dev@dpdk.org
CC: Lior Margalit, Matan Azrad, Viacheslav Ovsiienko
Subject: [dpdk-dev] [PATCH v3 08/14] net/mlx5: move Rx queue reference count
Date: Wed, 3 Nov 2021 15:58:32 +0800
Message-ID: <20211103075838.1486056-9-xuemingl@nvidia.com>
In-Reply-To: <20211103075838.1486056-1-xuemingl@nvidia.com>
References: <20210727034204.20649-1-xuemingl@nvidia.com>
 <20211103075838.1486056-1-xuemingl@nvidia.com>
X-Mailer: git-send-email 2.33.0
List-Id: DPDK patches and discussions
Sender: "dev" Rx queue reference count is counter of RQ, used to count reference to RQ object. To prepare for shared Rx queue, this patch moves it from rxq_ctrl to Rx queue private data. Signed-off-by: Xueming Li --- drivers/net/mlx5/mlx5_rx.h | 8 +- drivers/net/mlx5/mlx5_rxq.c | 169 +++++++++++++++++++++----------- drivers/net/mlx5/mlx5_trigger.c | 57 +++++------ 3 files changed, 142 insertions(+), 92 deletions(-) diff --git a/drivers/net/mlx5/mlx5_rx.h b/drivers/net/mlx5/mlx5_rx.h index fa24f5cdf3a..eccfbf1108d 100644 --- a/drivers/net/mlx5/mlx5_rx.h +++ b/drivers/net/mlx5/mlx5_rx.h @@ -149,7 +149,6 @@ enum mlx5_rxq_type { struct mlx5_rxq_ctrl { struct mlx5_rxq_data rxq; /* Data path structure. */ LIST_ENTRY(mlx5_rxq_ctrl) next; /* Pointer to the next element. */ - uint32_t refcnt; /* Reference counter. */ LIST_HEAD(priv, mlx5_rxq_priv) owners; /* Owner rxq list. */ struct mlx5_rxq_obj *obj; /* Verbs/DevX elements. */ struct mlx5_dev_ctx_shared *sh; /* Shared context. */ @@ -170,6 +169,7 @@ struct mlx5_rxq_ctrl { /* RX queue private data. */ struct mlx5_rxq_priv { uint16_t idx; /* Queue index. */ + uint32_t refcnt; /* Reference counter. */ struct mlx5_rxq_ctrl *ctrl; /* Shared Rx Queue. */ LIST_ENTRY(mlx5_rxq_priv) owner_entry; /* Entry in shared rxq_ctrl. */ struct mlx5_priv *priv; /* Back pointer to private data. */ @@ -207,7 +207,11 @@ struct mlx5_rxq_ctrl *mlx5_rxq_new(struct rte_eth_dev *dev, struct mlx5_rxq_ctrl *mlx5_rxq_hairpin_new (struct rte_eth_dev *dev, struct mlx5_rxq_priv *rxq, uint16_t desc, const struct rte_eth_hairpin_conf *hairpin_conf); -struct mlx5_rxq_ctrl *mlx5_rxq_get(struct rte_eth_dev *dev, uint16_t idx); +struct mlx5_rxq_priv *mlx5_rxq_ref(struct rte_eth_dev *dev, uint16_t idx); +uint32_t mlx5_rxq_deref(struct rte_eth_dev *dev, uint16_t idx); +struct mlx5_rxq_priv *mlx5_rxq_get(struct rte_eth_dev *dev, uint16_t idx); +struct mlx5_rxq_ctrl *mlx5_rxq_ctrl_get(struct rte_eth_dev *dev, uint16_t idx); +struct mlx5_rxq_data *mlx5_rxq_data_get(struct rte_eth_dev *dev, uint16_t idx); int mlx5_rxq_release(struct rte_eth_dev *dev, uint16_t idx); int mlx5_rxq_verify(struct rte_eth_dev *dev); int rxq_alloc_elts(struct mlx5_rxq_ctrl *rxq_ctrl); diff --git a/drivers/net/mlx5/mlx5_rxq.c b/drivers/net/mlx5/mlx5_rxq.c index 00df245a5c6..8071ddbd61c 100644 --- a/drivers/net/mlx5/mlx5_rxq.c +++ b/drivers/net/mlx5/mlx5_rxq.c @@ -386,15 +386,13 @@ mlx5_get_rx_port_offloads(void) static int mlx5_rxq_releasable(struct rte_eth_dev *dev, uint16_t idx) { - struct mlx5_priv *priv = dev->data->dev_private; - struct mlx5_rxq_ctrl *rxq_ctrl; + struct mlx5_rxq_priv *rxq = mlx5_rxq_get(dev, idx); - if (!(*priv->rxqs)[idx]) { + if (rxq == NULL) { rte_errno = EINVAL; return -rte_errno; } - rxq_ctrl = container_of((*priv->rxqs)[idx], struct mlx5_rxq_ctrl, rxq); - return (__atomic_load_n(&rxq_ctrl->refcnt, __ATOMIC_RELAXED) == 1); + return (__atomic_load_n(&rxq->refcnt, __ATOMIC_RELAXED) == 1); } /* Fetches and drops all SW-owned and error CQEs to synchronize CQ. */ @@ -874,8 +872,8 @@ mlx5_rx_intr_vec_enable(struct rte_eth_dev *dev) for (i = 0; i != n; ++i) { /* This rxq obj must not be released in this function. */ - struct mlx5_rxq_ctrl *rxq_ctrl = mlx5_rxq_get(dev, i); - struct mlx5_rxq_obj *rxq_obj = rxq_ctrl ? rxq_ctrl->obj : NULL; + struct mlx5_rxq_priv *rxq = mlx5_rxq_get(dev, i); + struct mlx5_rxq_obj *rxq_obj = rxq ? rxq->ctrl->obj : NULL; int rc; /* Skip queues that cannot request interrupts. 
*/ @@ -885,11 +883,9 @@ mlx5_rx_intr_vec_enable(struct rte_eth_dev *dev) if (rte_intr_vec_list_index_set(intr_handle, i, RTE_INTR_VEC_RXTX_OFFSET + RTE_MAX_RXTX_INTR_VEC_ID)) return -rte_errno; - /* Decrease the rxq_ctrl's refcnt */ - if (rxq_ctrl) - mlx5_rxq_release(dev, i); continue; } + mlx5_rxq_ref(dev, i); if (count >= RTE_MAX_RXTX_INTR_VEC_ID) { DRV_LOG(ERR, "port %u too many Rx queues for interrupt" @@ -954,7 +950,7 @@ mlx5_rx_intr_vec_disable(struct rte_eth_dev *dev) * Need to access directly the queue to release the reference * kept in mlx5_rx_intr_vec_enable(). */ - mlx5_rxq_release(dev, i); + mlx5_rxq_deref(dev, i); } free: rte_intr_free_epoll_fd(intr_handle); @@ -1003,19 +999,14 @@ mlx5_arm_cq(struct mlx5_rxq_data *rxq, int sq_n_rxq) int mlx5_rx_intr_enable(struct rte_eth_dev *dev, uint16_t rx_queue_id) { - struct mlx5_rxq_ctrl *rxq_ctrl; - - rxq_ctrl = mlx5_rxq_get(dev, rx_queue_id); - if (!rxq_ctrl) + struct mlx5_rxq_priv *rxq = mlx5_rxq_get(dev, rx_queue_id); + if (!rxq) goto error; - if (rxq_ctrl->irq) { - if (!rxq_ctrl->obj) { - mlx5_rxq_release(dev, rx_queue_id); + if (rxq->ctrl->irq) { + if (!rxq->ctrl->obj) goto error; - } - mlx5_arm_cq(&rxq_ctrl->rxq, rxq_ctrl->rxq.cq_arm_sn); + mlx5_arm_cq(&rxq->ctrl->rxq, rxq->ctrl->rxq.cq_arm_sn); } - mlx5_rxq_release(dev, rx_queue_id); return 0; error: rte_errno = EINVAL; @@ -1037,23 +1028,21 @@ int mlx5_rx_intr_disable(struct rte_eth_dev *dev, uint16_t rx_queue_id) { struct mlx5_priv *priv = dev->data->dev_private; - struct mlx5_rxq_ctrl *rxq_ctrl; + struct mlx5_rxq_priv *rxq = mlx5_rxq_get(dev, rx_queue_id); int ret = 0; - rxq_ctrl = mlx5_rxq_get(dev, rx_queue_id); - if (!rxq_ctrl) { + if (!rxq) { rte_errno = EINVAL; return -rte_errno; } - if (!rxq_ctrl->obj) + if (!rxq->ctrl->obj) goto error; - if (rxq_ctrl->irq) { - ret = priv->obj_ops.rxq_event_get(rxq_ctrl->obj); + if (rxq->ctrl->irq) { + ret = priv->obj_ops.rxq_event_get(rxq->ctrl->obj); if (ret < 0) goto error; - rxq_ctrl->rxq.cq_arm_sn++; + rxq->ctrl->rxq.cq_arm_sn++; } - mlx5_rxq_release(dev, rx_queue_id); return 0; error: /** @@ -1064,12 +1053,9 @@ mlx5_rx_intr_disable(struct rte_eth_dev *dev, uint16_t rx_queue_id) rte_errno = errno; else rte_errno = EINVAL; - ret = rte_errno; /* Save rte_errno before cleanup. */ - mlx5_rxq_release(dev, rx_queue_id); - if (ret != EAGAIN) + if (rte_errno != EAGAIN) DRV_LOG(WARNING, "port %u unable to disable interrupt on Rx queue %d", dev->data->port_id, rx_queue_id); - rte_errno = ret; /* Restore rte_errno. */ return -rte_errno; } @@ -1657,7 +1643,7 @@ mlx5_rxq_new(struct rte_eth_dev *dev, struct mlx5_rxq_priv *rxq, tmpl->rxq.uar_lock_cq = &priv->sh->uar_lock_cq; #endif tmpl->rxq.idx = idx; - __atomic_fetch_add(&tmpl->refcnt, 1, __ATOMIC_RELAXED); + mlx5_rxq_ref(dev, idx); LIST_INSERT_HEAD(&priv->rxqsctrl, tmpl, next); return tmpl; error: @@ -1711,11 +1697,53 @@ mlx5_rxq_hairpin_new(struct rte_eth_dev *dev, struct mlx5_rxq_priv *rxq, tmpl->rxq.mr_ctrl.cache_bh = (struct mlx5_mr_btree) { 0 }; tmpl->hairpin_conf = *hairpin_conf; tmpl->rxq.idx = idx; - __atomic_fetch_add(&tmpl->refcnt, 1, __ATOMIC_RELAXED); + mlx5_rxq_ref(dev, idx); LIST_INSERT_HEAD(&priv->rxqsctrl, tmpl, next); return tmpl; } +/** + * Increase Rx queue reference count. + * + * @param dev + * Pointer to Ethernet device. + * @param idx + * RX queue index. + * + * @return + * A pointer to the queue if it exists, NULL otherwise. 
+ */ +struct mlx5_rxq_priv * +mlx5_rxq_ref(struct rte_eth_dev *dev, uint16_t idx) +{ + struct mlx5_rxq_priv *rxq = mlx5_rxq_get(dev, idx); + + if (rxq != NULL) + __atomic_fetch_add(&rxq->refcnt, 1, __ATOMIC_RELAXED); + return rxq; +} + +/** + * Dereference a Rx queue. + * + * @param dev + * Pointer to Ethernet device. + * @param idx + * RX queue index. + * + * @return + * Updated reference count. + */ +uint32_t +mlx5_rxq_deref(struct rte_eth_dev *dev, uint16_t idx) +{ + struct mlx5_rxq_priv *rxq = mlx5_rxq_get(dev, idx); + + if (rxq == NULL) + return 0; + return __atomic_sub_fetch(&rxq->refcnt, 1, __ATOMIC_RELAXED); +} + /** * Get a Rx queue. * @@ -1727,18 +1755,52 @@ mlx5_rxq_hairpin_new(struct rte_eth_dev *dev, struct mlx5_rxq_priv *rxq, * @return * A pointer to the queue if it exists, NULL otherwise. */ -struct mlx5_rxq_ctrl * +struct mlx5_rxq_priv * mlx5_rxq_get(struct rte_eth_dev *dev, uint16_t idx) { struct mlx5_priv *priv = dev->data->dev_private; - struct mlx5_rxq_data *rxq_data = (*priv->rxqs)[idx]; - struct mlx5_rxq_ctrl *rxq_ctrl = NULL; - if (rxq_data) { - rxq_ctrl = container_of(rxq_data, struct mlx5_rxq_ctrl, rxq); - __atomic_fetch_add(&rxq_ctrl->refcnt, 1, __ATOMIC_RELAXED); - } - return rxq_ctrl; + if (priv->rxq_privs == NULL) + return NULL; + return (*priv->rxq_privs)[idx]; +} + +/** + * Get Rx queue shareable control. + * + * @param dev + * Pointer to Ethernet device. + * @param idx + * RX queue index. + * + * @return + * A pointer to the queue control if it exists, NULL otherwise. + */ +struct mlx5_rxq_ctrl * +mlx5_rxq_ctrl_get(struct rte_eth_dev *dev, uint16_t idx) +{ + struct mlx5_rxq_priv *rxq = mlx5_rxq_get(dev, idx); + + return rxq == NULL ? NULL : rxq->ctrl; +} + +/** + * Get Rx queue shareable data. + * + * @param dev + * Pointer to Ethernet device. + * @param idx + * RX queue index. + * + * @return + * A pointer to the queue data if it exists, NULL otherwise. + */ +struct mlx5_rxq_data * +mlx5_rxq_data_get(struct rte_eth_dev *dev, uint16_t idx) +{ + struct mlx5_rxq_priv *rxq = mlx5_rxq_get(dev, idx); + + return rxq == NULL ? 
NULL : &rxq->ctrl->rxq; } /** @@ -1756,13 +1818,12 @@ int mlx5_rxq_release(struct rte_eth_dev *dev, uint16_t idx) { struct mlx5_priv *priv = dev->data->dev_private; - struct mlx5_rxq_ctrl *rxq_ctrl; - struct mlx5_rxq_priv *rxq = (*priv->rxq_privs)[idx]; + struct mlx5_rxq_priv *rxq = mlx5_rxq_get(dev, idx); + struct mlx5_rxq_ctrl *rxq_ctrl = rxq->ctrl; if (priv->rxqs == NULL || (*priv->rxqs)[idx] == NULL) return 0; - rxq_ctrl = container_of((*priv->rxqs)[idx], struct mlx5_rxq_ctrl, rxq); - if (__atomic_sub_fetch(&rxq_ctrl->refcnt, 1, __ATOMIC_RELAXED) > 1) + if (mlx5_rxq_deref(dev, idx) > 1) return 1; if (rxq_ctrl->obj) { priv->obj_ops.rxq_obj_release(rxq_ctrl->obj); @@ -1774,7 +1835,7 @@ mlx5_rxq_release(struct rte_eth_dev *dev, uint16_t idx) rxq_free_elts(rxq_ctrl); dev->data->rx_queue_state[idx] = RTE_ETH_QUEUE_STATE_STOPPED; } - if (!__atomic_load_n(&rxq_ctrl->refcnt, __ATOMIC_RELAXED)) { + if (!__atomic_load_n(&rxq->refcnt, __ATOMIC_RELAXED)) { if (rxq_ctrl->type == MLX5_RXQ_TYPE_STANDARD) mlx5_mr_btree_free(&rxq_ctrl->rxq.mr_ctrl.cache_bh); LIST_REMOVE(rxq, owner_entry); @@ -1952,7 +2013,7 @@ mlx5_ind_table_obj_release(struct rte_eth_dev *dev, return 1; priv->obj_ops.ind_table_destroy(ind_tbl); for (i = 0; i != ind_tbl->queues_n; ++i) - claim_nonzero(mlx5_rxq_release(dev, ind_tbl->queues[i])); + claim_nonzero(mlx5_rxq_deref(dev, ind_tbl->queues[i])); mlx5_free(ind_tbl); return 0; } @@ -2009,7 +2070,7 @@ mlx5_ind_table_obj_setup(struct rte_eth_dev *dev, log2above(priv->config.ind_table_max_size); for (i = 0; i != queues_n; ++i) { - if (!mlx5_rxq_get(dev, queues[i])) { + if (mlx5_rxq_ref(dev, queues[i]) == NULL) { ret = -rte_errno; goto error; } @@ -2022,7 +2083,7 @@ mlx5_ind_table_obj_setup(struct rte_eth_dev *dev, error: err = rte_errno; for (j = 0; j < i; j++) - mlx5_rxq_release(dev, ind_tbl->queues[j]); + mlx5_rxq_deref(dev, ind_tbl->queues[j]); rte_errno = err; DRV_LOG(DEBUG, "Port %u cannot setup indirection table.", dev->data->port_id); @@ -2118,7 +2179,7 @@ mlx5_ind_table_obj_modify(struct rte_eth_dev *dev, bool standalone) { struct mlx5_priv *priv = dev->data->dev_private; - unsigned int i, j; + unsigned int i; int ret = 0, err; const unsigned int n = rte_is_power_of_2(queues_n) ? log2above(queues_n) : @@ -2138,15 +2199,11 @@ mlx5_ind_table_obj_modify(struct rte_eth_dev *dev, ret = priv->obj_ops.ind_table_modify(dev, n, queues, queues_n, ind_tbl); if (ret) goto error; - for (j = 0; j < ind_tbl->queues_n; j++) - mlx5_rxq_release(dev, ind_tbl->queues[j]); ind_tbl->queues_n = queues_n; ind_tbl->queues = queues; return 0; error: err = rte_errno; - for (j = 0; j < i; j++) - mlx5_rxq_release(dev, queues[j]); rte_errno = err; DRV_LOG(DEBUG, "Port %u cannot setup indirection table.", dev->data->port_id); diff --git a/drivers/net/mlx5/mlx5_trigger.c b/drivers/net/mlx5/mlx5_trigger.c index ebeeae279e2..e5d74d275f8 100644 --- a/drivers/net/mlx5/mlx5_trigger.c +++ b/drivers/net/mlx5/mlx5_trigger.c @@ -201,10 +201,12 @@ mlx5_rxq_start(struct rte_eth_dev *dev) DRV_LOG(DEBUG, "Port %u device_attr.max_sge is %d.", dev->data->port_id, priv->sh->device_attr.max_sge); for (i = 0; i != priv->rxqs_n; ++i) { - struct mlx5_rxq_ctrl *rxq_ctrl = mlx5_rxq_get(dev, i); + struct mlx5_rxq_priv *rxq = mlx5_rxq_ref(dev, i); + struct mlx5_rxq_ctrl *rxq_ctrl; - if (!rxq_ctrl) + if (rxq == NULL) continue; + rxq_ctrl = rxq->ctrl; if (rxq_ctrl->type == MLX5_RXQ_TYPE_STANDARD) { /* * Pre-register the mempools. 
Regardless of whether @@ -266,6 +268,7 @@ mlx5_hairpin_auto_bind(struct rte_eth_dev *dev) struct mlx5_devx_modify_sq_attr sq_attr = { 0 }; struct mlx5_devx_modify_rq_attr rq_attr = { 0 }; struct mlx5_txq_ctrl *txq_ctrl; + struct mlx5_rxq_priv *rxq; struct mlx5_rxq_ctrl *rxq_ctrl; struct mlx5_devx_obj *sq; struct mlx5_devx_obj *rq; @@ -310,9 +313,8 @@ mlx5_hairpin_auto_bind(struct rte_eth_dev *dev) return -rte_errno; } sq = txq_ctrl->obj->sq; - rxq_ctrl = mlx5_rxq_get(dev, - txq_ctrl->hairpin_conf.peers[0].queue); - if (!rxq_ctrl) { + rxq = mlx5_rxq_get(dev, txq_ctrl->hairpin_conf.peers[0].queue); + if (rxq == NULL) { mlx5_txq_release(dev, i); rte_errno = EINVAL; DRV_LOG(ERR, "port %u no rxq object found: %d", @@ -320,6 +322,7 @@ mlx5_hairpin_auto_bind(struct rte_eth_dev *dev) txq_ctrl->hairpin_conf.peers[0].queue); return -rte_errno; } + rxq_ctrl = rxq->ctrl; if (rxq_ctrl->type != MLX5_RXQ_TYPE_HAIRPIN || rxq_ctrl->hairpin_conf.peers[0].queue != i) { rte_errno = ENOMEM; @@ -354,12 +357,10 @@ mlx5_hairpin_auto_bind(struct rte_eth_dev *dev) rxq_ctrl->hairpin_status = 1; txq_ctrl->hairpin_status = 1; mlx5_txq_release(dev, i); - mlx5_rxq_release(dev, txq_ctrl->hairpin_conf.peers[0].queue); } return 0; error: mlx5_txq_release(dev, i); - mlx5_rxq_release(dev, txq_ctrl->hairpin_conf.peers[0].queue); return -rte_errno; } @@ -432,27 +433,26 @@ mlx5_hairpin_queue_peer_update(struct rte_eth_dev *dev, uint16_t peer_queue, peer_info->manual_bind = txq_ctrl->hairpin_conf.manual_bind; mlx5_txq_release(dev, peer_queue); } else { /* Peer port used as ingress. */ + struct mlx5_rxq_priv *rxq = mlx5_rxq_get(dev, peer_queue); struct mlx5_rxq_ctrl *rxq_ctrl; - rxq_ctrl = mlx5_rxq_get(dev, peer_queue); - if (rxq_ctrl == NULL) { + if (rxq == NULL) { rte_errno = EINVAL; DRV_LOG(ERR, "Failed to get port %u Rx queue %d", dev->data->port_id, peer_queue); return -rte_errno; } + rxq_ctrl = rxq->ctrl; if (rxq_ctrl->type != MLX5_RXQ_TYPE_HAIRPIN) { rte_errno = EINVAL; DRV_LOG(ERR, "port %u queue %d is not a hairpin Rxq", dev->data->port_id, peer_queue); - mlx5_rxq_release(dev, peer_queue); return -rte_errno; } if (rxq_ctrl->obj == NULL || rxq_ctrl->obj->rq == NULL) { rte_errno = ENOMEM; DRV_LOG(ERR, "port %u no Rxq object found: %d", dev->data->port_id, peer_queue); - mlx5_rxq_release(dev, peer_queue); return -rte_errno; } peer_info->qp_id = rxq_ctrl->obj->rq->id; @@ -460,7 +460,6 @@ mlx5_hairpin_queue_peer_update(struct rte_eth_dev *dev, uint16_t peer_queue, peer_info->peer_q = rxq_ctrl->hairpin_conf.peers[0].queue; peer_info->tx_explicit = rxq_ctrl->hairpin_conf.tx_explicit; peer_info->manual_bind = rxq_ctrl->hairpin_conf.manual_bind; - mlx5_rxq_release(dev, peer_queue); } return 0; } @@ -559,34 +558,32 @@ mlx5_hairpin_queue_peer_bind(struct rte_eth_dev *dev, uint16_t cur_queue, txq_ctrl->hairpin_status = 1; mlx5_txq_release(dev, cur_queue); } else { + struct mlx5_rxq_priv *rxq = mlx5_rxq_get(dev, cur_queue); struct mlx5_rxq_ctrl *rxq_ctrl; struct mlx5_devx_modify_rq_attr rq_attr = { 0 }; - rxq_ctrl = mlx5_rxq_get(dev, cur_queue); - if (rxq_ctrl == NULL) { + if (rxq == NULL) { rte_errno = EINVAL; DRV_LOG(ERR, "Failed to get port %u Rx queue %d", dev->data->port_id, cur_queue); return -rte_errno; } + rxq_ctrl = rxq->ctrl; if (rxq_ctrl->type != MLX5_RXQ_TYPE_HAIRPIN) { rte_errno = EINVAL; DRV_LOG(ERR, "port %u queue %d not a hairpin Rxq", dev->data->port_id, cur_queue); - mlx5_rxq_release(dev, cur_queue); return -rte_errno; } if (rxq_ctrl->obj == NULL || rxq_ctrl->obj->rq == NULL) { rte_errno = ENOMEM; DRV_LOG(ERR, "port 
%u no Rxq object found: %d", dev->data->port_id, cur_queue); - mlx5_rxq_release(dev, cur_queue); return -rte_errno; } if (rxq_ctrl->hairpin_status != 0) { DRV_LOG(DEBUG, "port %u Rx queue %d is already bound", dev->data->port_id, cur_queue); - mlx5_rxq_release(dev, cur_queue); return 0; } if (peer_info->tx_explicit != @@ -594,7 +591,6 @@ mlx5_hairpin_queue_peer_bind(struct rte_eth_dev *dev, uint16_t cur_queue, rte_errno = EINVAL; DRV_LOG(ERR, "port %u Rx queue %d and peer Tx rule mode" " mismatch", dev->data->port_id, cur_queue); - mlx5_rxq_release(dev, cur_queue); return -rte_errno; } if (peer_info->manual_bind != @@ -602,7 +598,6 @@ mlx5_hairpin_queue_peer_bind(struct rte_eth_dev *dev, uint16_t cur_queue, rte_errno = EINVAL; DRV_LOG(ERR, "port %u Rx queue %d and peer binding mode" " mismatch", dev->data->port_id, cur_queue); - mlx5_rxq_release(dev, cur_queue); return -rte_errno; } rq_attr.state = MLX5_SQC_STATE_RDY; @@ -612,7 +607,6 @@ mlx5_hairpin_queue_peer_bind(struct rte_eth_dev *dev, uint16_t cur_queue, ret = mlx5_devx_cmd_modify_rq(rxq_ctrl->obj->rq, &rq_attr); if (ret == 0) rxq_ctrl->hairpin_status = 1; - mlx5_rxq_release(dev, cur_queue); } return ret; } @@ -677,34 +671,32 @@ mlx5_hairpin_queue_peer_unbind(struct rte_eth_dev *dev, uint16_t cur_queue, txq_ctrl->hairpin_status = 0; mlx5_txq_release(dev, cur_queue); } else { + struct mlx5_rxq_priv *rxq = mlx5_rxq_get(dev, cur_queue); struct mlx5_rxq_ctrl *rxq_ctrl; struct mlx5_devx_modify_rq_attr rq_attr = { 0 }; - rxq_ctrl = mlx5_rxq_get(dev, cur_queue); - if (rxq_ctrl == NULL) { + if (rxq == NULL) { rte_errno = EINVAL; DRV_LOG(ERR, "Failed to get port %u Rx queue %d", dev->data->port_id, cur_queue); return -rte_errno; } + rxq_ctrl = rxq->ctrl; if (rxq_ctrl->type != MLX5_RXQ_TYPE_HAIRPIN) { rte_errno = EINVAL; DRV_LOG(ERR, "port %u queue %d not a hairpin Rxq", dev->data->port_id, cur_queue); - mlx5_rxq_release(dev, cur_queue); return -rte_errno; } if (rxq_ctrl->hairpin_status == 0) { DRV_LOG(DEBUG, "port %u Rx queue %d is already unbound", dev->data->port_id, cur_queue); - mlx5_rxq_release(dev, cur_queue); return 0; } if (rxq_ctrl->obj == NULL || rxq_ctrl->obj->rq == NULL) { rte_errno = ENOMEM; DRV_LOG(ERR, "port %u no Rxq object found: %d", dev->data->port_id, cur_queue); - mlx5_rxq_release(dev, cur_queue); return -rte_errno; } rq_attr.state = MLX5_SQC_STATE_RST; @@ -712,7 +704,6 @@ mlx5_hairpin_queue_peer_unbind(struct rte_eth_dev *dev, uint16_t cur_queue, ret = mlx5_devx_cmd_modify_rq(rxq_ctrl->obj->rq, &rq_attr); if (ret == 0) rxq_ctrl->hairpin_status = 0; - mlx5_rxq_release(dev, cur_queue); } return ret; } @@ -1014,7 +1005,6 @@ mlx5_hairpin_get_peer_ports(struct rte_eth_dev *dev, uint16_t *peer_ports, { struct mlx5_priv *priv = dev->data->dev_private; struct mlx5_txq_ctrl *txq_ctrl; - struct mlx5_rxq_ctrl *rxq_ctrl; uint32_t i; uint16_t pp; uint32_t bits[(RTE_MAX_ETHPORTS + 31) / 32] = {0}; @@ -1043,24 +1033,23 @@ mlx5_hairpin_get_peer_ports(struct rte_eth_dev *dev, uint16_t *peer_ports, } } else { for (i = 0; i < priv->rxqs_n; i++) { - rxq_ctrl = mlx5_rxq_get(dev, i); - if (!rxq_ctrl) + struct mlx5_rxq_priv *rxq = mlx5_rxq_get(dev, i); + struct mlx5_rxq_ctrl *rxq_ctrl; + + if (rxq == NULL) continue; - if (rxq_ctrl->type != MLX5_RXQ_TYPE_HAIRPIN) { - mlx5_rxq_release(dev, i); + rxq_ctrl = rxq->ctrl; + if (rxq_ctrl->type != MLX5_RXQ_TYPE_HAIRPIN) continue; - } pp = rxq_ctrl->hairpin_conf.peers[0].port; if (pp >= RTE_MAX_ETHPORTS) { rte_errno = ERANGE; - mlx5_rxq_release(dev, i); DRV_LOG(ERR, "port %hu queue %u peer port " "out 
of range %hu", priv->dev_data->port_id, i, pp); return -rte_errno; } bits[pp / 32] |= 1 << (pp % 32); - mlx5_rxq_release(dev, i); } } for (i = 0; i < RTE_MAX_ETHPORTS; i++) { -- 2.33.0