From mboxrd@z Thu Jan 1 00:00:00 1970
From: Xueming Li <xuemingl@nvidia.com>
To: dev@dpdk.org
CC: Lior Margalit, "Slava Ovsiienko", Matan Azrad
Date: Thu, 4 Nov 2021 20:33:16 +0800
Message-ID: <20211104123320.1638915-11-xuemingl@nvidia.com>
X-Mailer: git-send-email 2.33.0
In-Reply-To: <20211104123320.1638915-1-xuemingl@nvidia.com>
References: <20210727034204.20649-1-xuemingl@nvidia.com> <20211104123320.1638915-1-xuemingl@nvidia.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain
Subject: [dpdk-dev] [PATCH v4 10/14] net/mlx5: remove port info from shareable Rx queue
List-Id: DPDK patches and discussions
Sender: "dev" To prepare for shared Rx queue, removes port info from shareable Rx queue control. Signed-off-by: Xueming Li Acked-by: Slava Ovsiienko --- drivers/net/mlx5/mlx5_devx.c | 2 +- drivers/net/mlx5/mlx5_rx.c | 15 +++-------- drivers/net/mlx5/mlx5_rx.h | 7 ++++-- drivers/net/mlx5/mlx5_rxq.c | 43 ++++++++++++++++++++++---------- drivers/net/mlx5/mlx5_rxtx_vec.c | 2 +- drivers/net/mlx5/mlx5_trigger.c | 13 +++++----- 6 files changed, 47 insertions(+), 35 deletions(-) diff --git a/drivers/net/mlx5/mlx5_devx.c b/drivers/net/mlx5/mlx5_devx.c index 443252df05d..8b3651f5034 100644 --- a/drivers/net/mlx5/mlx5_devx.c +++ b/drivers/net/mlx5/mlx5_devx.c @@ -918,7 +918,7 @@ mlx5_rxq_devx_obj_drop_create(struct rte_eth_dev *dev) } rxq->rxq_ctrl = rxq_ctrl; rxq_ctrl->type = MLX5_RXQ_TYPE_STANDARD; - rxq_ctrl->priv = priv; + rxq_ctrl->sh = priv->sh; rxq_ctrl->obj = rxq; rxq_data = &rxq_ctrl->rxq; /* Create CQ using DevX API. */ diff --git a/drivers/net/mlx5/mlx5_rx.c b/drivers/net/mlx5/mlx5_rx.c index 258a6453144..d41905a2a04 100644 --- a/drivers/net/mlx5/mlx5_rx.c +++ b/drivers/net/mlx5/mlx5_rx.c @@ -118,15 +118,7 @@ int mlx5_rx_descriptor_status(void *rx_queue, uint16_t offset) { struct mlx5_rxq_data *rxq = rx_queue; - struct mlx5_rxq_ctrl *rxq_ctrl = - container_of(rxq, struct mlx5_rxq_ctrl, rxq); - struct rte_eth_dev *dev = ETH_DEV(rxq_ctrl->priv); - if (dev->rx_pkt_burst == NULL || - dev->rx_pkt_burst == removed_rx_burst) { - rte_errno = ENOTSUP; - return -rte_errno; - } if (offset >= (1 << rxq->cqe_n)) { rte_errno = EINVAL; return -rte_errno; @@ -438,10 +430,10 @@ mlx5_rx_err_handle(struct mlx5_rxq_data *rxq, uint8_t vec) sm.is_wq = 1; sm.queue_id = rxq->idx; sm.state = IBV_WQS_RESET; - if (mlx5_queue_state_modify(ETH_DEV(rxq_ctrl->priv), &sm)) + if (mlx5_queue_state_modify(RXQ_DEV(rxq_ctrl), &sm)) return -1; if (rxq_ctrl->dump_file_n < - rxq_ctrl->priv->config.max_dump_files_num) { + RXQ_PORT(rxq_ctrl)->config.max_dump_files_num) { MKSTR(err_str, "Unexpected CQE error syndrome " "0x%02x CQN = %u RQN = %u wqe_counter = %u" " rq_ci = %u cq_ci = %u", u.err_cqe->syndrome, @@ -478,8 +470,7 @@ mlx5_rx_err_handle(struct mlx5_rxq_data *rxq, uint8_t vec) sm.is_wq = 1; sm.queue_id = rxq->idx; sm.state = IBV_WQS_RDY; - if (mlx5_queue_state_modify(ETH_DEV(rxq_ctrl->priv), - &sm)) + if (mlx5_queue_state_modify(RXQ_DEV(rxq_ctrl), &sm)) return -1; if (vec) { const uint32_t elts_n = diff --git a/drivers/net/mlx5/mlx5_rx.h b/drivers/net/mlx5/mlx5_rx.h index b21918223b8..c04c0c73349 100644 --- a/drivers/net/mlx5/mlx5_rx.h +++ b/drivers/net/mlx5/mlx5_rx.h @@ -22,6 +22,10 @@ /* Support tunnel matching. */ #define MLX5_FLOW_TUNNEL 10 +#define RXQ_PORT(rxq_ctrl) LIST_FIRST(&(rxq_ctrl)->owners)->priv +#define RXQ_DEV(rxq_ctrl) ETH_DEV(RXQ_PORT(rxq_ctrl)) +#define RXQ_PORT_ID(rxq_ctrl) PORT_ID(RXQ_PORT(rxq_ctrl)) + /* First entry must be NULL for comparison. */ #define mlx5_mr_btree_len(bt) ((bt)->len - 1) @@ -152,7 +156,6 @@ struct mlx5_rxq_ctrl { LIST_HEAD(priv, mlx5_rxq_priv) owners; /* Owner rxq list. */ struct mlx5_rxq_obj *obj; /* Verbs/DevX elements. */ struct mlx5_dev_ctx_shared *sh; /* Shared context. */ - struct mlx5_priv *priv; /* Back pointer to private data. */ enum mlx5_rxq_type type; /* Rxq type. */ unsigned int socket; /* CPU socket ID for allocations. */ uint32_t share_group; /* Group ID of shared RXQ. */ @@ -318,7 +321,7 @@ mlx5_rx_addr2mr(struct mlx5_rxq_data *rxq, uintptr_t addr) */ rxq_ctrl = container_of(rxq, struct mlx5_rxq_ctrl, rxq); mp = mlx5_rxq_mprq_enabled(rxq) ? 
rxq->mprq_mp : rxq->mp; - return mlx5_mr_mempool2mr_bh(&rxq_ctrl->priv->sh->cdev->mr_scache, + return mlx5_mr_mempool2mr_bh(&rxq_ctrl->sh->cdev->mr_scache, mr_ctrl, mp, addr); } diff --git a/drivers/net/mlx5/mlx5_rxq.c b/drivers/net/mlx5/mlx5_rxq.c index 7b637fda643..5a20966e2ca 100644 --- a/drivers/net/mlx5/mlx5_rxq.c +++ b/drivers/net/mlx5/mlx5_rxq.c @@ -148,8 +148,14 @@ rxq_alloc_elts_sprq(struct mlx5_rxq_ctrl *rxq_ctrl) buf = rte_pktmbuf_alloc(seg->mp); if (buf == NULL) { - DRV_LOG(ERR, "port %u empty mbuf pool", - PORT_ID(rxq_ctrl->priv)); + if (rxq_ctrl->share_group == 0) + DRV_LOG(ERR, "port %u queue %u empty mbuf pool", + RXQ_PORT_ID(rxq_ctrl), + rxq_ctrl->rxq.idx); + else + DRV_LOG(ERR, "share group %u queue %u empty mbuf pool", + rxq_ctrl->share_group, + rxq_ctrl->share_qid); rte_errno = ENOMEM; goto error; } @@ -193,11 +199,16 @@ rxq_alloc_elts_sprq(struct mlx5_rxq_ctrl *rxq_ctrl) for (j = 0; j < MLX5_VPMD_DESCS_PER_LOOP; ++j) (*rxq->elts)[elts_n + j] = &rxq->fake_mbuf; } - DRV_LOG(DEBUG, - "port %u SPRQ queue %u allocated and configured %u segments" - " (max %u packets)", - PORT_ID(rxq_ctrl->priv), rxq_ctrl->rxq.idx, elts_n, - elts_n / (1 << rxq_ctrl->rxq.sges_n)); + if (rxq_ctrl->share_group == 0) + DRV_LOG(DEBUG, + "port %u SPRQ queue %u allocated and configured %u segments (max %u packets)", + RXQ_PORT_ID(rxq_ctrl), rxq_ctrl->rxq.idx, elts_n, + elts_n / (1 << rxq_ctrl->rxq.sges_n)); + else + DRV_LOG(DEBUG, + "share group %u SPRQ queue %u allocated and configured %u segments (max %u packets)", + rxq_ctrl->share_group, rxq_ctrl->share_qid, elts_n, + elts_n / (1 << rxq_ctrl->rxq.sges_n)); return 0; error: err = rte_errno; /* Save rte_errno before cleanup. */ @@ -207,8 +218,12 @@ rxq_alloc_elts_sprq(struct mlx5_rxq_ctrl *rxq_ctrl) rte_pktmbuf_free_seg((*rxq_ctrl->rxq.elts)[i]); (*rxq_ctrl->rxq.elts)[i] = NULL; } - DRV_LOG(DEBUG, "port %u SPRQ queue %u failed, freed everything", - PORT_ID(rxq_ctrl->priv), rxq_ctrl->rxq.idx); + if (rxq_ctrl->share_group == 0) + DRV_LOG(DEBUG, "port %u SPRQ queue %u failed, freed everything", + RXQ_PORT_ID(rxq_ctrl), rxq_ctrl->rxq.idx); + else + DRV_LOG(DEBUG, "share group %u SPRQ queue %u failed, freed everything", + rxq_ctrl->share_group, rxq_ctrl->share_qid); rte_errno = err; /* Restore rte_errno. 
*/ return -rte_errno; } @@ -284,8 +299,12 @@ rxq_free_elts_sprq(struct mlx5_rxq_ctrl *rxq_ctrl) uint16_t used = q_n - (elts_ci - rxq->rq_pi); uint16_t i; - DRV_LOG(DEBUG, "port %u Rx queue %u freeing %d WRs", - PORT_ID(rxq_ctrl->priv), rxq->idx, q_n); + if (rxq_ctrl->share_group == 0) + DRV_LOG(DEBUG, "port %u Rx queue %u freeing %d WRs", + RXQ_PORT_ID(rxq_ctrl), rxq->idx, q_n); + else + DRV_LOG(DEBUG, "share group %u Rx queue %u freeing %d WRs", + rxq_ctrl->share_group, rxq_ctrl->share_qid, q_n); if (rxq->elts == NULL) return; /** @@ -1630,7 +1649,6 @@ mlx5_rxq_new(struct rte_eth_dev *dev, struct mlx5_rxq_priv *rxq, (!!(dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS)); tmpl->rxq.port_id = dev->data->port_id; tmpl->sh = priv->sh; - tmpl->priv = priv; tmpl->rxq.mp = rx_seg[0].mp; tmpl->rxq.elts_n = log2above(desc); tmpl->rxq.rq_repl_thresh = @@ -1690,7 +1708,6 @@ mlx5_rxq_hairpin_new(struct rte_eth_dev *dev, struct mlx5_rxq_priv *rxq, tmpl->rxq.rss_hash = 0; tmpl->rxq.port_id = dev->data->port_id; tmpl->sh = priv->sh; - tmpl->priv = priv; tmpl->rxq.mp = NULL; tmpl->rxq.elts_n = log2above(desc); tmpl->rxq.elts = NULL; diff --git a/drivers/net/mlx5/mlx5_rxtx_vec.c b/drivers/net/mlx5/mlx5_rxtx_vec.c index ecd273e00a8..511681841ca 100644 --- a/drivers/net/mlx5/mlx5_rxtx_vec.c +++ b/drivers/net/mlx5/mlx5_rxtx_vec.c @@ -550,7 +550,7 @@ mlx5_rxq_check_vec_support(struct mlx5_rxq_data *rxq) struct mlx5_rxq_ctrl *ctrl = container_of(rxq, struct mlx5_rxq_ctrl, rxq); - if (!ctrl->priv->config.rx_vec_en || rxq->sges_n != 0) + if (!RXQ_PORT(ctrl)->config.rx_vec_en || rxq->sges_n != 0) return -ENOTSUP; if (rxq->lro) return -ENOTSUP; diff --git a/drivers/net/mlx5/mlx5_trigger.c b/drivers/net/mlx5/mlx5_trigger.c index a124f74fcda..caafdf27e8f 100644 --- a/drivers/net/mlx5/mlx5_trigger.c +++ b/drivers/net/mlx5/mlx5_trigger.c @@ -131,9 +131,11 @@ mlx5_rxq_mempool_register_cb(struct rte_mempool *mp, void *opaque, * 0 on success, (-1) on failure and rte_errno is set. */ static int -mlx5_rxq_mempool_register(struct mlx5_rxq_ctrl *rxq_ctrl) +mlx5_rxq_mempool_register(struct rte_eth_dev *dev, + struct mlx5_rxq_ctrl *rxq_ctrl) { - struct mlx5_priv *priv = rxq_ctrl->priv; + struct mlx5_priv *priv = dev->data->dev_private; + struct mlx5_dev_ctx_shared *sh = rxq_ctrl->sh; struct rte_mempool *mp; uint32_t s; int ret = 0; @@ -148,9 +150,8 @@ mlx5_rxq_mempool_register(struct mlx5_rxq_ctrl *rxq_ctrl) } for (s = 0; s < rxq_ctrl->rxq.rxseg_n; s++) { mp = rxq_ctrl->rxq.rxseg[s].mp; - ret = mlx5_mr_mempool_register(&priv->sh->cdev->mr_scache, - priv->sh->cdev->pd, mp, - &priv->mp_id); + ret = mlx5_mr_mempool_register(&sh->cdev->mr_scache, + sh->cdev->pd, mp, &priv->mp_id); if (ret < 0 && rte_errno != EEXIST) return ret; rte_mempool_mem_iter(mp, mlx5_rxq_mempool_register_cb, @@ -213,7 +214,7 @@ mlx5_rxq_start(struct rte_eth_dev *dev) * the implicit registration is enabled or not, * Rx mempool destruction is tracked to free MRs. */ - if (mlx5_rxq_mempool_register(rxq_ctrl) < 0) + if (mlx5_rxq_mempool_register(dev, rxq_ctrl) < 0) goto error; ret = rxq_alloc_elts(rxq_ctrl); if (ret) -- 2.33.0
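
For readers skimming the diff, here is a minimal, self-contained sketch of the owner-list accessor pattern behind the new RXQ_PORT()/RXQ_DEV()/RXQ_PORT_ID() macros: once the per-queue back pointer is dropped, the owning port's private data is recovered from the first entry of the control structure's owner list. The struct and field names below are simplified stand-ins, not the driver's actual mlx5 types.

/* Sketch only: simplified stand-ins for mlx5_priv, mlx5_rxq_priv and
 * mlx5_rxq_ctrl, showing how port info is derived from the owner list
 * instead of a stored back pointer.
 */
#include <assert.h>
#include <stdio.h>
#include <sys/queue.h>

struct port_priv {                        /* stand-in for struct mlx5_priv */
	unsigned int port_id;
};

struct rxq_priv {                         /* stand-in for struct mlx5_rxq_priv */
	struct port_priv *priv;           /* owning port */
	LIST_ENTRY(rxq_priv) owner_entry; /* link in rxq_ctrl->owners */
};

struct rxq_ctrl {                         /* stand-in for struct mlx5_rxq_ctrl */
	LIST_HEAD(rxq_owners, rxq_priv) owners; /* ports owning this queue */
	/* no single-port back pointer anymore */
};

/* Port info comes from the first owner, as in the RXQ_PORT() macro above. */
#define RXQ_PORT(ctrl)    (LIST_FIRST(&(ctrl)->owners)->priv)
#define RXQ_PORT_ID(ctrl) (RXQ_PORT(ctrl)->port_id)

int
main(void)
{
	struct port_priv port = { .port_id = 3 };
	struct rxq_priv rxq = { .priv = &port };
	struct rxq_ctrl ctrl;

	LIST_INIT(&ctrl.owners);
	LIST_INSERT_HEAD(&ctrl.owners, &rxq, owner_entry);
	assert(RXQ_PORT(&ctrl) == &port);
	printf("owning port: %u\n", RXQ_PORT_ID(&ctrl));
	return 0;
}

Since a shareable Rx queue control may be owned by more than one port, a single stored port pointer would be ambiguous; that is also why the log messages in the patch switch to share_group/share_qid when share_group != 0.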