From: Xueming Li <xuemingl@nvidia.com>
To: dev@dpdk.org
Cc: Lior Margalit, Matan Azrad, Viacheslav Ovsiienko
Date: Wed, 3 Nov 2021 15:58:34 +0800
Subject: [dpdk-dev] [PATCH v3 10/14] net/mlx5: remove port info from shareable Rx queue
Message-ID: <20211103075838.1486056-11-xuemingl@nvidia.com>
X-Mailer: git-send-email 2.33.0
In-Reply-To: <20211103075838.1486056-1-xuemingl@nvidia.com>
References: <20210727034204.20649-1-xuemingl@nvidia.com> <20211103075838.1486056-1-xuemingl@nvidia.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain
List-Id: DPDK patches and discussions
To prepare for shared Rx queue, remove port info from the shareable Rx
queue control structure.

Signed-off-by: Xueming Li <xuemingl@nvidia.com>
---
 drivers/net/mlx5/mlx5_devx.c     |  2 +-
 drivers/net/mlx5/mlx5_rx.c       | 15 +++--------
 drivers/net/mlx5/mlx5_rx.h       |  7 ++++--
 drivers/net/mlx5/mlx5_rxq.c      | 43 ++++++++++++++++++++++----------
 drivers/net/mlx5/mlx5_rxtx_vec.c |  2 +-
 drivers/net/mlx5/mlx5_trigger.c  | 13 +++++-----
 6 files changed, 47 insertions(+), 35 deletions(-)

diff --git a/drivers/net/mlx5/mlx5_devx.c b/drivers/net/mlx5/mlx5_devx.c
index 443252df05d..8b3651f5034 100644
--- a/drivers/net/mlx5/mlx5_devx.c
+++ b/drivers/net/mlx5/mlx5_devx.c
@@ -918,7 +918,7 @@ mlx5_rxq_devx_obj_drop_create(struct rte_eth_dev *dev)
 	}
 	rxq->rxq_ctrl = rxq_ctrl;
 	rxq_ctrl->type = MLX5_RXQ_TYPE_STANDARD;
-	rxq_ctrl->priv = priv;
+	rxq_ctrl->sh = priv->sh;
 	rxq_ctrl->obj = rxq;
 	rxq_data = &rxq_ctrl->rxq;
 	/* Create CQ using DevX API. */
diff --git a/drivers/net/mlx5/mlx5_rx.c b/drivers/net/mlx5/mlx5_rx.c
index 258a6453144..d41905a2a04 100644
--- a/drivers/net/mlx5/mlx5_rx.c
+++ b/drivers/net/mlx5/mlx5_rx.c
@@ -118,15 +118,7 @@ int
 mlx5_rx_descriptor_status(void *rx_queue, uint16_t offset)
 {
 	struct mlx5_rxq_data *rxq = rx_queue;
-	struct mlx5_rxq_ctrl *rxq_ctrl =
-			container_of(rxq, struct mlx5_rxq_ctrl, rxq);
-	struct rte_eth_dev *dev = ETH_DEV(rxq_ctrl->priv);
 
-	if (dev->rx_pkt_burst == NULL ||
-	    dev->rx_pkt_burst == removed_rx_burst) {
-		rte_errno = ENOTSUP;
-		return -rte_errno;
-	}
 	if (offset >= (1 << rxq->cqe_n)) {
 		rte_errno = EINVAL;
 		return -rte_errno;
@@ -438,10 +430,10 @@ mlx5_rx_err_handle(struct mlx5_rxq_data *rxq, uint8_t vec)
 			sm.is_wq = 1;
 			sm.queue_id = rxq->idx;
 			sm.state = IBV_WQS_RESET;
-			if (mlx5_queue_state_modify(ETH_DEV(rxq_ctrl->priv), &sm))
+			if (mlx5_queue_state_modify(RXQ_DEV(rxq_ctrl), &sm))
 				return -1;
 			if (rxq_ctrl->dump_file_n <
-			    rxq_ctrl->priv->config.max_dump_files_num) {
+			    RXQ_PORT(rxq_ctrl)->config.max_dump_files_num) {
 				MKSTR(err_str, "Unexpected CQE error syndrome "
 				      "0x%02x CQN = %u RQN = %u wqe_counter = %u"
 				      " rq_ci = %u cq_ci = %u", u.err_cqe->syndrome,
@@ -478,8 +470,7 @@ mlx5_rx_err_handle(struct mlx5_rxq_data *rxq, uint8_t vec)
 					sm.is_wq = 1;
 					sm.queue_id = rxq->idx;
 					sm.state = IBV_WQS_RDY;
-					if (mlx5_queue_state_modify(ETH_DEV(rxq_ctrl->priv),
-								    &sm))
+					if (mlx5_queue_state_modify(RXQ_DEV(rxq_ctrl), &sm))
 						return -1;
 					if (vec) {
 						const uint32_t elts_n =
diff --git a/drivers/net/mlx5/mlx5_rx.h b/drivers/net/mlx5/mlx5_rx.h
index b21918223b8..c04c0c73349 100644
--- a/drivers/net/mlx5/mlx5_rx.h
+++ b/drivers/net/mlx5/mlx5_rx.h
@@ -22,6 +22,10 @@
 /* Support tunnel matching. */
 #define MLX5_FLOW_TUNNEL 10
 
+#define RXQ_PORT(rxq_ctrl) LIST_FIRST(&(rxq_ctrl)->owners)->priv
+#define RXQ_DEV(rxq_ctrl) ETH_DEV(RXQ_PORT(rxq_ctrl))
+#define RXQ_PORT_ID(rxq_ctrl) PORT_ID(RXQ_PORT(rxq_ctrl))
+
 /* First entry must be NULL for comparison. */
 #define mlx5_mr_btree_len(bt) ((bt)->len - 1)
 
@@ -152,7 +156,6 @@ struct mlx5_rxq_ctrl {
 	LIST_HEAD(priv, mlx5_rxq_priv) owners; /* Owner rxq list. */
 	struct mlx5_rxq_obj *obj; /* Verbs/DevX elements. */
 	struct mlx5_dev_ctx_shared *sh; /* Shared context. */
-	struct mlx5_priv *priv; /* Back pointer to private data. */
 	enum mlx5_rxq_type type; /* Rxq type. */
 	unsigned int socket; /* CPU socket ID for allocations. */
 	uint32_t share_group; /* Group ID of shared RXQ. */
@@ -318,7 +321,7 @@ mlx5_rx_addr2mr(struct mlx5_rxq_data *rxq, uintptr_t addr)
 	 */
 	rxq_ctrl = container_of(rxq, struct mlx5_rxq_ctrl, rxq);
 	mp = mlx5_rxq_mprq_enabled(rxq) ? rxq->mprq_mp : rxq->mp;
-	return mlx5_mr_mempool2mr_bh(&rxq_ctrl->priv->sh->cdev->mr_scache,
+	return mlx5_mr_mempool2mr_bh(&rxq_ctrl->sh->cdev->mr_scache,
 				     mr_ctrl, mp, addr);
 }
 
diff --git a/drivers/net/mlx5/mlx5_rxq.c b/drivers/net/mlx5/mlx5_rxq.c
index 7b637fda643..5a20966e2ca 100644
--- a/drivers/net/mlx5/mlx5_rxq.c
+++ b/drivers/net/mlx5/mlx5_rxq.c
@@ -148,8 +148,14 @@ rxq_alloc_elts_sprq(struct mlx5_rxq_ctrl *rxq_ctrl)
 
 		buf = rte_pktmbuf_alloc(seg->mp);
 		if (buf == NULL) {
-			DRV_LOG(ERR, "port %u empty mbuf pool",
-				PORT_ID(rxq_ctrl->priv));
+			if (rxq_ctrl->share_group == 0)
+				DRV_LOG(ERR, "port %u queue %u empty mbuf pool",
+					RXQ_PORT_ID(rxq_ctrl),
+					rxq_ctrl->rxq.idx);
+			else
+				DRV_LOG(ERR, "share group %u queue %u empty mbuf pool",
+					rxq_ctrl->share_group,
+					rxq_ctrl->share_qid);
 			rte_errno = ENOMEM;
 			goto error;
 		}
@@ -193,11 +199,16 @@ rxq_alloc_elts_sprq(struct mlx5_rxq_ctrl *rxq_ctrl)
 		for (j = 0; j < MLX5_VPMD_DESCS_PER_LOOP; ++j)
 			(*rxq->elts)[elts_n + j] = &rxq->fake_mbuf;
 	}
-	DRV_LOG(DEBUG,
-		"port %u SPRQ queue %u allocated and configured %u segments"
-		" (max %u packets)",
-		PORT_ID(rxq_ctrl->priv), rxq_ctrl->rxq.idx, elts_n,
-		elts_n / (1 << rxq_ctrl->rxq.sges_n));
+	if (rxq_ctrl->share_group == 0)
+		DRV_LOG(DEBUG,
+			"port %u SPRQ queue %u allocated and configured %u segments (max %u packets)",
+			RXQ_PORT_ID(rxq_ctrl), rxq_ctrl->rxq.idx, elts_n,
+			elts_n / (1 << rxq_ctrl->rxq.sges_n));
+	else
+		DRV_LOG(DEBUG,
+			"share group %u SPRQ queue %u allocated and configured %u segments (max %u packets)",
+			rxq_ctrl->share_group, rxq_ctrl->share_qid, elts_n,
+			elts_n / (1 << rxq_ctrl->rxq.sges_n));
 	return 0;
 error:
 	err = rte_errno; /* Save rte_errno before cleanup. */
@@ -207,8 +218,12 @@ rxq_alloc_elts_sprq(struct mlx5_rxq_ctrl *rxq_ctrl)
 		rte_pktmbuf_free_seg((*rxq_ctrl->rxq.elts)[i]);
 		(*rxq_ctrl->rxq.elts)[i] = NULL;
 	}
-	DRV_LOG(DEBUG, "port %u SPRQ queue %u failed, freed everything",
-		PORT_ID(rxq_ctrl->priv), rxq_ctrl->rxq.idx);
+	if (rxq_ctrl->share_group == 0)
+		DRV_LOG(DEBUG, "port %u SPRQ queue %u failed, freed everything",
+			RXQ_PORT_ID(rxq_ctrl), rxq_ctrl->rxq.idx);
+	else
+		DRV_LOG(DEBUG, "share group %u SPRQ queue %u failed, freed everything",
+			rxq_ctrl->share_group, rxq_ctrl->share_qid);
 	rte_errno = err; /* Restore rte_errno. */
 	return -rte_errno;
 }
@@ -284,8 +299,12 @@ rxq_free_elts_sprq(struct mlx5_rxq_ctrl *rxq_ctrl)
 	uint16_t used = q_n - (elts_ci - rxq->rq_pi);
 	uint16_t i;
 
-	DRV_LOG(DEBUG, "port %u Rx queue %u freeing %d WRs",
-		PORT_ID(rxq_ctrl->priv), rxq->idx, q_n);
+	if (rxq_ctrl->share_group == 0)
+		DRV_LOG(DEBUG, "port %u Rx queue %u freeing %d WRs",
+			RXQ_PORT_ID(rxq_ctrl), rxq->idx, q_n);
+	else
+		DRV_LOG(DEBUG, "share group %u Rx queue %u freeing %d WRs",
+			rxq_ctrl->share_group, rxq_ctrl->share_qid, q_n);
 	if (rxq->elts == NULL)
 		return;
 	/**
@@ -1630,7 +1649,6 @@ mlx5_rxq_new(struct rte_eth_dev *dev, struct mlx5_rxq_priv *rxq,
 		(!!(dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS));
 	tmpl->rxq.port_id = dev->data->port_id;
 	tmpl->sh = priv->sh;
-	tmpl->priv = priv;
 	tmpl->rxq.mp = rx_seg[0].mp;
 	tmpl->rxq.elts_n = log2above(desc);
 	tmpl->rxq.rq_repl_thresh =
@@ -1690,7 +1708,6 @@ mlx5_rxq_hairpin_new(struct rte_eth_dev *dev, struct mlx5_rxq_priv *rxq,
 	tmpl->rxq.rss_hash = 0;
 	tmpl->rxq.port_id = dev->data->port_id;
 	tmpl->sh = priv->sh;
-	tmpl->priv = priv;
 	tmpl->rxq.mp = NULL;
 	tmpl->rxq.elts_n = log2above(desc);
 	tmpl->rxq.elts = NULL;
diff --git a/drivers/net/mlx5/mlx5_rxtx_vec.c b/drivers/net/mlx5/mlx5_rxtx_vec.c
index ecd273e00a8..511681841ca 100644
--- a/drivers/net/mlx5/mlx5_rxtx_vec.c
+++ b/drivers/net/mlx5/mlx5_rxtx_vec.c
@@ -550,7 +550,7 @@ mlx5_rxq_check_vec_support(struct mlx5_rxq_data *rxq)
 	struct mlx5_rxq_ctrl *ctrl =
 		container_of(rxq, struct mlx5_rxq_ctrl, rxq);
 
-	if (!ctrl->priv->config.rx_vec_en || rxq->sges_n != 0)
+	if (!RXQ_PORT(ctrl)->config.rx_vec_en || rxq->sges_n != 0)
 		return -ENOTSUP;
 	if (rxq->lro)
 		return -ENOTSUP;
diff --git a/drivers/net/mlx5/mlx5_trigger.c b/drivers/net/mlx5/mlx5_trigger.c
index a124f74fcda..caafdf27e8f 100644
--- a/drivers/net/mlx5/mlx5_trigger.c
+++ b/drivers/net/mlx5/mlx5_trigger.c
@@ -131,9 +131,11 @@ mlx5_rxq_mempool_register_cb(struct rte_mempool *mp, void *opaque,
  *   0 on success, (-1) on failure and rte_errno is set.
  */
 static int
-mlx5_rxq_mempool_register(struct mlx5_rxq_ctrl *rxq_ctrl)
+mlx5_rxq_mempool_register(struct rte_eth_dev *dev,
+			  struct mlx5_rxq_ctrl *rxq_ctrl)
 {
-	struct mlx5_priv *priv = rxq_ctrl->priv;
+	struct mlx5_priv *priv = dev->data->dev_private;
+	struct mlx5_dev_ctx_shared *sh = rxq_ctrl->sh;
 	struct rte_mempool *mp;
 	uint32_t s;
 	int ret = 0;
@@ -148,9 +150,8 @@ mlx5_rxq_mempool_register(struct mlx5_rxq_ctrl *rxq_ctrl)
 	}
 	for (s = 0; s < rxq_ctrl->rxq.rxseg_n; s++) {
 		mp = rxq_ctrl->rxq.rxseg[s].mp;
-		ret = mlx5_mr_mempool_register(&priv->sh->cdev->mr_scache,
-					       priv->sh->cdev->pd, mp,
-					       &priv->mp_id);
+		ret = mlx5_mr_mempool_register(&sh->cdev->mr_scache,
+					       sh->cdev->pd, mp, &priv->mp_id);
 		if (ret < 0 && rte_errno != EEXIST)
 			return ret;
 		rte_mempool_mem_iter(mp, mlx5_rxq_mempool_register_cb,
@@ -213,7 +214,7 @@ mlx5_rxq_start(struct rte_eth_dev *dev)
 		 * the implicit registration is enabled or not,
 		 * Rx mempool destruction is tracked to free MRs.
 		 */
-		if (mlx5_rxq_mempool_register(rxq_ctrl) < 0)
+		if (mlx5_rxq_mempool_register(dev, rxq_ctrl) < 0)
			goto error;
 		ret = rxq_alloc_elts(rxq_ctrl);
 		if (ret)
-- 
2.33.0
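
Illustrative note on the change above: with the priv back-pointer gone from
mlx5_rxq_ctrl, the owning port is now reached through the first entry of the
control structure's owners list, which is what the new RXQ_PORT()/RXQ_DEV()/
RXQ_PORT_ID() macros encapsulate. The sketch below is a minimal, self-contained
approximation of that lookup; the struct layouts and the owner_entry linkage
member name are simplified assumptions for illustration, not the real driver
definitions.

/*
 * Minimal sketch (not the real driver code): how per-port private data is
 * recovered once the shared Rx queue control no longer stores "priv",
 * mirroring the RXQ_PORT() macro added by this patch.
 */
#include <stdio.h>
#include <sys/queue.h>

struct mlx5_priv {
	unsigned int port_id; /* stand-in for the per-port private data */
};

struct mlx5_rxq_priv {
	struct mlx5_priv *priv;                /* back-pointer to the owning port */
	LIST_ENTRY(mlx5_rxq_priv) owner_entry; /* linkage on the owners list (name assumed) */
};

struct mlx5_rxq_ctrl {
	LIST_HEAD(rxq_owners, mlx5_rxq_priv) owners; /* owner Rx queues, no priv field */
};

/* Same idea as the macro introduced in mlx5_rx.h by this patch. */
#define RXQ_PORT(rxq_ctrl) (LIST_FIRST(&(rxq_ctrl)->owners)->priv)

int main(void)
{
	struct mlx5_priv port = { .port_id = 0 };
	struct mlx5_rxq_priv rxq = { .priv = &port };
	struct mlx5_rxq_ctrl ctrl;

	/* Attach one owner queue, then resolve the port through the list head. */
	LIST_INIT(&ctrl.owners);
	LIST_INSERT_HEAD(&ctrl.owners, &rxq, owner_entry);
	printf("owning port: %u\n", RXQ_PORT(&ctrl)->port_id);
	return 0;
}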