From mboxrd@z Thu Jan 1 00:00:00 1970
From: Xueming Li
To: Shahaf Shuler, Yongseok Koh
Cc: Xueming Li, dev@dpdk.org
Date: Sat, 26 May 2018 21:27:35 +0800
Message-Id: <20180526132735.21751-1-xuemingl@mellanox.com>
X-Mailer: git-send-email 2.13.3
In-Reply-To: <20180525063538.24363-1-xuemingl@mellanox.com>
References: <20180525063538.24363-1-xuemingl@mellanox.com>
Subject: [dpdk-dev] [PATCH v2] net/mlx5: fix memory region cache init

This patch moves the MR cache initialization from the device
configuration function to the probe function, to make sure it is
done only once.

Fixes: 974f1e7ef146 ("net/mlx5: add new memory region support")
Cc: yskoh@mellanox.com

Signed-off-by: Xueming Li
---
 drivers/net/mlx5/mlx5.c        | 13 +++++++++++++
 drivers/net/mlx5/mlx5_ethdev.c | 11 -----------
 drivers/net/mlx5/mlx5_mr.c     |  1 +
 3 files changed, 14 insertions(+), 11 deletions(-)

diff --git a/drivers/net/mlx5/mlx5.c b/drivers/net/mlx5/mlx5.c
index dae847493..3ef02e2d2 100644
--- a/drivers/net/mlx5/mlx5.c
+++ b/drivers/net/mlx5/mlx5.c
@@ -1193,6 +1193,19 @@ mlx5_pci_probe(struct rte_pci_driver *pci_drv __rte_unused,
 			goto port_error;
 		}
 		priv->config.max_verbs_prio = verb_priorities;
+		/*
+		 * Once the device is added to the list of memory event
+		 * callback, its global MR cache table cannot be expanded
+		 * on the fly because of deadlock. If it overflows, lookup
+		 * should be done by searching MR list linearly, which is slow.
+		 */
+		err = mlx5_mr_btree_init(&priv->mr.cache,
+					 MLX5_MR_BTREE_CACHE_N * 2,
+					 eth_dev->device->numa_node);
+		if (err) {
+			err = rte_errno;
+			goto port_error;
+		}
 		/* Add device to memory callback list. */
 		rte_rwlock_write_lock(&mlx5_shared_data->mem_event_rwlock);
 		LIST_INSERT_HEAD(&mlx5_shared_data->mem_event_cb_list,
diff --git a/drivers/net/mlx5/mlx5_ethdev.c b/drivers/net/mlx5/mlx5_ethdev.c
index f6cebae41..90488af33 100644
--- a/drivers/net/mlx5/mlx5_ethdev.c
+++ b/drivers/net/mlx5/mlx5_ethdev.c
@@ -392,17 +392,6 @@ mlx5_dev_configure(struct rte_eth_dev *dev)
 		if (++j == rxqs_n)
 			j = 0;
 	}
-	/*
-	 * Once the device is added to the list of memory event callback, its
-	 * global MR cache table cannot be expanded on the fly because of
-	 * deadlock. If it overflows, lookup should be done by searching MR list
-	 * linearly, which is slow.
-	 */
-	if (mlx5_mr_btree_init(&priv->mr.cache, MLX5_MR_BTREE_CACHE_N * 2,
-			       dev->device->numa_node)) {
-		/* rte_errno is already set. */
-		return -rte_errno;
-	}
 	return 0;
 }
diff --git a/drivers/net/mlx5/mlx5_mr.c b/drivers/net/mlx5/mlx5_mr.c
index abb1f5179..08105a443 100644
--- a/drivers/net/mlx5/mlx5_mr.c
+++ b/drivers/net/mlx5/mlx5_mr.c
@@ -191,6 +191,7 @@ mlx5_mr_btree_init(struct mlx5_mr_btree *bt, int n, int socket)
 		rte_errno = EINVAL;
 		return -rte_errno;
 	}
+	assert(!bt->table && !bt->size);
 	memset(bt, 0, sizeof(*bt));
 	bt->table = rte_calloc_socket("B-tree table",
 				      n, sizeof(struct mlx5_mr_cache),
-- 
2.13.3
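
As background for the move: mlx5_dev_configure() is invoked on every
rte_eth_dev_configure() call and so may run several times over a port's
lifetime, while mlx5_pci_probe() runs once per device, which is why the
B-tree allocation belongs in probe. Below is a minimal, self-contained
sketch of that one-shot contract; the names and types are illustrative
placeholders, not the driver's real structures.

#include <assert.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Placeholder cache type, standing in for the driver's MR B-tree. */
struct cache_btree {
	void *table;	/* dynamically allocated lookup table */
	size_t size;	/* number of entries allocated */
};

/* Allocate the table; the caller must invoke this exactly once. */
static int
cache_btree_init(struct cache_btree *bt, size_t n)
{
	/* Re-initializing a live tree would leak the previous table. */
	assert(bt->table == NULL && bt->size == 0);
	memset(bt, 0, sizeof(*bt));
	bt->table = calloc(n, sizeof(uint64_t));
	if (bt->table == NULL)
		return -1;
	bt->size = n;
	return 0;
}

int
main(void)
{
	struct cache_btree bt = { NULL, 0 };

	/* "probe" stage: runs once per device, so init belongs here. */
	if (cache_btree_init(&bt, 256) != 0)
		return 1;
	/*
	 * A "configure" stage may run many times; had init lived there,
	 * every call after the first would trip the assert above, or
	 * silently leak the old table with asserts compiled out.
	 */
	printf("cache ready: %zu entries\n", bt.size);
	free(bt.table);
	return 0;
}

With asserts compiled out, a second initialization would simply overwrite
bt->table and leak the first allocation, which is the failure mode the
move to probe (and the new assert in mlx5_mr_btree_init()) guards against.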
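
The comment carried over into mlx5.c also explains the sizing choice:
once the device joins the memory event callback list, the cache table
cannot grow, so lookups that miss the fixed-size table fall back to a
linear scan of the MR list. A rough sketch of that two-tier lookup shape
follows; again, the types are illustrative only, not the mlx5 structures.

#include <stddef.h>
#include <stdint.h>

struct mr_entry {
	uintptr_t start;
	uintptr_t end;
	uint32_t lkey;
	struct mr_entry *next;	/* linear list of all registered regions */
};

struct mr_cache {
	struct mr_entry *table;	/* fixed-size, sorted cache: fast path */
	size_t len;		/* entries currently cached */
	struct mr_entry *all;	/* full MR list: slow path on overflow */
};

/* Binary-search the fixed cache first; fall back to a linear scan. */
uint32_t
mr_lookup(const struct mr_cache *c, uintptr_t addr)
{
	size_t lo = 0, hi = c->len;

	while (lo < hi) {
		size_t mid = lo + (hi - lo) / 2;

		if (addr < c->table[mid].start)
			hi = mid;
		else if (addr >= c->table[mid].end)
			lo = mid + 1;
		else
			return c->table[mid].lkey;
	}
	/* Cache overflowed or entry not cached: slow but still correct. */
	for (const struct mr_entry *e = c->all; e != NULL; e = e->next)
		if (addr >= e->start && addr < e->end)
			return e->lkey;
	return UINT32_MAX;	/* address not registered */
}

Sizing the table at MLX5_MR_BTREE_CACHE_N * 2 up front presumably keeps
that slow path rare.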