From mboxrd@z Thu Jan 1 00:00:00 1970
From: Li Zhang
To: , , ,
CC: , , ,
Subject: [PATCH v1 13/17] vdpa/mlx5: add virtq creation task for MT management
Date: Mon, 6 Jun 2022 14:46:46 +0300
Message-ID: <20220606114650.209612-14-lizh@nvidia.com>
X-Mailer: git-send-email 2.31.1
In-Reply-To: <20220606114650.209612-1-lizh@nvidia.com>
References: <20220408075606.33056-1-lizh@nvidia.com> <20220606114650.209612-1-lizh@nvidia.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain
List-Id: DPDK patches and discussions

The virtq object and all of its sub-resources require many FW commands and can be accelerated by the MT management. Split virtq creation between the configuration threads. This accelerates the LM process and reduces its time by 20%.
Signed-off-by: Li Zhang
---
 drivers/vdpa/mlx5/mlx5_vdpa.h         |   9 +-
 drivers/vdpa/mlx5/mlx5_vdpa_cthread.c |  14 +++
 drivers/vdpa/mlx5/mlx5_vdpa_event.c   |   2 +-
 drivers/vdpa/mlx5/mlx5_vdpa_virtq.c   | 149 +++++++++++++++++++-------
 4 files changed, 134 insertions(+), 40 deletions(-)

diff --git a/drivers/vdpa/mlx5/mlx5_vdpa.h b/drivers/vdpa/mlx5/mlx5_vdpa.h
index 3316ce42be..35221f5ddc 100644
--- a/drivers/vdpa/mlx5/mlx5_vdpa.h
+++ b/drivers/vdpa/mlx5/mlx5_vdpa.h
@@ -80,6 +80,7 @@ enum {
 /* Vdpa task types. */
 enum mlx5_vdpa_task_type {
 	MLX5_VDPA_TASK_REG_MR = 1,
+	MLX5_VDPA_TASK_SETUP_VIRTQ,
 };
 
 /* Generic task information and size must be multiple of 4B. */
@@ -117,12 +118,12 @@ struct mlx5_vdpa_vmem_info {
 
 struct mlx5_vdpa_virtq {
 	SLIST_ENTRY(mlx5_vdpa_virtq) next;
-	uint8_t enable;
 	uint16_t index;
 	uint16_t vq_size;
 	uint8_t notifier_state;
-	bool stopped;
 	uint32_t configured:1;
+	uint32_t enable:1;
+	uint32_t stopped:1;
 	uint32_t version;
 	pthread_mutex_t virtq_lock;
 	struct mlx5_vdpa_priv *priv;
@@ -565,11 +566,13 @@ bool
 mlx5_vdpa_task_add(struct mlx5_vdpa_priv *priv,
 		uint32_t thrd_idx,
 		enum mlx5_vdpa_task_type task_type,
-		uint32_t *bulk_refcnt, uint32_t *bulk_err_cnt,
+		uint32_t *remaining_cnt, uint32_t *err_cnt,
 		void **task_data, uint32_t num);
 int mlx5_vdpa_register_mr(struct mlx5_vdpa_priv *priv, uint32_t idx);
 bool
 mlx5_vdpa_c_thread_wait_bulk_tasks_done(uint32_t *remaining_cnt,
 		uint32_t *err_cnt, uint32_t sleep_time);
+int
+mlx5_vdpa_virtq_setup(struct mlx5_vdpa_priv *priv, int index, bool reg_kick);
 #endif /* RTE_PMD_MLX5_VDPA_H_ */
diff --git a/drivers/vdpa/mlx5/mlx5_vdpa_cthread.c b/drivers/vdpa/mlx5/mlx5_vdpa_cthread.c
index 10391931ae..1389d369ae 100644
--- a/drivers/vdpa/mlx5/mlx5_vdpa_cthread.c
+++ b/drivers/vdpa/mlx5/mlx5_vdpa_cthread.c
@@ -100,6 +100,7 @@ mlx5_vdpa_c_thread_handle(void *arg)
 {
 	struct mlx5_vdpa_conf_thread_mng *multhrd = arg;
 	pthread_t thread_id = pthread_self();
+	struct mlx5_vdpa_virtq *virtq;
 	struct mlx5_vdpa_priv *priv;
 	struct mlx5_vdpa_task task;
 	struct rte_ring *rng;
@@ -139,6 +140,19 @@ mlx5_vdpa_c_thread_handle(void *arg)
 					__ATOMIC_RELAXED);
 			}
 			break;
+		case MLX5_VDPA_TASK_SETUP_VIRTQ:
+			virtq = &priv->virtqs[task.idx];
+			pthread_mutex_lock(&virtq->virtq_lock);
+			ret = mlx5_vdpa_virtq_setup(priv,
+				task.idx, false);
+			if (ret) {
+				DRV_LOG(ERR,
+					"Failed to setup virtq %d.", task.idx);
+				__atomic_fetch_add(
+					task.err_cnt, 1, __ATOMIC_RELAXED);
+			}
+			pthread_mutex_unlock(&virtq->virtq_lock);
+			break;
 		default:
 			DRV_LOG(ERR, "Invalid vdpa task type %d.",
 				task.type);
diff --git a/drivers/vdpa/mlx5/mlx5_vdpa_event.c b/drivers/vdpa/mlx5/mlx5_vdpa_event.c
index b45fbac146..f782b6b832 100644
--- a/drivers/vdpa/mlx5/mlx5_vdpa_event.c
+++ b/drivers/vdpa/mlx5/mlx5_vdpa_event.c
@@ -371,7 +371,7 @@ mlx5_vdpa_err_interrupt_handler(void *cb_arg __rte_unused)
 		goto unlock;
 	if (rte_rdtsc() / rte_get_tsc_hz() < MLX5_VDPA_ERROR_TIME_SEC)
 		goto unlock;
-	virtq->stopped = true;
+	virtq->stopped = 1;
 	/* Query error info. */
 	if (mlx5_vdpa_virtq_query(priv, vq_index))
 		goto log;
diff --git a/drivers/vdpa/mlx5/mlx5_vdpa_virtq.c b/drivers/vdpa/mlx5/mlx5_vdpa_virtq.c
index 0b317655db..db05220e76 100644
--- a/drivers/vdpa/mlx5/mlx5_vdpa_virtq.c
+++ b/drivers/vdpa/mlx5/mlx5_vdpa_virtq.c
@@ -111,8 +111,9 @@ mlx5_vdpa_virtqs_cleanup(struct mlx5_vdpa_priv *priv)
 	for (i = 0; i < priv->caps.max_num_virtio_queues; i++) {
 		struct mlx5_vdpa_virtq *virtq = &priv->virtqs[i];
 
+		if (virtq->index != i)
+			continue;
 		pthread_mutex_lock(&virtq->virtq_lock);
-		virtq->configured = 0;
 		for (j = 0; j < RTE_DIM(virtq->umems); ++j) {
 			if (virtq->umems[j].obj) {
 				claim_zero(mlx5_glue->devx_umem_dereg
@@ -131,7 +132,6 @@ mlx5_vdpa_virtqs_cleanup(struct mlx5_vdpa_priv *priv)
 		}
 	}
 }
-
 static int
 mlx5_vdpa_virtq_unset(struct mlx5_vdpa_virtq *virtq)
 {
@@ -191,7 +191,7 @@ mlx5_vdpa_virtq_stop(struct mlx5_vdpa_priv *priv, int index)
 	ret = mlx5_vdpa_virtq_modify(virtq, 0);
 	if (ret)
 		return -1;
-	virtq->stopped = true;
+	virtq->stopped = 1;
 	DRV_LOG(DEBUG, "vid %u virtq %u was stopped.", priv->vid, index);
 	return mlx5_vdpa_virtq_query(priv, index);
 }
@@ -411,7 +411,38 @@ mlx5_vdpa_is_modify_virtq_supported(struct mlx5_vdpa_priv *priv)
 }
 
 static int
-mlx5_vdpa_virtq_setup(struct mlx5_vdpa_priv *priv, int index)
+mlx5_vdpa_virtq_doorbell_setup(struct mlx5_vdpa_virtq *virtq,
+		struct rte_vhost_vring *vq, int index)
+{
+	virtq->intr_handle =
+		rte_intr_instance_alloc(RTE_INTR_INSTANCE_F_SHARED);
+	if (virtq->intr_handle == NULL) {
+		DRV_LOG(ERR, "Fail to allocate intr_handle");
+		return -1;
+	}
+	if (rte_intr_fd_set(virtq->intr_handle, vq->kickfd))
+		return -1;
+	if (rte_intr_fd_get(virtq->intr_handle) == -1) {
+		DRV_LOG(WARNING, "Virtq %d kickfd is invalid.", index);
+	} else {
+		if (rte_intr_type_set(virtq->intr_handle,
+			RTE_INTR_HANDLE_EXT))
+			return -1;
+		if (rte_intr_callback_register(virtq->intr_handle,
+			mlx5_vdpa_virtq_kick_handler, virtq)) {
+			(void)rte_intr_fd_set(virtq->intr_handle, -1);
+			DRV_LOG(ERR, "Failed to register virtq %d interrupt.",
+				index);
+			return -1;
+		}
+		DRV_LOG(DEBUG, "Register fd %d interrupt for virtq %d.",
+			rte_intr_fd_get(virtq->intr_handle), index);
+	}
+	return 0;
+}
+
+int
+mlx5_vdpa_virtq_setup(struct mlx5_vdpa_priv *priv, int index, bool reg_kick)
 {
 	struct mlx5_vdpa_virtq *virtq = &priv->virtqs[index];
 	struct rte_vhost_vring vq;
@@ -455,33 +486,11 @@ mlx5_vdpa_virtq_setup(struct mlx5_vdpa_priv *priv, int index)
 	rte_write32(virtq->index, priv->virtq_db_addr);
 	rte_spinlock_unlock(&priv->db_lock);
 	/* Setup doorbell mapping. */
-	virtq->intr_handle =
-		rte_intr_instance_alloc(RTE_INTR_INSTANCE_F_SHARED);
-	if (virtq->intr_handle == NULL) {
-		DRV_LOG(ERR, "Fail to allocate intr_handle");
-		goto error;
-	}
-
-	if (rte_intr_fd_set(virtq->intr_handle, vq.kickfd))
-		goto error;
-
-	if (rte_intr_fd_get(virtq->intr_handle) == -1) {
-		DRV_LOG(WARNING, "Virtq %d kickfd is invalid.", index);
-	} else {
-		if (rte_intr_type_set(virtq->intr_handle, RTE_INTR_HANDLE_EXT))
-			goto error;
-
-		if (rte_intr_callback_register(virtq->intr_handle,
-					       mlx5_vdpa_virtq_kick_handler,
-					       virtq)) {
-			(void)rte_intr_fd_set(virtq->intr_handle, -1);
+	if (reg_kick) {
+		if (mlx5_vdpa_virtq_doorbell_setup(virtq, &vq, index)) {
 			DRV_LOG(ERR, "Failed to register virtq %d interrupt.",
 				index);
 			goto error;
-		} else {
-			DRV_LOG(DEBUG, "Register fd %d interrupt for virtq %d.",
-				rte_intr_fd_get(virtq->intr_handle),
-				index);
 		}
 	}
 	/* Subscribe virtq error event. */
@@ -497,7 +506,6 @@ mlx5_vdpa_virtq_setup(struct mlx5_vdpa_priv *priv, int index)
 		rte_errno = errno;
 		goto error;
 	}
-	virtq->stopped = false;
 	/* Initial notification to ask Qemu handling completed buffers. */
 	if (virtq->eqp.cq.callfd != -1)
 		eventfd_write(virtq->eqp.cq.callfd, (eventfd_t)1);
@@ -567,10 +575,12 @@ mlx5_vdpa_features_validate(struct mlx5_vdpa_priv *priv)
 int
 mlx5_vdpa_virtqs_prepare(struct mlx5_vdpa_priv *priv)
 {
-	uint32_t i;
-	uint16_t nr_vring = rte_vhost_get_vring_num(priv->vid);
 	int ret = rte_vhost_get_negotiated_features(priv->vid, &priv->features);
+	uint16_t nr_vring = rte_vhost_get_vring_num(priv->vid);
+	uint32_t remaining_cnt = 0, err_cnt = 0, task_num = 0;
+	uint32_t i, thrd_idx, data[1];
 	struct mlx5_vdpa_virtq *virtq;
+	struct rte_vhost_vring vq;
 
 	if (ret || mlx5_vdpa_features_validate(priv)) {
 		DRV_LOG(ERR, "Failed to configure negotiated features.");
@@ -590,16 +600,83 @@ mlx5_vdpa_virtqs_prepare(struct mlx5_vdpa_priv *priv)
 		return -1;
 	}
 	priv->nr_virtqs = nr_vring;
-	for (i = 0; i < nr_vring; i++) {
-		virtq = &priv->virtqs[i];
-		if (virtq->enable) {
+	if (priv->use_c_thread) {
+		uint32_t main_task_idx[nr_vring];
+
+		for (i = 0; i < nr_vring; i++) {
+			virtq = &priv->virtqs[i];
+			if (!virtq->enable)
+				continue;
+			thrd_idx = i % (conf_thread_mng.max_thrds + 1);
+			if (!thrd_idx) {
+				main_task_idx[task_num] = i;
+				task_num++;
+				continue;
+			}
+			thrd_idx = priv->last_c_thrd_idx + 1;
+			if (thrd_idx >= conf_thread_mng.max_thrds)
+				thrd_idx = 0;
+			priv->last_c_thrd_idx = thrd_idx;
+			data[0] = i;
+			if (mlx5_vdpa_task_add(priv, thrd_idx,
+				MLX5_VDPA_TASK_SETUP_VIRTQ,
+				&remaining_cnt, &err_cnt,
+				(void **)&data, 1)) {
+				DRV_LOG(ERR, "Fail to add "
+				"task setup virtq (%d).", i);
+				main_task_idx[task_num] = i;
+				task_num++;
+			}
+		}
+		for (i = 0; i < task_num; i++) {
+			virtq = &priv->virtqs[main_task_idx[i]];
 			pthread_mutex_lock(&virtq->virtq_lock);
-			if (mlx5_vdpa_virtq_setup(priv, i)) {
+			if (mlx5_vdpa_virtq_setup(priv,
+				main_task_idx[i], false)) {
 				pthread_mutex_unlock(&virtq->virtq_lock);
 				goto error;
 			}
 			pthread_mutex_unlock(&virtq->virtq_lock);
 		}
+		if (mlx5_vdpa_c_thread_wait_bulk_tasks_done(&remaining_cnt,
+			&err_cnt, 2000)) {
+			DRV_LOG(ERR,
+			"Failed to wait virt-queue setup tasks ready.");
+			goto error;
+		}
+		for (i = 0; i < nr_vring; i++) {
+			/* Setup doorbell mapping in order for Qemu. */
+			virtq = &priv->virtqs[i];
+			pthread_mutex_lock(&virtq->virtq_lock);
+			if (!virtq->enable || !virtq->configured) {
+				pthread_mutex_unlock(&virtq->virtq_lock);
+				continue;
+			}
+			if (rte_vhost_get_vhost_vring(priv->vid, i, &vq)) {
+				pthread_mutex_unlock(&virtq->virtq_lock);
+				goto error;
+			}
+			if (mlx5_vdpa_virtq_doorbell_setup(virtq, &vq, i)) {
+				pthread_mutex_unlock(&virtq->virtq_lock);
+				DRV_LOG(ERR,
+				"Failed to register virtq %d interrupt.", i);
+				goto error;
+			}
+			pthread_mutex_unlock(&virtq->virtq_lock);
+		}
+	} else {
+		for (i = 0; i < nr_vring; i++) {
+			virtq = &priv->virtqs[i];
+			pthread_mutex_lock(&virtq->virtq_lock);
+			if (virtq->enable) {
+				if (mlx5_vdpa_virtq_setup(priv, i, true)) {
+					pthread_mutex_unlock(
+						&virtq->virtq_lock);
+					goto error;
+				}
+			}
+			pthread_mutex_unlock(&virtq->virtq_lock);
+		}
 	}
 	return 0;
 error:
@@ -663,7 +740,7 @@ mlx5_vdpa_virtq_enable(struct mlx5_vdpa_priv *priv, int index, int enable)
 		mlx5_vdpa_virtq_unset(virtq);
 	}
 	if (enable) {
-		ret = mlx5_vdpa_virtq_setup(priv, index);
+		ret = mlx5_vdpa_virtq_setup(priv, index, true);
 		if (ret) {
 			DRV_LOG(ERR, "Failed to setup virtq %d.", index);
 			return ret;
-- 
2.31.1