From mboxrd@z Thu Jan 1 00:00:00 1970
From: Li Zhang <lizh@nvidia.com>
To: , , ,
CC: , , ,
Subject: [PATCH v1 13/17] vdpa/mlx5: add virtq creation task for MT management
Date: Mon, 6 Jun 2022 14:21:02 +0300
Message-ID: <20220606112109.208873-26-lizh@nvidia.com>
X-Mailer: git-send-email 2.31.1
In-Reply-To: <20220606112109.208873-1-lizh@nvidia.com>
References: <20220408075606.33056-1-lizh@nvidia.com>
 <20220606112109.208873-1-lizh@nvidia.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain

Creating the virtq object and all its sub-resources requires many FW
commands, so it can be accelerated by the MT (multi-thread)
management mechanism. Split the virtq creation work among the
configuration threads. This speeds up the LM (live migration)
process, reducing its time by 20%.

Signed-off-by: Li Zhang <lizh@nvidia.com>
---
 drivers/vdpa/mlx5/mlx5_vdpa.h         |   9 +-
 drivers/vdpa/mlx5/mlx5_vdpa_cthread.c |  14 +++
 drivers/vdpa/mlx5/mlx5_vdpa_event.c   |   2 +-
 drivers/vdpa/mlx5/mlx5_vdpa_virtq.c   | 149 +++++++++++++++++++------
 4 files changed, 134 insertions(+), 40 deletions(-)
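Note for reviewers (below the cut, so git-am ignores it): the scheme in
mlx5_vdpa_virtqs_prepare() keeps every (max_thrds + 1)-th enabled virtq
on the main thread, hands the rest to the configuration threads
round-robin, and then polls the shared remaining/error counters until
the worker tasks drain. The standalone C model below sketches only that
split with plain pthreads and C11 atomics; the names (virtq_setup,
worker, MAX_THRDS, NR_VRING) are illustrative stand-ins, not driver
code. Build with: cc -pthread sketch.c

/*
 * Standalone model of the MT task split: the main thread keeps every
 * (MAX_THRDS + 1)-th virtq for itself, distributes the others to
 * workers round-robin, then waits on a shared "remaining" counter,
 * as mlx5_vdpa_c_thread_wait_bulk_tasks_done() does in the driver.
 */
#include <pthread.h>
#include <sched.h>
#include <stdatomic.h>
#include <stdio.h>

#define MAX_THRDS 3	/* configuration (worker) threads */
#define NR_VRING 16	/* virtqs to set up */

static atomic_uint remaining_cnt;
static atomic_uint err_cnt;
static int assigned[MAX_THRDS][NR_VRING];	/* per-worker task lists */
static int assigned_num[MAX_THRDS];

/* Stand-in for mlx5_vdpa_virtq_setup(priv, idx, false); the real call
 * issues the FW command sequence that creates the virtq object. */
static int
virtq_setup(int idx)
{
	printf("setup virtq %d\n", idx);
	return 0;
}

static void *
worker(void *arg)
{
	int w = *(int *)arg;
	int i;

	for (i = 0; i < assigned_num[w]; i++) {
		if (virtq_setup(assigned[w][i]))
			atomic_fetch_add(&err_cnt, 1);
		atomic_fetch_sub(&remaining_cnt, 1);
	}
	return NULL;
}

int
main(void)
{
	pthread_t thrds[MAX_THRDS];
	int ids[MAX_THRDS];
	int main_task_idx[NR_VRING];
	int task_num = 0, last_thrd = 0, i, w;

	for (i = 0; i < NR_VRING; i++) {
		if (i % (MAX_THRDS + 1) == 0) {
			/* Main thread keeps its own share. */
			main_task_idx[task_num++] = i;
			continue;
		}
		w = last_thrd;
		last_thrd = (last_thrd + 1) % MAX_THRDS;
		assigned[w][assigned_num[w]++] = i;
		atomic_fetch_add(&remaining_cnt, 1);
	}
	for (w = 0; w < MAX_THRDS; w++) {
		ids[w] = w;
		pthread_create(&thrds[w], NULL, worker, &ids[w]);
	}
	/* Main thread works its share in parallel with the workers. */
	for (i = 0; i < task_num; i++)
		if (virtq_setup(main_task_idx[i]))
			atomic_fetch_add(&err_cnt, 1);
	/* Poll until all worker tasks drain. */
	while (atomic_load(&remaining_cnt) != 0)
		sched_yield();
	for (w = 0; w < MAX_THRDS; w++)
		pthread_join(thrds[w], NULL);
	printf("done, %u errors\n", atomic_load(&err_cnt));
	return 0;
}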
diff --git a/drivers/vdpa/mlx5/mlx5_vdpa.h b/drivers/vdpa/mlx5/mlx5_vdpa.h
index 3316ce42be..35221f5ddc 100644
--- a/drivers/vdpa/mlx5/mlx5_vdpa.h
+++ b/drivers/vdpa/mlx5/mlx5_vdpa.h
@@ -80,6 +80,7 @@ enum {
 /* Vdpa task types. */
 enum mlx5_vdpa_task_type {
 	MLX5_VDPA_TASK_REG_MR = 1,
+	MLX5_VDPA_TASK_SETUP_VIRTQ,
 };
 
 /* Generic task information and size must be multiple of 4B. */
@@ -117,12 +118,12 @@ struct mlx5_vdpa_vmem_info {
 
 struct mlx5_vdpa_virtq {
 	SLIST_ENTRY(mlx5_vdpa_virtq) next;
-	uint8_t enable;
 	uint16_t index;
 	uint16_t vq_size;
 	uint8_t notifier_state;
-	bool stopped;
 	uint32_t configured:1;
+	uint32_t enable:1;
+	uint32_t stopped:1;
 	uint32_t version;
 	pthread_mutex_t virtq_lock;
 	struct mlx5_vdpa_priv *priv;
@@ -565,11 +566,13 @@ bool
 mlx5_vdpa_task_add(struct mlx5_vdpa_priv *priv,
 		uint32_t thrd_idx,
 		enum mlx5_vdpa_task_type task_type,
-		uint32_t *bulk_refcnt, uint32_t *bulk_err_cnt,
+		uint32_t *remaining_cnt, uint32_t *err_cnt,
 		void **task_data, uint32_t num);
 int
 mlx5_vdpa_register_mr(struct mlx5_vdpa_priv *priv, uint32_t idx);
 bool
 mlx5_vdpa_c_thread_wait_bulk_tasks_done(uint32_t *remaining_cnt,
 		uint32_t *err_cnt, uint32_t sleep_time);
+int
+mlx5_vdpa_virtq_setup(struct mlx5_vdpa_priv *priv, int index, bool reg_kick);
 
 #endif /* RTE_PMD_MLX5_VDPA_H_ */
diff --git a/drivers/vdpa/mlx5/mlx5_vdpa_cthread.c b/drivers/vdpa/mlx5/mlx5_vdpa_cthread.c
index 10391931ae..1389d369ae 100644
--- a/drivers/vdpa/mlx5/mlx5_vdpa_cthread.c
+++ b/drivers/vdpa/mlx5/mlx5_vdpa_cthread.c
@@ -100,6 +100,7 @@ mlx5_vdpa_c_thread_handle(void *arg)
 {
 	struct mlx5_vdpa_conf_thread_mng *multhrd = arg;
 	pthread_t thread_id = pthread_self();
+	struct mlx5_vdpa_virtq *virtq;
 	struct mlx5_vdpa_priv *priv;
 	struct mlx5_vdpa_task task;
 	struct rte_ring *rng;
@@ -139,6 +140,19 @@ mlx5_vdpa_c_thread_handle(void *arg)
 				__ATOMIC_RELAXED);
 		}
 		break;
+	case MLX5_VDPA_TASK_SETUP_VIRTQ:
+		virtq = &priv->virtqs[task.idx];
+		pthread_mutex_lock(&virtq->virtq_lock);
+		ret = mlx5_vdpa_virtq_setup(priv,
+			task.idx, false);
+		if (ret) {
+			DRV_LOG(ERR,
+				"Failed to setup virtq %d.", task.idx);
+			__atomic_fetch_add(
+				task.err_cnt, 1, __ATOMIC_RELAXED);
+		}
+		pthread_mutex_unlock(&virtq->virtq_lock);
+		break;
 	default:
 		DRV_LOG(ERR, "Invalid vdpa task type %d.",
 			task.type);
diff --git a/drivers/vdpa/mlx5/mlx5_vdpa_event.c b/drivers/vdpa/mlx5/mlx5_vdpa_event.c
index b45fbac146..f782b6b832 100644
--- a/drivers/vdpa/mlx5/mlx5_vdpa_event.c
+++ b/drivers/vdpa/mlx5/mlx5_vdpa_event.c
@@ -371,7 +371,7 @@ mlx5_vdpa_err_interrupt_handler(void *cb_arg __rte_unused)
 			goto unlock;
 		if (rte_rdtsc() / rte_get_tsc_hz() < MLX5_VDPA_ERROR_TIME_SEC)
 			goto unlock;
-		virtq->stopped = true;
+		virtq->stopped = 1;
 		/* Query error info. */
 		if (mlx5_vdpa_virtq_query(priv, vq_index))
 			goto log;
diff --git a/drivers/vdpa/mlx5/mlx5_vdpa_virtq.c b/drivers/vdpa/mlx5/mlx5_vdpa_virtq.c
index 0b317655db..db05220e76 100644
--- a/drivers/vdpa/mlx5/mlx5_vdpa_virtq.c
+++ b/drivers/vdpa/mlx5/mlx5_vdpa_virtq.c
@@ -111,8 +111,9 @@ mlx5_vdpa_virtqs_cleanup(struct mlx5_vdpa_priv *priv)
 	for (i = 0; i < priv->caps.max_num_virtio_queues; i++) {
 		struct mlx5_vdpa_virtq *virtq = &priv->virtqs[i];
 
+		if (virtq->index != i)
+			continue;
 		pthread_mutex_lock(&virtq->virtq_lock);
-		virtq->configured = 0;
 		for (j = 0; j < RTE_DIM(virtq->umems); ++j) {
 			if (virtq->umems[j].obj) {
 				claim_zero(mlx5_glue->devx_umem_dereg
@@ -131,7 +132,6 @@ mlx5_vdpa_virtqs_cleanup(struct mlx5_vdpa_priv *priv)
 		}
 	}
 }
-
 static int
 mlx5_vdpa_virtq_unset(struct mlx5_vdpa_virtq *virtq)
 {
@@ -191,7 +191,7 @@ mlx5_vdpa_virtq_stop(struct mlx5_vdpa_priv *priv, int index)
 	ret = mlx5_vdpa_virtq_modify(virtq, 0);
 	if (ret)
 		return -1;
-	virtq->stopped = true;
+	virtq->stopped = 1;
 	DRV_LOG(DEBUG, "vid %u virtq %u was stopped.", priv->vid, index);
 	return mlx5_vdpa_virtq_query(priv, index);
 }
@@ -411,7 +411,38 @@ mlx5_vdpa_is_modify_virtq_supported(struct mlx5_vdpa_priv *priv)
 }
 
 static int
-mlx5_vdpa_virtq_setup(struct mlx5_vdpa_priv *priv, int index)
+mlx5_vdpa_virtq_doorbell_setup(struct mlx5_vdpa_virtq *virtq,
+		struct rte_vhost_vring *vq, int index)
+{
+	virtq->intr_handle =
+		rte_intr_instance_alloc(RTE_INTR_INSTANCE_F_SHARED);
+	if (virtq->intr_handle == NULL) {
+		DRV_LOG(ERR, "Fail to allocate intr_handle");
+		return -1;
+	}
+	if (rte_intr_fd_set(virtq->intr_handle, vq->kickfd))
+		return -1;
+	if (rte_intr_fd_get(virtq->intr_handle) == -1) {
+		DRV_LOG(WARNING, "Virtq %d kickfd is invalid.", index);
+	} else {
+		if (rte_intr_type_set(virtq->intr_handle,
+			RTE_INTR_HANDLE_EXT))
+			return -1;
+		if (rte_intr_callback_register(virtq->intr_handle,
+			mlx5_vdpa_virtq_kick_handler, virtq)) {
+			(void)rte_intr_fd_set(virtq->intr_handle, -1);
+			DRV_LOG(ERR, "Failed to register virtq %d interrupt.",
+				index);
+			return -1;
+		}
+		DRV_LOG(DEBUG, "Register fd %d interrupt for virtq %d.",
+			rte_intr_fd_get(virtq->intr_handle), index);
+	}
+	return 0;
+}
+
+int
+mlx5_vdpa_virtq_setup(struct mlx5_vdpa_priv *priv, int index, bool reg_kick)
 {
 	struct mlx5_vdpa_virtq *virtq = &priv->virtqs[index];
 	struct rte_vhost_vring vq;
@@ -455,33 +486,11 @@ mlx5_vdpa_virtq_setup(struct mlx5_vdpa_priv *priv, int index)
 	rte_write32(virtq->index, priv->virtq_db_addr);
 	rte_spinlock_unlock(&priv->db_lock);
 	/* Setup doorbell mapping. */
-	virtq->intr_handle =
-		rte_intr_instance_alloc(RTE_INTR_INSTANCE_F_SHARED);
-	if (virtq->intr_handle == NULL) {
-		DRV_LOG(ERR, "Fail to allocate intr_handle");
-		goto error;
-	}
-
-	if (rte_intr_fd_set(virtq->intr_handle, vq.kickfd))
-		goto error;
-
-	if (rte_intr_fd_get(virtq->intr_handle) == -1) {
-		DRV_LOG(WARNING, "Virtq %d kickfd is invalid.", index);
-	} else {
-		if (rte_intr_type_set(virtq->intr_handle, RTE_INTR_HANDLE_EXT))
-			goto error;
-
-		if (rte_intr_callback_register(virtq->intr_handle,
-					       mlx5_vdpa_virtq_kick_handler,
-					       virtq)) {
-			(void)rte_intr_fd_set(virtq->intr_handle, -1);
+	if (reg_kick) {
+		if (mlx5_vdpa_virtq_doorbell_setup(virtq, &vq, index)) {
 			DRV_LOG(ERR, "Failed to register virtq %d interrupt.",
 				index);
 			goto error;
-		} else {
-			DRV_LOG(DEBUG, "Register fd %d interrupt for virtq %d.",
-				rte_intr_fd_get(virtq->intr_handle),
-				index);
 		}
 	}
 	/* Subscribe virtq error event. */
@@ -497,7 +506,6 @@ mlx5_vdpa_virtq_setup(struct mlx5_vdpa_priv *priv, int index)
 		rte_errno = errno;
 		goto error;
 	}
-	virtq->stopped = false;
 	/* Initial notification to ask Qemu handling completed buffers. */
 	if (virtq->eqp.cq.callfd != -1)
 		eventfd_write(virtq->eqp.cq.callfd, (eventfd_t)1);
@@ -567,10 +575,12 @@ mlx5_vdpa_features_validate(struct mlx5_vdpa_priv *priv)
 int
 mlx5_vdpa_virtqs_prepare(struct mlx5_vdpa_priv *priv)
 {
-	uint32_t i;
-	uint16_t nr_vring = rte_vhost_get_vring_num(priv->vid);
 	int ret = rte_vhost_get_negotiated_features(priv->vid, &priv->features);
+	uint16_t nr_vring = rte_vhost_get_vring_num(priv->vid);
+	uint32_t remaining_cnt = 0, err_cnt = 0, task_num = 0;
+	uint32_t i, thrd_idx, data[1];
 	struct mlx5_vdpa_virtq *virtq;
+	struct rte_vhost_vring vq;
 
 	if (ret || mlx5_vdpa_features_validate(priv)) {
 		DRV_LOG(ERR, "Failed to configure negotiated features.");
@@ -590,16 +600,83 @@ mlx5_vdpa_virtqs_prepare(struct mlx5_vdpa_priv *priv)
 		return -1;
 	}
 	priv->nr_virtqs = nr_vring;
-	for (i = 0; i < nr_vring; i++) {
-		virtq = &priv->virtqs[i];
-		if (virtq->enable) {
+	if (priv->use_c_thread) {
+		uint32_t main_task_idx[nr_vring];
+
+		for (i = 0; i < nr_vring; i++) {
+			virtq = &priv->virtqs[i];
+			if (!virtq->enable)
+				continue;
+			thrd_idx = i % (conf_thread_mng.max_thrds + 1);
+			if (!thrd_idx) {
+				main_task_idx[task_num] = i;
+				task_num++;
+				continue;
+			}
+			thrd_idx = priv->last_c_thrd_idx + 1;
+			if (thrd_idx >= conf_thread_mng.max_thrds)
+				thrd_idx = 0;
+			priv->last_c_thrd_idx = thrd_idx;
+			data[0] = i;
+			if (mlx5_vdpa_task_add(priv, thrd_idx,
+				MLX5_VDPA_TASK_SETUP_VIRTQ,
+				&remaining_cnt, &err_cnt,
+				(void **)&data, 1)) {
+				DRV_LOG(ERR, "Fail to add "
+				"task setup virtq (%d).", i);
+				main_task_idx[task_num] = i;
+				task_num++;
+			}
+		}
+		for (i = 0; i < task_num; i++) {
+			virtq = &priv->virtqs[main_task_idx[i]];
 			pthread_mutex_lock(&virtq->virtq_lock);
-			if (mlx5_vdpa_virtq_setup(priv, i)) {
+			if (mlx5_vdpa_virtq_setup(priv,
+				main_task_idx[i], false)) {
 				pthread_mutex_unlock(&virtq->virtq_lock);
 				goto error;
 			}
 			pthread_mutex_unlock(&virtq->virtq_lock);
 		}
+		if (mlx5_vdpa_c_thread_wait_bulk_tasks_done(&remaining_cnt,
+			&err_cnt, 2000)) {
+			DRV_LOG(ERR,
+			"Failed to wait virt-queue setup tasks ready.");
+			goto error;
+		}
+		for (i = 0; i < nr_vring; i++) {
+			/* Setup doorbell mapping in order for QEMU. */
+			virtq = &priv->virtqs[i];
+			pthread_mutex_lock(&virtq->virtq_lock);
+			if (!virtq->enable || !virtq->configured) {
+				pthread_mutex_unlock(&virtq->virtq_lock);
+				continue;
+			}
+			if (rte_vhost_get_vhost_vring(priv->vid, i, &vq)) {
+				pthread_mutex_unlock(&virtq->virtq_lock);
+				goto error;
+			}
+			if (mlx5_vdpa_virtq_doorbell_setup(virtq, &vq, i)) {
+				pthread_mutex_unlock(&virtq->virtq_lock);
+				DRV_LOG(ERR,
+				"Failed to register virtq %d interrupt.", i);
+				goto error;
+			}
+			pthread_mutex_unlock(&virtq->virtq_lock);
+		}
+	} else {
+		for (i = 0; i < nr_vring; i++) {
+			virtq = &priv->virtqs[i];
+			pthread_mutex_lock(&virtq->virtq_lock);
+			if (virtq->enable) {
+				if (mlx5_vdpa_virtq_setup(priv, i, true)) {
+					pthread_mutex_unlock(
+						&virtq->virtq_lock);
+					goto error;
+				}
+			}
+			pthread_mutex_unlock(&virtq->virtq_lock);
+		}
 	}
 	return 0;
 error:
@@ -663,7 +740,7 @@ mlx5_vdpa_virtq_enable(struct mlx5_vdpa_priv *priv, int index, int enable)
 		mlx5_vdpa_virtq_unset(virtq);
 	}
 	if (enable) {
-		ret = mlx5_vdpa_virtq_setup(priv, index);
+		ret = mlx5_vdpa_virtq_setup(priv, index, true);
 		if (ret) {
 			DRV_LOG(ERR, "Failed to setup virtq %d.", index);
 			return ret;
-- 
2.31.1
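
P.S. for reviewers: in the MT path the worker tasks call
mlx5_vdpa_virtq_setup() with reg_kick == false, so the kickfd
(doorbell) interrupt registration is deferred and then done serially
by the main thread, only for rings that are both enabled and marked
configured. Keeping that second phase on a single thread keeps all
rte_intr callback registration off the configuration threads. Below is
a minimal standalone sketch of the two-phase pattern; the names
(vq_state, setup_task, NR_VRING) are hypothetical, not driver API.
Build with: cc -pthread sketch2.c

/*
 * Two-phase bring-up model: phase 1 "creates" queue objects
 * concurrently; phase 2 "registers" kick interrupts from the main
 * thread only, mirroring the second loop of mlx5_vdpa_virtqs_prepare().
 */
#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

#define NR_VRING 8

struct vq_state {
	pthread_mutex_t lock;
	bool enable;
	bool configured;	/* set by the setup task on success */
};

static struct vq_state vqs[NR_VRING];

/* Phase 1 task: create the queue object, but do NOT touch the
 * interrupt subsystem (reg_kick == false in the patch). */
static void *
setup_task(void *arg)
{
	struct vq_state *vq = arg;

	pthread_mutex_lock(&vq->lock);
	vq->configured = true;	/* FW object creation elided */
	pthread_mutex_unlock(&vq->lock);
	return NULL;
}

int
main(void)
{
	pthread_t t[NR_VRING];
	bool started[NR_VRING] = { false };
	int i;

	for (i = 0; i < NR_VRING; i++) {
		pthread_mutex_init(&vqs[i].lock, NULL);
		vqs[i].enable = (i % 2 == 0);	/* pretend odd rings are off */
	}
	/* Phase 1: concurrent setup of the enabled rings. */
	for (i = 0; i < NR_VRING; i++)
		if (vqs[i].enable)
			started[i] = !pthread_create(&t[i], NULL,
						     setup_task, &vqs[i]);
	for (i = 0; i < NR_VRING; i++)
		if (started[i])
			pthread_join(t[i], NULL);
	/* Phase 2, main thread only: register the kick interrupt for
	 * rings that are enabled and successfully configured. */
	for (i = 0; i < NR_VRING; i++) {
		pthread_mutex_lock(&vqs[i].lock);
		if (vqs[i].enable && vqs[i].configured)
			printf("register kickfd interrupt for virtq %d\n", i);
		pthread_mutex_unlock(&vqs[i].lock);
	}
	return 0;
}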