From mboxrd@z Thu Jan 1 00:00:00 1970
From: Li Zhang
Subject: [PATCH 08/16] vdpa/mlx5: optimize datapath-control synchronization
Date: Mon, 6 Jun 2022 14:20:51 +0300
Message-ID: <20220606112109.208873-15-lizh@nvidia.com>
In-Reply-To: <20220606112109.208873-1-lizh@nvidia.com>
References: <20220408075606.33056-1-lizh@nvidia.com> <20220606112109.208873-1-lizh@nvidia.com>
MIME-Version: 1.0
Content-Type: text/plain
List-Id: DPDK patches and discussions

The driver used a single global lock for all synchronization
between the datapath and the control path. It is better to group
each critical section with the resources it actually protects.
Replace the global lock with the following locks:
1. Per-virtq locks synchronize datapath polling and parallel
   configuration of the same virtq.
2. A doorbell lock synchronizes doorbell updates; the doorbell
   register is shared by all virtqs in the device.
3. A steering lock protects updates to the shared steering objects.
Signed-off-by: Li Zhang
---
 drivers/vdpa/mlx5/mlx5_vdpa.c       | 24 ++++---
 drivers/vdpa/mlx5/mlx5_vdpa.h       | 13 ++--
 drivers/vdpa/mlx5/mlx5_vdpa_event.c | 97 ++++++++++++++++++-----------
 drivers/vdpa/mlx5/mlx5_vdpa_lm.c    | 34 +++++++---
 drivers/vdpa/mlx5/mlx5_vdpa_steer.c |  7 ++-
 drivers/vdpa/mlx5/mlx5_vdpa_virtq.c | 88 +++++++++++++++++++-------
 6 files changed, 184 insertions(+), 79 deletions(-)

diff --git a/drivers/vdpa/mlx5/mlx5_vdpa.c b/drivers/vdpa/mlx5/mlx5_vdpa.c
index ee99952e11..e5a11f72fd 100644
--- a/drivers/vdpa/mlx5/mlx5_vdpa.c
+++ b/drivers/vdpa/mlx5/mlx5_vdpa.c
@@ -135,6 +135,7 @@ mlx5_vdpa_set_vring_state(int vid, int vring, int state)
 	struct rte_vdpa_device *vdev = rte_vhost_get_vdpa_device(vid);
 	struct mlx5_vdpa_priv *priv =
 		mlx5_vdpa_find_priv_resource_by_vdev(vdev);
+	struct mlx5_vdpa_virtq *virtq;
 	int ret;
 
 	if (priv == NULL) {
@@ -145,9 +146,10 @@ mlx5_vdpa_set_vring_state(int vid, int vring, int state)
 		DRV_LOG(ERR, "Too big vring id: %d.", vring);
 		return -E2BIG;
 	}
-	pthread_mutex_lock(&priv->vq_config_lock);
+	virtq = &priv->virtqs[vring];
+	pthread_mutex_lock(&virtq->virtq_lock);
 	ret = mlx5_vdpa_virtq_enable(priv, vring, state);
-	pthread_mutex_unlock(&priv->vq_config_lock);
+	pthread_mutex_unlock(&virtq->virtq_lock);
 	return ret;
 }
 
@@ -267,7 +269,9 @@ mlx5_vdpa_dev_close(int vid)
 		ret |= mlx5_vdpa_lm_log(priv);
 		priv->state = MLX5_VDPA_STATE_IN_PROGRESS;
 	}
+	pthread_mutex_lock(&priv->steer_update_lock);
 	mlx5_vdpa_steer_unset(priv);
+	pthread_mutex_unlock(&priv->steer_update_lock);
 	mlx5_vdpa_virtqs_release(priv);
 	mlx5_vdpa_drain_cq(priv);
 	if (priv->lm_mr.addr)
@@ -276,8 +280,6 @@ mlx5_vdpa_dev_close(int vid)
 	if (!priv->connected)
 		mlx5_vdpa_dev_cache_clean(priv);
 	priv->vid = 0;
-	/* The mutex may stay locked after event thread cancel - initiate it. */
-	pthread_mutex_init(&priv->vq_config_lock, NULL);
 	DRV_LOG(INFO, "vDPA device %d was closed.", vid);
 	return ret;
 }
@@ -549,15 +551,21 @@ mlx5_vdpa_config_get(struct mlx5_kvargs_ctrl *mkvlist,
 static int
 mlx5_vdpa_virtq_resource_prepare(struct mlx5_vdpa_priv *priv)
 {
+	struct mlx5_vdpa_virtq *virtq;
 	uint32_t index;
 	uint32_t i;
 
+	for (index = 0; index < priv->caps.max_num_virtio_queues * 2;
+		index++) {
+		virtq = &priv->virtqs[index];
+		pthread_mutex_init(&virtq->virtq_lock, NULL);
+	}
 	if (!priv->queues)
 		return 0;
 	for (index = 0; index < (priv->queues * 2); ++index) {
-		struct mlx5_vdpa_virtq *virtq = &priv->virtqs[index];
+		virtq = &priv->virtqs[index];
 		int ret = mlx5_vdpa_event_qp_prepare(priv, priv->queue_size,
-					-1, &virtq->eqp);
+					-1, virtq);
 		if (ret) {
 			DRV_LOG(ERR,
 				"Failed to create event QPs for virtq %d.",
@@ -713,7 +721,8 @@ mlx5_vdpa_dev_probe(struct mlx5_common_device *cdev,
 	priv->num_lag_ports = attr->num_lag_ports;
 	if (attr->num_lag_ports == 0)
 		priv->num_lag_ports = 1;
-	pthread_mutex_init(&priv->vq_config_lock, NULL);
+	rte_spinlock_init(&priv->db_lock);
+	pthread_mutex_init(&priv->steer_update_lock, NULL);
 	priv->cdev = cdev;
 	mlx5_vdpa_config_get(mkvlist, priv);
 	if (mlx5_vdpa_create_dev_resources(priv))
@@ -797,7 +806,6 @@ mlx5_vdpa_dev_release(struct mlx5_vdpa_priv *priv)
 	mlx5_vdpa_release_dev_resources(priv);
 	if (priv->vdev)
 		rte_vdpa_unregister_device(priv->vdev);
-	pthread_mutex_destroy(&priv->vq_config_lock);
 	rte_free(priv);
 }
 
diff --git a/drivers/vdpa/mlx5/mlx5_vdpa.h b/drivers/vdpa/mlx5/mlx5_vdpa.h
index e5553079fe..3fd5eefc5e 100644
--- a/drivers/vdpa/mlx5/mlx5_vdpa.h
+++ b/drivers/vdpa/mlx5/mlx5_vdpa.h
@@ -82,6 +82,7 @@ struct mlx5_vdpa_virtq {
 	bool stopped;
 	uint32_t configured:1;
 	uint32_t version;
+	pthread_mutex_t virtq_lock;
 	struct mlx5_vdpa_priv *priv;
 	struct mlx5_devx_obj *virtq;
 	struct mlx5_devx_obj *counters;
@@ -126,7 +127,8 @@ struct mlx5_vdpa_priv {
 	TAILQ_ENTRY(mlx5_vdpa_priv) next;
 	bool connected;
 	enum mlx5_dev_state state;
-	pthread_mutex_t vq_config_lock;
+	rte_spinlock_t db_lock;
+	pthread_mutex_t steer_update_lock;
 	uint64_t no_traffic_counter;
 	pthread_t timer_tid;
 	int event_mode;
@@ -222,14 +224,15 @@ int mlx5_vdpa_mem_register(struct mlx5_vdpa_priv *priv);
  *   Number of descriptors.
  * @param[in] callfd
  *   The guest notification file descriptor.
- * @param[in/out] eqp
- *   Pointer to the event QP structure.
+ * @param[in/out] virtq
+ *   Pointer to the virt-queue structure.
  *
  * @return
  *   0 on success, -1 otherwise and rte_errno is set.
  */
-int mlx5_vdpa_event_qp_prepare(struct mlx5_vdpa_priv *priv, uint16_t desc_n,
-			       int callfd, struct mlx5_vdpa_event_qp *eqp);
+int
+mlx5_vdpa_event_qp_prepare(struct mlx5_vdpa_priv *priv, uint16_t desc_n,
+		int callfd, struct mlx5_vdpa_virtq *virtq);
 
 /**
  * Destroy an event QP and all its related resources.
diff --git a/drivers/vdpa/mlx5/mlx5_vdpa_event.c b/drivers/vdpa/mlx5/mlx5_vdpa_event.c
index b43dca9255..2b0f5936d1 100644
--- a/drivers/vdpa/mlx5/mlx5_vdpa_event.c
+++ b/drivers/vdpa/mlx5/mlx5_vdpa_event.c
@@ -85,12 +85,13 @@ mlx5_vdpa_cq_arm(struct mlx5_vdpa_priv *priv, struct mlx5_vdpa_cq *cq)
 
 static int
 mlx5_vdpa_cq_create(struct mlx5_vdpa_priv *priv, uint16_t log_desc_n,
-		int callfd, struct mlx5_vdpa_cq *cq)
+		int callfd, struct mlx5_vdpa_virtq *virtq)
 {
 	struct mlx5_devx_cq_attr attr = {
 		.use_first_only = 1,
 		.uar_page_id = mlx5_os_get_devx_uar_page_id(priv->uar.obj),
 	};
+	struct mlx5_vdpa_cq *cq = &virtq->eqp.cq;
 	uint16_t event_nums[1] = {0};
 	int ret;
 
@@ -102,10 +103,11 @@ mlx5_vdpa_cq_create(struct mlx5_vdpa_priv *priv, uint16_t log_desc_n,
 	cq->log_desc_n = log_desc_n;
 	rte_spinlock_init(&cq->sl);
 	/* Subscribe CQ event to the event channel controlled by the driver. */
-	ret = mlx5_os_devx_subscribe_devx_event(priv->eventc,
-						cq->cq_obj.cq->obj,
-						sizeof(event_nums), event_nums,
-						(uint64_t)(uintptr_t)cq);
+	ret = mlx5_glue->devx_subscribe_devx_event(priv->eventc,
+							cq->cq_obj.cq->obj,
+							sizeof(event_nums),
+							event_nums,
+							(uint64_t)(uintptr_t)virtq);
 	if (ret) {
 		DRV_LOG(ERR, "Failed to subscribe CQE event.");
 		rte_errno = errno;
@@ -167,13 +169,17 @@ mlx5_vdpa_cq_poll(struct mlx5_vdpa_cq *cq)
 static void
 mlx5_vdpa_arm_all_cqs(struct mlx5_vdpa_priv *priv)
 {
+	struct mlx5_vdpa_virtq *virtq;
 	struct mlx5_vdpa_cq *cq;
 	int i;
 
 	for (i = 0; i < priv->nr_virtqs; i++) {
+		virtq = &priv->virtqs[i];
+		pthread_mutex_lock(&virtq->virtq_lock);
 		cq = &priv->virtqs[i].eqp.cq;
 		if (cq->cq_obj.cq && !cq->armed)
 			mlx5_vdpa_cq_arm(priv, cq);
+		pthread_mutex_unlock(&virtq->virtq_lock);
 	}
 }
 
@@ -220,13 +226,18 @@ mlx5_vdpa_queue_complete(struct mlx5_vdpa_cq *cq)
 static uint32_t
 mlx5_vdpa_queues_complete(struct mlx5_vdpa_priv *priv)
 {
-	int i;
+	struct mlx5_vdpa_virtq *virtq;
+	struct mlx5_vdpa_cq *cq;
 	uint32_t max = 0;
+	uint32_t comp;
+	int i;
 
 	for (i = 0; i < priv->nr_virtqs; i++) {
-		struct mlx5_vdpa_cq *cq = &priv->virtqs[i].eqp.cq;
-		uint32_t comp = mlx5_vdpa_queue_complete(cq);
-
+		virtq = &priv->virtqs[i];
+		pthread_mutex_lock(&virtq->virtq_lock);
+		cq = &virtq->eqp.cq;
+		comp = mlx5_vdpa_queue_complete(cq);
+		pthread_mutex_unlock(&virtq->virtq_lock);
 		if (comp > max)
 			max = comp;
 	}
@@ -253,7 +264,7 @@ mlx5_vdpa_drain_cq(struct mlx5_vdpa_priv *priv)
 }
 
 /* Wait on all CQs channel for completion event. */
-static struct mlx5_vdpa_cq *
+static struct mlx5_vdpa_virtq *
 mlx5_vdpa_event_wait(struct mlx5_vdpa_priv *priv __rte_unused)
 {
 #ifdef HAVE_IBV_DEVX_EVENT
@@ -265,7 +276,8 @@ mlx5_vdpa_event_wait(struct mlx5_vdpa_priv *priv __rte_unused)
 					    sizeof(out.buf));
 
 	if (ret >= 0)
-		return (struct mlx5_vdpa_cq *)(uintptr_t)out.event_resp.cookie;
+		return (struct mlx5_vdpa_virtq *)
+				(uintptr_t)out.event_resp.cookie;
 	DRV_LOG(INFO, "Got error in devx_get_event, ret = %d, errno = %d.",
 		ret, errno);
 #endif
@@ -276,7 +288,7 @@ static void *
 mlx5_vdpa_event_handle(void *arg)
 {
 	struct mlx5_vdpa_priv *priv = arg;
-	struct mlx5_vdpa_cq *cq;
+	struct mlx5_vdpa_virtq *virtq;
 	uint32_t max;
 
 	switch (priv->event_mode) {
@@ -284,7 +296,6 @@ mlx5_vdpa_event_handle(void *arg)
 	case MLX5_VDPA_EVENT_MODE_FIXED_TIMER:
 		priv->timer_delay_us = priv->event_us;
 		while (1) {
-			pthread_mutex_lock(&priv->vq_config_lock);
 			max = mlx5_vdpa_queues_complete(priv);
 			if (max == 0 && priv->no_traffic_counter++ >=
 			    priv->no_traffic_max) {
@@ -292,32 +303,37 @@ mlx5_vdpa_event_handle(void *arg)
 					priv->vdev->device->name);
 				mlx5_vdpa_arm_all_cqs(priv);
 				do {
-					pthread_mutex_unlock
-							(&priv->vq_config_lock);
-					cq = mlx5_vdpa_event_wait(priv);
-					pthread_mutex_lock
-							(&priv->vq_config_lock);
-					if (cq == NULL ||
-					       mlx5_vdpa_queue_complete(cq) > 0)
+					virtq = mlx5_vdpa_event_wait(priv);
+					if (virtq == NULL)
 						break;
+					pthread_mutex_lock(
+						&virtq->virtq_lock);
+					if (mlx5_vdpa_queue_complete(
+						&virtq->eqp.cq) > 0) {
+						pthread_mutex_unlock(
+							&virtq->virtq_lock);
+						break;
+					}
+					pthread_mutex_unlock(
+						&virtq->virtq_lock);
 				} while (1);
 				priv->timer_delay_us = priv->event_us;
 				priv->no_traffic_counter = 0;
 			} else if (max != 0) {
 				priv->no_traffic_counter = 0;
 			}
-			pthread_mutex_unlock(&priv->vq_config_lock);
 			mlx5_vdpa_timer_sleep(priv, max);
 		}
 		return NULL;
 	case MLX5_VDPA_EVENT_MODE_ONLY_INTERRUPT:
 		do {
-			cq = mlx5_vdpa_event_wait(priv);
-			if (cq != NULL) {
-				pthread_mutex_lock(&priv->vq_config_lock);
-				if (mlx5_vdpa_queue_complete(cq) > 0)
-					mlx5_vdpa_cq_arm(priv, cq);
-				pthread_mutex_unlock(&priv->vq_config_lock);
+			virtq = mlx5_vdpa_event_wait(priv);
+			if (virtq != NULL) {
+				pthread_mutex_lock(&virtq->virtq_lock);
+				if (mlx5_vdpa_queue_complete(
+					&virtq->eqp.cq) > 0)
+					mlx5_vdpa_cq_arm(priv, &virtq->eqp.cq);
+				pthread_mutex_unlock(&virtq->virtq_lock);
 			}
 		} while (1);
 		return NULL;
@@ -339,7 +355,6 @@ mlx5_vdpa_err_interrupt_handler(void *cb_arg __rte_unused)
 	struct mlx5_vdpa_virtq *virtq;
 	uint64_t sec;
 
-	pthread_mutex_lock(&priv->vq_config_lock);
 	while (mlx5_glue->devx_get_event(priv->err_chnl, &out.event_resp,
 					 sizeof(out.buf)) >=
 				       (ssize_t)sizeof(out.event_resp.cookie)) {
@@ -351,10 +366,11 @@ mlx5_vdpa_err_interrupt_handler(void *cb_arg __rte_unused)
 			continue;
 		}
 		virtq = &priv->virtqs[vq_index];
+		pthread_mutex_lock(&virtq->virtq_lock);
 		if (!virtq->enable || virtq->version != version)
-			continue;
+			goto unlock;
 		if (rte_rdtsc() / rte_get_tsc_hz() < MLX5_VDPA_ERROR_TIME_SEC)
-			continue;
+			goto unlock;
 		virtq->stopped = true;
 		/* Query error info. */
 		if (mlx5_vdpa_virtq_query(priv, vq_index))
@@ -384,8 +400,9 @@ mlx5_vdpa_err_interrupt_handler(void *cb_arg __rte_unused)
 		for (i = 1; i < RTE_DIM(virtq->err_time); i++)
 			virtq->err_time[i - 1] = virtq->err_time[i];
 		virtq->err_time[RTE_DIM(virtq->err_time) - 1] = rte_rdtsc();
+unlock:
+		pthread_mutex_unlock(&virtq->virtq_lock);
 	}
 
-	pthread_mutex_unlock(&priv->vq_config_lock);
 #endif
 }
@@ -533,11 +550,18 @@ mlx5_vdpa_cqe_event_setup(struct mlx5_vdpa_priv *priv)
 void
 mlx5_vdpa_cqe_event_unset(struct mlx5_vdpa_priv *priv)
 {
+	struct mlx5_vdpa_virtq *virtq;
 	void *status;
+	int i;
 
 	if (priv->timer_tid) {
 		pthread_cancel(priv->timer_tid);
		pthread_join(priv->timer_tid, &status);
+		/* The mutex may stay locked after event thread cancel, initiate it. */
+		for (i = 0; i < priv->nr_virtqs; i++) {
+			virtq = &priv->virtqs[i];
+			pthread_mutex_init(&virtq->virtq_lock, NULL);
+		}
 	}
 	priv->timer_tid = 0;
 }
@@ -614,8 +638,9 @@ mlx5_vdpa_qps2rst2rts(struct mlx5_vdpa_event_qp *eqp)
 
 int
 mlx5_vdpa_event_qp_prepare(struct mlx5_vdpa_priv *priv, uint16_t desc_n,
-			  int callfd, struct mlx5_vdpa_event_qp *eqp)
+			  int callfd, struct mlx5_vdpa_virtq *virtq)
 {
+	struct mlx5_vdpa_event_qp *eqp = &virtq->eqp;
 	struct mlx5_devx_qp_attr attr = {0};
 	uint16_t log_desc_n = rte_log2_u32(desc_n);
 	uint32_t ret;
@@ -632,7 +657,8 @@ mlx5_vdpa_event_qp_prepare(struct mlx5_vdpa_priv *priv, uint16_t desc_n,
 	}
 	if (eqp->fw_qp)
 		mlx5_vdpa_event_qp_destroy(eqp);
-	if (mlx5_vdpa_cq_create(priv, log_desc_n, callfd, &eqp->cq))
+	if (mlx5_vdpa_cq_create(priv, log_desc_n, callfd, virtq) ||
+		!eqp->cq.cq_obj.cq)
 		return -1;
 	attr.pd = priv->cdev->pdn;
 	attr.ts_format =
@@ -650,8 +676,8 @@ mlx5_vdpa_event_qp_prepare(struct mlx5_vdpa_priv *priv, uint16_t desc_n,
 	attr.ts_format =
		mlx5_ts_format_conv(priv->cdev->config.hca_attr.qp_ts_format);
 	ret = mlx5_devx_qp_create(priv->cdev->ctx, &(eqp->sw_qp),
-				  attr.num_of_receive_wqes *
-				  MLX5_WSEG_SIZE, &attr, SOCKET_ID_ANY);
+				  attr.num_of_receive_wqes * MLX5_WSEG_SIZE,
+				  &attr, SOCKET_ID_ANY);
 	if (ret) {
 		DRV_LOG(ERR, "Failed to create SW QP(%u).", rte_errno);
 		goto error;
@@ -668,3 +694,4 @@ mlx5_vdpa_event_qp_prepare(struct mlx5_vdpa_priv *priv, uint16_t desc_n,
 	mlx5_vdpa_event_qp_destroy(eqp);
 	return -1;
 }
+
diff --git a/drivers/vdpa/mlx5/mlx5_vdpa_lm.c b/drivers/vdpa/mlx5/mlx5_vdpa_lm.c
index a8faf0c116..efebf364d0 100644
--- a/drivers/vdpa/mlx5/mlx5_vdpa_lm.c
+++ b/drivers/vdpa/mlx5/mlx5_vdpa_lm.c
@@ -25,11 +25,18 @@ mlx5_vdpa_logging_enable(struct mlx5_vdpa_priv *priv, int enable)
 		if (!virtq->configured) {
 			DRV_LOG(DEBUG, "virtq %d is invalid for dirty bitmap "
 				"enabling.", i);
-		} else if (mlx5_devx_cmd_modify_virtq(priv->virtqs[i].virtq,
+		} else {
+			struct mlx5_vdpa_virtq *virtq = &priv->virtqs[i];
+
+			pthread_mutex_lock(&virtq->virtq_lock);
+			if (mlx5_devx_cmd_modify_virtq(priv->virtqs[i].virtq,
				&attr)) {
-			DRV_LOG(ERR, "Failed to modify virtq %d for dirty "
+				pthread_mutex_unlock(&virtq->virtq_lock);
+				DRV_LOG(ERR, "Failed to modify virtq %d for dirty "
				"bitmap enabling.", i);
-			return -1;
+				return -1;
+			}
+			pthread_mutex_unlock(&virtq->virtq_lock);
 		}
 	}
 	return 0;
@@ -61,10 +68,19 @@ mlx5_vdpa_dirty_bitmap_set(struct mlx5_vdpa_priv *priv, uint64_t log_base,
 		virtq = &priv->virtqs[i];
 		if (!virtq->configured) {
 			DRV_LOG(DEBUG, "virtq %d is invalid for LM.", i);
-		} else if (mlx5_devx_cmd_modify_virtq(priv->virtqs[i].virtq,
-						      &attr)) {
-			DRV_LOG(ERR, "Failed to modify virtq %d for LM.", i);
-			goto err;
+		} else {
+			struct mlx5_vdpa_virtq *virtq = &priv->virtqs[i];
+
+			pthread_mutex_lock(&virtq->virtq_lock);
+			if (mlx5_devx_cmd_modify_virtq(
+					priv->virtqs[i].virtq,
+					&attr)) {
+				pthread_mutex_unlock(&virtq->virtq_lock);
+				DRV_LOG(ERR,
+					"Failed to modify virtq %d for LM.", i);
+				goto err;
+			}
+			pthread_mutex_unlock(&virtq->virtq_lock);
 		}
 	}
 	return 0;
@@ -79,6 +95,7 @@ mlx5_vdpa_dirty_bitmap_set(struct mlx5_vdpa_priv *priv, uint64_t log_base,
 int
 mlx5_vdpa_lm_log(struct mlx5_vdpa_priv *priv)
 {
+	struct mlx5_vdpa_virtq *virtq;
 	uint64_t features;
 	int ret = rte_vhost_get_negotiated_features(priv->vid, &features);
 	int i;
@@ -90,10 +107,13 @@ mlx5_vdpa_lm_log(struct mlx5_vdpa_priv *priv)
 	if (!RTE_VHOST_NEED_LOG(features))
 		return 0;
 	for (i = 0; i < priv->nr_virtqs; ++i) {
+		virtq = &priv->virtqs[i];
 		if (!priv->virtqs[i].virtq) {
 			DRV_LOG(DEBUG, "virtq %d is invalid for LM log.", i);
 		} else {
+			pthread_mutex_lock(&virtq->virtq_lock);
 			ret = mlx5_vdpa_virtq_stop(priv, i);
+			pthread_mutex_unlock(&virtq->virtq_lock);
 			if (ret) {
 				DRV_LOG(ERR, "Failed to stop virtq %d for LM "
 					"log.", i);
diff --git a/drivers/vdpa/mlx5/mlx5_vdpa_steer.c b/drivers/vdpa/mlx5/mlx5_vdpa_steer.c
index d4b4375c88..4cbf09784e 100644
--- a/drivers/vdpa/mlx5/mlx5_vdpa_steer.c
+++ b/drivers/vdpa/mlx5/mlx5_vdpa_steer.c
@@ -237,19 +237,24 @@ mlx5_vdpa_rss_flows_create(struct mlx5_vdpa_priv *priv)
 
 int
 mlx5_vdpa_steer_update(struct mlx5_vdpa_priv *priv)
 {
-	int ret = mlx5_vdpa_rqt_prepare(priv);
+	int ret;
 
+	pthread_mutex_lock(&priv->steer_update_lock);
+	ret = mlx5_vdpa_rqt_prepare(priv);
 	if (ret == 0) {
 		mlx5_vdpa_steer_unset(priv);
 	} else if (ret < 0) {
+		pthread_mutex_unlock(&priv->steer_update_lock);
 		return ret;
 	} else if (!priv->steer.rss[0].flow) {
 		ret = mlx5_vdpa_rss_flows_create(priv);
 		if (ret) {
 			DRV_LOG(ERR, "Cannot create RSS flows.");
+			pthread_mutex_unlock(&priv->steer_update_lock);
 			return -1;
 		}
 	}
+	pthread_mutex_unlock(&priv->steer_update_lock);
 	return 0;
 }
diff --git a/drivers/vdpa/mlx5/mlx5_vdpa_virtq.c b/drivers/vdpa/mlx5/mlx5_vdpa_virtq.c
index 55cbc9fad2..138b7bdbc5 100644
--- a/drivers/vdpa/mlx5/mlx5_vdpa_virtq.c
+++ b/drivers/vdpa/mlx5/mlx5_vdpa_virtq.c
@@ -24,13 +24,17 @@ mlx5_vdpa_virtq_kick_handler(void *cb_arg)
 	int nbytes;
 	int retry;
 
+	pthread_mutex_lock(&virtq->virtq_lock);
 	if (priv->state != MLX5_VDPA_STATE_CONFIGURED && !virtq->enable) {
+		pthread_mutex_unlock(&virtq->virtq_lock);
 		DRV_LOG(ERR,  "device %d queue %d down, skip kick handling",
 			priv->vid, virtq->index);
 		return;
 	}
-	if (rte_intr_fd_get(virtq->intr_handle) < 0)
+	if (rte_intr_fd_get(virtq->intr_handle) < 0) {
+		pthread_mutex_unlock(&virtq->virtq_lock);
 		return;
+	}
 
 	for (retry = 0; retry < 3; ++retry) {
 		nbytes = read(rte_intr_fd_get(virtq->intr_handle), &buf, 8);
@@ -44,9 +48,14 @@ mlx5_vdpa_virtq_kick_handler(void *cb_arg)
 		}
 		break;
 	}
-	if (nbytes < 0)
+	if (nbytes < 0) {
+		pthread_mutex_unlock(&virtq->virtq_lock);
 		return;
+	}
+	rte_spinlock_lock(&priv->db_lock);
 	rte_write32(virtq->index, priv->virtq_db_addr);
+	rte_spinlock_unlock(&priv->db_lock);
+	pthread_mutex_unlock(&virtq->virtq_lock);
 	if (priv->state != MLX5_VDPA_STATE_CONFIGURED && !virtq->enable) {
 		DRV_LOG(ERR,  "device %d queue %d down, skip kick handling",
 			priv->vid, virtq->index);
@@ -66,6 +75,33 @@ mlx5_vdpa_virtq_kick_handler(void *cb_arg)
 	DRV_LOG(DEBUG, "Ring virtq %u doorbell.", virtq->index);
 }
 
+/* Virtq must be locked before calling this function. */
+static void
+mlx5_vdpa_virtq_unregister_intr_handle(struct mlx5_vdpa_virtq *virtq)
+{
+	int ret = -EAGAIN;
+
+	if (!virtq->intr_handle)
+		return;
+	if (rte_intr_fd_get(virtq->intr_handle) >= 0) {
+		while (ret == -EAGAIN) {
+			ret = rte_intr_callback_unregister(virtq->intr_handle,
+					mlx5_vdpa_virtq_kick_handler, virtq);
+			if (ret == -EAGAIN) {
+				DRV_LOG(DEBUG, "Try again to unregister fd %d of virtq %hu interrupt",
+					rte_intr_fd_get(virtq->intr_handle),
+					virtq->index);
+				pthread_mutex_unlock(&virtq->virtq_lock);
+				usleep(MLX5_VDPA_INTR_RETRIES_USEC);
+				pthread_mutex_lock(&virtq->virtq_lock);
+			}
+		}
+		(void)rte_intr_fd_set(virtq->intr_handle, -1);
+	}
+	rte_intr_instance_free(virtq->intr_handle);
+	virtq->intr_handle = NULL;
+}
+
 /* Release cached VQ resources. */
 void
 mlx5_vdpa_virtqs_cleanup(struct mlx5_vdpa_priv *priv)
@@ -75,6 +111,7 @@ mlx5_vdpa_virtqs_cleanup(struct mlx5_vdpa_priv *priv)
 	for (i = 0; i < priv->caps.max_num_virtio_queues; i++) {
 		struct mlx5_vdpa_virtq *virtq = &priv->virtqs[i];
 
+		pthread_mutex_lock(&virtq->virtq_lock);
 		virtq->configured = 0;
 		for (j = 0; j < RTE_DIM(virtq->umems); ++j) {
 			if (virtq->umems[j].obj) {
@@ -90,28 +127,17 @@ mlx5_vdpa_virtqs_cleanup(struct mlx5_vdpa_priv *priv)
 		}
 		if (virtq->eqp.fw_qp)
 			mlx5_vdpa_event_qp_destroy(&virtq->eqp);
+		pthread_mutex_unlock(&virtq->virtq_lock);
 	}
 }
+
 static int
 mlx5_vdpa_virtq_unset(struct mlx5_vdpa_virtq *virtq)
 {
 	int ret = -EAGAIN;
 
-	if (rte_intr_fd_get(virtq->intr_handle) >= 0) {
-		while (ret == -EAGAIN) {
-			ret = rte_intr_callback_unregister(virtq->intr_handle,
-					mlx5_vdpa_virtq_kick_handler, virtq);
-			if (ret == -EAGAIN) {
-				DRV_LOG(DEBUG, "Try again to unregister fd %d of virtq %hu interrupt",
-					rte_intr_fd_get(virtq->intr_handle),
-					virtq->index);
-				usleep(MLX5_VDPA_INTR_RETRIES_USEC);
-			}
-		}
-		rte_intr_fd_set(virtq->intr_handle, -1);
-	}
-	rte_intr_instance_free(virtq->intr_handle);
+	mlx5_vdpa_virtq_unregister_intr_handle(virtq);
 	if (virtq->configured) {
 		ret = mlx5_vdpa_virtq_stop(virtq->priv, virtq->index);
 		if (ret)
@@ -128,10 +154,15 @@ mlx5_vdpa_virtq_unset(struct mlx5_vdpa_virtq *virtq)
 void
 mlx5_vdpa_virtqs_release(struct mlx5_vdpa_priv *priv)
 {
+	struct mlx5_vdpa_virtq *virtq;
 	int i;
 
-	for (i = 0; i < priv->nr_virtqs; i++)
-		mlx5_vdpa_virtq_unset(&priv->virtqs[i]);
+	for (i = 0; i < priv->nr_virtqs; i++) {
+		virtq = &priv->virtqs[i];
+		pthread_mutex_lock(&virtq->virtq_lock);
+		mlx5_vdpa_virtq_unset(virtq);
+		pthread_mutex_unlock(&virtq->virtq_lock);
+	}
 	priv->features = 0;
 	priv->nr_virtqs = 0;
 }
@@ -250,7 +281,7 @@ mlx5_vdpa_virtq_sub_objs_prepare(struct mlx5_vdpa_priv *priv,
 			MLX5_VIRTQ_EVENT_MODE_QP : MLX5_VIRTQ_EVENT_MODE_NO_MSIX;
 	if (attr->event_mode == MLX5_VIRTQ_EVENT_MODE_QP) {
 		ret = mlx5_vdpa_event_qp_prepare(priv,
-				vq->size, vq->callfd, &virtq->eqp);
+				vq->size, vq->callfd, virtq);
 		if (ret) {
 			DRV_LOG(ERR,
 				"Failed to create event QPs for virtq %d.",
@@ -420,7 +451,9 @@ mlx5_vdpa_virtq_setup(struct mlx5_vdpa_priv *priv, int index)
 	}
 	claim_zero(rte_vhost_enable_guest_notification(priv->vid, index, 1));
 	virtq->configured = 1;
+	rte_spinlock_lock(&priv->db_lock);
 	rte_write32(virtq->index, priv->virtq_db_addr);
+	rte_spinlock_unlock(&priv->db_lock);
 	/* Setup doorbell mapping. */
 	virtq->intr_handle =
		rte_intr_instance_alloc(RTE_INTR_INSTANCE_F_SHARED);
@@ -441,7 +474,7 @@ mlx5_vdpa_virtq_setup(struct mlx5_vdpa_priv *priv, int index)
 
 	if (rte_intr_callback_register(virtq->intr_handle,
 				       mlx5_vdpa_virtq_kick_handler, virtq)) {
-		rte_intr_fd_set(virtq->intr_handle, -1);
+		(void)rte_intr_fd_set(virtq->intr_handle, -1);
 		DRV_LOG(ERR, "Failed to register virtq %d interrupt.",
 			index);
 		goto error;
@@ -537,6 +570,7 @@ mlx5_vdpa_virtqs_prepare(struct mlx5_vdpa_priv *priv)
 	uint32_t i;
 	uint16_t nr_vring = rte_vhost_get_vring_num(priv->vid);
 	int ret = rte_vhost_get_negotiated_features(priv->vid, &priv->features);
+	struct mlx5_vdpa_virtq *virtq;
 
 	if (ret || mlx5_vdpa_features_validate(priv)) {
 		DRV_LOG(ERR, "Failed to configure negotiated features.");
@@ -556,9 +590,17 @@ mlx5_vdpa_virtqs_prepare(struct mlx5_vdpa_priv *priv)
 		return -1;
 	}
 	priv->nr_virtqs = nr_vring;
-	for (i = 0; i < nr_vring; i++)
-		if (priv->virtqs[i].enable && mlx5_vdpa_virtq_setup(priv, i))
-			goto error;
+	for (i = 0; i < nr_vring; i++) {
+		virtq = &priv->virtqs[i];
+		if (virtq->enable) {
+			pthread_mutex_lock(&virtq->virtq_lock);
+			if (mlx5_vdpa_virtq_setup(priv, i)) {
+				pthread_mutex_unlock(&virtq->virtq_lock);
+				goto error;
+			}
+			pthread_mutex_unlock(&virtq->virtq_lock);
+		}
+	}
 	return 0;
 error:
 	mlx5_vdpa_virtqs_release(priv);
-- 
2.31.1