From: Li Zhang <lizh@nvidia.com>
To: dev@dpdk.org
Subject: [RFC 15/15] vdpa/mlx5: prepare virtqueue resource creation
Date: Fri, 8 Apr 2022 10:56:05 +0300
Message-ID: <20220408075606.33056-16-lizh@nvidia.com>
X-Mailer: git-send-email 2.27.0
In-Reply-To: <20220408075606.33056-1-lizh@nvidia.com>
References: <20220408075606.33056-1-lizh@nvidia.com>
MIME-Version: 1.0
Content-Type: text/plain

Split the creation of the virtq resources between the configuration
threads. The virtq resources also need to be pre-created again after
virtq destruction. This accelerates the LM (live migration) process
and reduces its time by 30%.

Signed-off-by: Li Zhang <lizh@nvidia.com>
---
 drivers/vdpa/mlx5/mlx5_vdpa.c         | 69 +++++++++++++++++++++++----
 drivers/vdpa/mlx5/mlx5_vdpa.h         |  7 ++-
 drivers/vdpa/mlx5/mlx5_vdpa_cthread.c | 14 +++++-
 drivers/vdpa/mlx5/mlx5_vdpa_virtq.c   | 35 ++++++++++----
 4 files changed, 104 insertions(+), 21 deletions(-)

diff --git a/drivers/vdpa/mlx5/mlx5_vdpa.c b/drivers/vdpa/mlx5/mlx5_vdpa.c
index eaca571e3e..15ce30bc49 100644
--- a/drivers/vdpa/mlx5/mlx5_vdpa.c
+++ b/drivers/vdpa/mlx5/mlx5_vdpa.c
@@ -275,13 +275,17 @@ mlx5_vdpa_wait_dev_close_tasks_done(struct mlx5_vdpa_priv *priv)
 }
 
 static int
-mlx5_vdpa_dev_close(int vid)
+_internal_mlx5_vdpa_dev_close(int vid, bool release_resource)
 {
 	struct rte_vdpa_device *vdev = rte_vhost_get_vdpa_device(vid);
-	struct mlx5_vdpa_priv *priv =
-		mlx5_vdpa_find_priv_resource_by_vdev(vdev);
+	struct mlx5_vdpa_priv *priv;
 	int ret = 0;
 
+	if (!vdev) {
+		DRV_LOG(ERR, "Invalid vDPA device.");
+		return -1;
+	}
+	priv = mlx5_vdpa_find_priv_resource_by_vdev(vdev);
 	if (priv == NULL) {
 		DRV_LOG(ERR, "Invalid vDPA device: %s.", vdev->device->name);
 		return -1;
@@ -291,7 +295,7 @@ mlx5_vdpa_dev_close(int vid)
 		ret |= mlx5_vdpa_lm_log(priv);
 		priv->state = MLX5_VDPA_STATE_IN_PROGRESS;
 	}
-	if (priv->use_c_thread) {
+	if (priv->use_c_thread && !release_resource) {
 		if (priv->last_c_thrd_idx >=
 			(conf_thread_mng.max_thrds - 1))
 			priv->last_c_thrd_idx = 0;
@@ -315,7 +319,7 @@ mlx5_vdpa_dev_close(int vid)
 	pthread_mutex_lock(&priv->steer_update_lock);
 	mlx5_vdpa_steer_unset(priv);
 	pthread_mutex_unlock(&priv->steer_update_lock);
-	mlx5_vdpa_virtqs_release(priv);
+	mlx5_vdpa_virtqs_release(priv, release_resource);
 	mlx5_vdpa_drain_cq(priv);
 	if (priv->lm_mr.addr)
 		mlx5_os_wrapped_mkey_destroy(&priv->lm_mr);
@@ -329,6 +333,12 @@ mlx5_vdpa_dev_close(int vid)
 	return ret;
 }
 
+static int
+mlx5_vdpa_dev_close(int vid)
+{
+	return _internal_mlx5_vdpa_dev_close(vid, false);
+}
+
 static int
 mlx5_vdpa_dev_config(int vid)
 {
@@ -624,8 +634,9 @@ mlx5_vdpa_config_get(struct mlx5_kvargs_ctrl *mkvlist,
 static int
 mlx5_vdpa_virtq_resource_prepare(struct mlx5_vdpa_priv *priv)
 {
+	uint32_t remaining_cnt = 0, err_cnt = 0, task_num = 0;
 	uint32_t max_queues = priv->queues * 2;
-	uint32_t index;
+	uint32_t index, thrd_idx, data[1];
 	struct mlx5_vdpa_virtq *virtq;
 
 	for (index = 0; index < priv->caps.max_num_virtio_queues * 2;
@@ -635,10 +646,48 @@ mlx5_vdpa_virtq_resource_prepare(struct mlx5_vdpa_priv *priv)
 	}
 	if (!priv->queues || !priv->queue_size)
 		return 0;
-	for (index = 0; index < max_queues; ++index)
-		if (mlx5_vdpa_virtq_single_resource_prepare(priv,
-			index))
+	if (priv->use_c_thread) {
+		uint32_t main_task_idx[max_queues];
+
+		for (index = 0; index < max_queues; ++index) {
+			virtq = &priv->virtqs[index];
+			thrd_idx = index % (conf_thread_mng.max_thrds + 1);
+			if (!thrd_idx) {
+				main_task_idx[task_num] = index;
+				task_num++;
+				continue;
+			}
+			thrd_idx = priv->last_c_thrd_idx + 1;
+			if (thrd_idx >= conf_thread_mng.max_thrds)
+				thrd_idx = 0;
+			priv->last_c_thrd_idx = thrd_idx;
+			data[0] = index;
+			if (mlx5_vdpa_task_add(priv, thrd_idx,
+				MLX5_VDPA_TASK_PREPARE_VIRTQ,
+				&remaining_cnt, &err_cnt,
+				(void **)&data, 1)) {
+				DRV_LOG(ERR, "Fail to add "
+				"task prepare virtq (%d).", index);
+				main_task_idx[task_num] = index;
+				task_num++;
+			}
+		}
+		for (index = 0; index < task_num; ++index)
+			if (mlx5_vdpa_virtq_single_resource_prepare(priv,
+				main_task_idx[index]))
+				goto error;
+		if (mlx5_vdpa_c_thread_wait_bulk_tasks_done(&remaining_cnt,
+			&err_cnt, 2000)) {
+			DRV_LOG(ERR,
+			"Failed to wait virt-queue prepare tasks ready.");
 			goto error;
+		}
+	} else {
+		for (index = 0; index < max_queues; ++index)
+			if (mlx5_vdpa_virtq_single_resource_prepare(priv,
+				index))
+				goto error;
+	}
 	if (mlx5_vdpa_is_modify_virtq_supported(priv))
 		if (mlx5_vdpa_steer_update(priv, true))
 			goto error;
@@ -855,7 +904,7 @@ static void
 mlx5_vdpa_dev_release(struct mlx5_vdpa_priv *priv)
 {
 	if (priv->state == MLX5_VDPA_STATE_CONFIGURED)
-		mlx5_vdpa_dev_close(priv->vid);
+		_internal_mlx5_vdpa_dev_close(priv->vid, true);
 	if (priv->use_c_thread)
 		mlx5_vdpa_wait_dev_close_tasks_done(priv);
 	mlx5_vdpa_release_dev_resources(priv);
diff --git a/drivers/vdpa/mlx5/mlx5_vdpa.h b/drivers/vdpa/mlx5/mlx5_vdpa.h
index 00700261ec..477f2fdde0 100644
--- a/drivers/vdpa/mlx5/mlx5_vdpa.h
+++ b/drivers/vdpa/mlx5/mlx5_vdpa.h
@@ -85,6 +85,7 @@ enum mlx5_vdpa_task_type {
 	MLX5_VDPA_TASK_SETUP_VIRTQ,
 	MLX5_VDPA_TASK_STOP_VIRTQ,
 	MLX5_VDPA_TASK_DEV_CLOSE_NOWAIT,
+	MLX5_VDPA_TASK_PREPARE_VIRTQ,
 };
 
 /* Generic task information and size must be multiple of 4B. */
@@ -355,8 +356,12 @@ void mlx5_vdpa_err_event_unset(struct mlx5_vdpa_priv *priv);
  *
  * @param[in] priv
  *   The vdpa driver private structure.
+ * @param[in] release_resource
+ *   The vdpa driver releases the resources without preparing them again.
  */
-void mlx5_vdpa_virtqs_release(struct mlx5_vdpa_priv *priv);
+void
+mlx5_vdpa_virtqs_release(struct mlx5_vdpa_priv *priv,
+		bool release_resource);
 
 /**
  * Cleanup cached resources of all virtqs.
diff --git a/drivers/vdpa/mlx5/mlx5_vdpa_cthread.c b/drivers/vdpa/mlx5/mlx5_vdpa_cthread.c
index 07efa0cb16..97109206d2 100644
--- a/drivers/vdpa/mlx5/mlx5_vdpa_cthread.c
+++ b/drivers/vdpa/mlx5/mlx5_vdpa_cthread.c
@@ -196,7 +196,7 @@ mlx5_vdpa_c_thread_handle(void *arg)
 			pthread_mutex_lock(&priv->steer_update_lock);
 			mlx5_vdpa_steer_unset(priv);
 			pthread_mutex_unlock(&priv->steer_update_lock);
-			mlx5_vdpa_virtqs_release(priv);
+			mlx5_vdpa_virtqs_release(priv, false);
 			mlx5_vdpa_drain_cq(priv);
 			if (priv->lm_mr.addr)
 				mlx5_os_wrapped_mkey_destroy(
@@ -208,6 +208,18 @@
 				&priv->dev_close_progress, 0,
 				__ATOMIC_RELAXED);
 			break;
+		case MLX5_VDPA_TASK_PREPARE_VIRTQ:
+			ret = mlx5_vdpa_virtq_single_resource_prepare(
+					priv, task.idx);
+			if (ret) {
+				DRV_LOG(ERR,
+				"Failed to prepare virtq %d.",
+				task.idx);
+				__atomic_fetch_add(
+					task.err_cnt, 1,
+					__ATOMIC_RELAXED);
+			}
+			break;
 		default:
 			DRV_LOG(ERR, "Invalid vdpa task type %d.",
 				task.type);
diff --git a/drivers/vdpa/mlx5/mlx5_vdpa_virtq.c b/drivers/vdpa/mlx5/mlx5_vdpa_virtq.c
index 4a74738d9c..de6eab9bc6 100644
--- a/drivers/vdpa/mlx5/mlx5_vdpa_virtq.c
+++ b/drivers/vdpa/mlx5/mlx5_vdpa_virtq.c
@@ -113,6 +113,16 @@ mlx5_vdpa_virtq_unreg_intr_handle_all(struct mlx5_vdpa_priv *priv)
 	}
 }
 
+static void
+mlx5_vdpa_vq_destroy(struct mlx5_vdpa_virtq *virtq)
+{
+	/* Clean pre-created resource in dev removal only. */
+	claim_zero(mlx5_devx_cmd_destroy(virtq->virtq));
+	virtq->index = 0;
+	virtq->virtq = NULL;
+	virtq->configured = 0;
+}
+
 /* Release cached VQ resources. */
 void
 mlx5_vdpa_virtqs_cleanup(struct mlx5_vdpa_priv *priv)
@@ -125,6 +135,8 @@ mlx5_vdpa_virtqs_cleanup(struct mlx5_vdpa_priv *priv)
 		if (virtq->index != i)
 			continue;
 		pthread_mutex_lock(&virtq->virtq_lock);
+		if (virtq->virtq)
+			mlx5_vdpa_vq_destroy(virtq);
 		for (j = 0; j < RTE_DIM(virtq->umems); ++j) {
 			if (virtq->umems[j].obj) {
 				claim_zero(mlx5_glue->devx_umem_dereg
@@ -154,29 +166,34 @@ mlx5_vdpa_virtq_unset(struct mlx5_vdpa_virtq *virtq)
 		if (ret)
 			DRV_LOG(WARNING, "Failed to stop virtq %d.",
 				virtq->index);
-		claim_zero(mlx5_devx_cmd_destroy(virtq->virtq));
-		virtq->index = 0;
-		virtq->virtq = NULL;
-		virtq->configured = 0;
+		mlx5_vdpa_vq_destroy(virtq);
 	}
 	virtq->notifier_state = MLX5_VDPA_NOTIFIER_STATE_DISABLED;
 }
 
 void
-mlx5_vdpa_virtqs_release(struct mlx5_vdpa_priv *priv)
+mlx5_vdpa_virtqs_release(struct mlx5_vdpa_priv *priv,
+		bool release_resource)
 {
 	struct mlx5_vdpa_virtq *virtq;
-	int i;
+	uint32_t i, max_virtq;
 
-	for (i = 0; i < priv->nr_virtqs; i++) {
+	max_virtq = (release_resource &&
+		(priv->queues * 2) > priv->nr_virtqs) ?
+		(priv->queues * 2) : priv->nr_virtqs;
+	for (i = 0; i < max_virtq; i++) {
 		virtq = &priv->virtqs[i];
 		pthread_mutex_lock(&virtq->virtq_lock);
 		mlx5_vdpa_virtq_unset(virtq);
-		if (i < (priv->queues * 2))
+		if (!release_resource && i < (priv->queues * 2))
 			mlx5_vdpa_virtq_single_resource_prepare(
 					priv, i);
 		pthread_mutex_unlock(&virtq->virtq_lock);
 	}
+	if (!release_resource && priv->queues &&
+	    mlx5_vdpa_is_modify_virtq_supported(priv))
+		if (mlx5_vdpa_steer_update(priv, true))
+			mlx5_vdpa_steer_unset(priv);
 	priv->features = 0;
 	priv->nr_virtqs = 0;
 }
@@ -733,7 +750,7 @@ mlx5_vdpa_virtqs_prepare(struct mlx5_vdpa_priv *priv)
 	}
 	return 0;
 error:
-	mlx5_vdpa_virtqs_release(priv);
+	mlx5_vdpa_virtqs_release(priv, true);
 	return -1;
 }
-- 
2.27.0
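
P.S. For readers new to the driver's configuration-thread machinery: the
scheduling pattern used by mlx5_vdpa_virtq_resource_prepare() above
(round-robin distribution of per-queue work, with the remainder handled on
the main thread, then a counted bulk wait) can be reduced to the
self-contained sketch below. This is an illustration only, built on plain
pthreads and C11 atomics with hypothetical names (prepare_queue, N_WORKERS,
N_QUEUES), not the driver's API; the real code enqueues tasks to a fixed
thread pool via priv->last_c_thrd_idx rather than spawning a thread per task.

/*
 * Hypothetical sketch (not driver code): split per-queue "prepare" work
 * between worker threads and the main thread, then poll an atomic
 * remaining-task counter with a timeout, mirroring mlx5_vdpa_task_add()
 * plus mlx5_vdpa_c_thread_wait_bulk_tasks_done().
 */
#include <pthread.h>
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>
#include <unistd.h>

#define N_WORKERS 3	/* stands in for conf_thread_mng.max_thrds */
#define N_QUEUES  8	/* stands in for priv->queues * 2 */

struct task {
	int queue_idx;
	atomic_uint *remaining;
	atomic_uint *errors;
};

/* Stand-in for mlx5_vdpa_virtq_single_resource_prepare(). */
static bool
prepare_queue(int idx)
{
	printf("preparing virtq %d\n", idx);
	return true;
}

static void *
worker(void *arg)
{
	struct task *t = arg;

	if (!prepare_queue(t->queue_idx))
		atomic_fetch_add(t->errors, 1);
	/* Decrement last, so the waiter only counts finished work. */
	atomic_fetch_sub(t->remaining, 1);
	return NULL;
}

int
main(void)
{
	pthread_t thr[N_QUEUES];
	struct task tasks[N_QUEUES];
	atomic_uint remaining = 0, errors = 0;
	int spawned = 0;

	for (int i = 0; i < N_QUEUES; i++) {
		/* Every (N_WORKERS + 1)-th queue stays on the main thread,
		 * mirroring the "thrd_idx == 0" branch in the patch. */
		if (i % (N_WORKERS + 1) == 0) {
			if (!prepare_queue(i))
				return 1;
			continue;
		}
		tasks[spawned] = (struct task){ i, &remaining, &errors };
		atomic_fetch_add(&remaining, 1);
		pthread_create(&thr[spawned], NULL, worker, &tasks[spawned]);
		spawned++;
	}
	/* Crude 2000 ms bulk wait, like the driver's bulk-task timeout. */
	for (int ms = 0; atomic_load(&remaining) && ms < 2000; ms++)
		usleep(1000);
	for (int i = 0; i < spawned; i++)
		pthread_join(thr[i], NULL);
	return atomic_load(&errors) ? 1 : 0;
}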