From mboxrd@z Thu Jan 1 00:00:00 1970
From: Li Zhang <lizh@nvidia.com>
To: <dev@dpdk.org>
Subject: [RFC 13/15] vdpa/mlx5: add device close task
Date: Fri, 8 Apr 2022 10:56:03 +0300
Message-ID: <20220408075606.33056-14-lizh@nvidia.com>
X-Mailer: git-send-email 2.27.0
In-Reply-To: <20220408075606.33056-1-lizh@nvidia.com>
References: <20220408075606.33056-1-lizh@nvidia.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain
List-Id: DPDK patches and discussions <dev.dpdk.org>

Split the virtq device-close tasks, which run after the virt-queues are
stopped, between the configuration threads. This accelerates the live
migration (LM) process and reduces its time by 50%.

Signed-off-by: Li Zhang <lizh@nvidia.com>
---
 drivers/vdpa/mlx5/mlx5_vdpa.c         | 51 +++++++++++++++++++++++++--
 drivers/vdpa/mlx5/mlx5_vdpa.h         |  8 +++++
 drivers/vdpa/mlx5/mlx5_vdpa_cthread.c | 20 ++++++++++-
 drivers/vdpa/mlx5/mlx5_vdpa_virtq.c   | 14 ++++++++
 4 files changed, 90 insertions(+), 3 deletions(-)
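Note (annotation below the cut line, ignored by git am): the close path
in this patch follows a simple hand-off pattern: mlx5_vdpa_dev_close()
publishes priv->dev_close_progress with a relaxed atomic store, queues a
MLX5_VDPA_TASK_DEV_CLOSE_NOWAIT task to a configuration thread, and
returns; mlx5_vdpa_dev_config() and mlx5_vdpa_dev_release() later poll
that flag with a bounded timeout before reusing or releasing the device.
Below is a minimal stand-alone sketch of the wait pattern. It is
illustrative only: C11 atomics and usleep() stand in for the GCC
__atomic builtins and rte_delay_us_sleep() used by the driver, and
close_task_done()/wait_close_done() are hypothetical helpers, not
driver API.

#include <stdatomic.h>
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>
#include <unistd.h>

static atomic_uint dev_close_progress; /* 1 while a close task runs. */

/* Configuration-thread side: clear the flag once teardown finishes. */
static void
close_task_done(void)
{
	atomic_store_explicit(&dev_close_progress, 0, memory_order_relaxed);
}

/* Closer side: poll up to 1000 x 10 ms, mirroring the bounded wait in
 * mlx5_vdpa_wait_dev_close_tasks_done(); returns true on timeout.
 */
static bool
wait_close_done(void)
{
	uint32_t timeout = 0;

	while (atomic_load_explicit(&dev_close_progress,
				    memory_order_relaxed) != 0 &&
	       timeout < 1000) {
		usleep(10000); /* 10 ms per poll iteration. */
		timeout++;
	}
	if (atomic_load_explicit(&dev_close_progress,
				 memory_order_relaxed) != 0) {
		fprintf(stderr, "close tasks still pending after timeout\n");
		return true;
	}
	return false;
}

int
main(void)
{
	/* Simulate a queued close task that completes immediately. */
	atomic_store_explicit(&dev_close_progress, 1, memory_order_relaxed);
	close_task_done();
	printf("wait failed: %s\n", wait_close_done() ? "yes" : "no");
	return 0;
}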
"); + goto single_thrd; + } + priv->state = MLX5_VDPA_STATE_PROBED; + DRV_LOG(INFO, "vDPA device %d was closed.", vid); + return ret; + } +single_thrd: pthread_mutex_lock(&priv->steer_update_lock); mlx5_vdpa_steer_unset(priv); pthread_mutex_unlock(&priv->steer_update_lock); @@ -278,10 +319,12 @@ mlx5_vdpa_dev_close(int vid) mlx5_vdpa_drain_cq(priv); if (priv->lm_mr.addr) mlx5_os_wrapped_mkey_destroy(&priv->lm_mr); - priv->state = MLX5_VDPA_STATE_PROBED; if (!priv->connected) mlx5_vdpa_dev_cache_clean(priv); priv->vid = 0; + __atomic_store_n(&priv->dev_close_progress, 0, + __ATOMIC_RELAXED); + priv->state = MLX5_VDPA_STATE_PROBED; DRV_LOG(INFO, "vDPA device %d was closed.", vid); return ret; } @@ -302,6 +345,8 @@ mlx5_vdpa_dev_config(int vid) DRV_LOG(ERR, "Failed to reconfigure vid %d.", vid); return -1; } + if (mlx5_vdpa_wait_dev_close_tasks_done(priv)) + return -1; priv->vid = vid; priv->connected = true; if (mlx5_vdpa_mtu_set(priv)) @@ -839,6 +884,8 @@ mlx5_vdpa_dev_release(struct mlx5_vdpa_priv *priv) { if (priv->state == MLX5_VDPA_STATE_CONFIGURED) mlx5_vdpa_dev_close(priv->vid); + if (priv->use_c_thread) + mlx5_vdpa_wait_dev_close_tasks_done(priv); mlx5_vdpa_release_dev_resources(priv); if (priv->vdev) rte_vdpa_unregister_device(priv->vdev); diff --git a/drivers/vdpa/mlx5/mlx5_vdpa.h b/drivers/vdpa/mlx5/mlx5_vdpa.h index e08931719f..b6392b9d66 100644 --- a/drivers/vdpa/mlx5/mlx5_vdpa.h +++ b/drivers/vdpa/mlx5/mlx5_vdpa.h @@ -84,6 +84,7 @@ enum mlx5_vdpa_task_type { MLX5_VDPA_TASK_REG_MR = 1, MLX5_VDPA_TASK_SETUP_VIRTQ, MLX5_VDPA_TASK_STOP_VIRTQ, + MLX5_VDPA_TASK_DEV_CLOSE_NOWAIT, }; /* Generic task information and size must be multiple of 4B. */ @@ -206,6 +207,7 @@ struct mlx5_vdpa_priv { uint64_t features; /* Negotiated features. */ uint16_t log_max_rqt_size; uint16_t last_c_thrd_idx; + uint16_t dev_close_progress; uint16_t num_mrs; /* Number of memory regions. 
 	struct mlx5_vdpa_steer steer;
 	struct mlx5dv_var *var;
@@ -578,4 +580,10 @@ mlx5_vdpa_c_thread_wait_bulk_tasks_done(uint32_t *remaining_cnt,
 		uint32_t *err_cnt, uint32_t sleep_time);
 int
 mlx5_vdpa_virtq_setup(struct mlx5_vdpa_priv *priv, int index, bool reg_kick);
+void
+mlx5_vdpa_vq_destroy(struct mlx5_vdpa_virtq *virtq);
+void
+mlx5_vdpa_dev_cache_clean(struct mlx5_vdpa_priv *priv);
+void
+mlx5_vdpa_virtq_unreg_intr_handle_all(struct mlx5_vdpa_priv *priv);
 #endif /* RTE_PMD_MLX5_VDPA_H_ */
diff --git a/drivers/vdpa/mlx5/mlx5_vdpa_cthread.c b/drivers/vdpa/mlx5/mlx5_vdpa_cthread.c
index 0e54226a90..07efa0cb16 100644
--- a/drivers/vdpa/mlx5/mlx5_vdpa_cthread.c
+++ b/drivers/vdpa/mlx5/mlx5_vdpa_cthread.c
@@ -63,7 +63,8 @@ mlx5_vdpa_task_add(struct mlx5_vdpa_priv *priv,
 		task[i].type = task_type;
 		task[i].remaining_cnt = remaining_cnt;
 		task[i].err_cnt = err_cnt;
-		task[i].idx = data[i];
+		if (data)
+			task[i].idx = data[i];
 	}
 	if (!mlx5_vdpa_c_thrd_ring_enqueue_bulk(rng, (void **)&task, num, NULL))
 		return -1;
@@ -190,6 +191,23 @@ mlx5_vdpa_c_thread_handle(void *arg)
 				MLX5_VDPA_USED_RING_LEN(virtq->vq_size));
 			pthread_mutex_unlock(&virtq->virtq_lock);
 			break;
+		case MLX5_VDPA_TASK_DEV_CLOSE_NOWAIT:
+			mlx5_vdpa_virtq_unreg_intr_handle_all(priv);
+			pthread_mutex_lock(&priv->steer_update_lock);
+			mlx5_vdpa_steer_unset(priv);
+			pthread_mutex_unlock(&priv->steer_update_lock);
+			mlx5_vdpa_virtqs_release(priv);
+			mlx5_vdpa_drain_cq(priv);
+			if (priv->lm_mr.addr)
+				mlx5_os_wrapped_mkey_destroy(
+					&priv->lm_mr);
+			if (!priv->connected)
+				mlx5_vdpa_dev_cache_clean(priv);
+			priv->vid = 0;
+			__atomic_store_n(
+				&priv->dev_close_progress, 0,
+				__ATOMIC_RELAXED);
+			break;
 		default:
 			DRV_LOG(ERR, "Invalid vdpa task type %d.",
 				task.type);
diff --git a/drivers/vdpa/mlx5/mlx5_vdpa_virtq.c b/drivers/vdpa/mlx5/mlx5_vdpa_virtq.c
index 127b1cee7f..c1281be5f2 100644
--- a/drivers/vdpa/mlx5/mlx5_vdpa_virtq.c
+++ b/drivers/vdpa/mlx5/mlx5_vdpa_virtq.c
@@ -99,6 +99,20 @@ mlx5_vdpa_virtq_unregister_intr_handle(struct mlx5_vdpa_virtq *virtq)
 	rte_intr_instance_free(virtq->intr_handle);
 }
 
+void
+mlx5_vdpa_virtq_unreg_intr_handle_all(struct mlx5_vdpa_priv *priv)
+{
+	uint32_t i;
+	struct mlx5_vdpa_virtq *virtq;
+
+	for (i = 0; i < priv->nr_virtqs; i++) {
+		virtq = &priv->virtqs[i];
+		pthread_mutex_lock(&virtq->virtq_lock);
+		mlx5_vdpa_virtq_unregister_intr_handle(virtq);
+		pthread_mutex_unlock(&virtq->virtq_lock);
+	}
+}
+
 /* Release cached VQ resources. */
 void
 mlx5_vdpa_virtqs_cleanup(struct mlx5_vdpa_priv *priv)
-- 
2.27.0
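
Annotation (after the patch, for reviewers): mlx5_vdpa_dev_close() also
load-balances by advancing priv->last_c_thrd_idx round-robin across
conf_thread_mng.max_thrds configuration threads before queueing each
close task, falling back to the single-threaded path if queueing fails.
A hedged stand-alone sketch of that selection logic follows; MAX_THRDS
and task_add() are hypothetical stand-ins for conf_thread_mng.max_thrds
and mlx5_vdpa_task_add(), not driver API.

#include <stdint.h>
#include <stdio.h>

#define MAX_THRDS 4 /* Stand-in for conf_thread_mng.max_thrds. */

static uint16_t last_c_thrd_idx;

/* Stand-in for mlx5_vdpa_task_add(): returns 0 on success, nonzero on
 * failure, in which case the caller would fall back to the
 * single-threaded teardown (the "goto single_thrd" path in the patch).
 */
static int
task_add(uint16_t thrd_idx)
{
	printf("close task queued on configuration thread %u\n",
	       (unsigned int)thrd_idx);
	return 0;
}

int
main(void)
{
	for (int i = 0; i < 6; i++) {
		/* Advance round-robin, wrapping at the last thread. */
		if (last_c_thrd_idx >= MAX_THRDS - 1)
			last_c_thrd_idx = 0;
		else
			last_c_thrd_idx++;
		if (task_add(last_c_thrd_idx))
			fprintf(stderr, "fall back to single-thread close\n");
	}
	return 0;
}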