From: Li Zhang
To: dev@dpdk.org
Cc: Yajun Wu
Subject: [RFC 02/15] vdpa/mlx5: support pre-create virtq resource
Date: Fri, 8 Apr 2022 10:55:52 +0300
Message-ID: <20220408075606.33056-3-lizh@nvidia.com>
X-Mailer: git-send-email 2.27.0
In-Reply-To: <20220408075606.33056-1-lizh@nvidia.com>
References: <20220408075606.33056-1-lizh@nvidia.com>
List-Id: DPDK patches and discussions

From: Yajun Wu

The motivation of this change is to reduce vDPA device queue creation
time by creating some queue resources at the vDPA device probe stage.

In the VM live migration scenario, this saves about 0.8 ms per queue
creation and thus reduces LM network downtime.

To create queue resources (umem/counter) in advance, the driver needs to
know the virtio queue depth and the max number of queues the VM will
use. Introduce two new devargs: queues (max queue pair number) and
queue_size (queue depth). Both must be provided together; if only one is
given, it is ignored and nothing is pre-created.

The queues and queue_size values must also be identical to the vhost
configuration the driver receives later.
Otherwise, the pre-created resources are either wasted or missing, or
must be destroyed and recreated (in case of a queue_size mismatch).

Pre-created umem/counter resources stay alive until vDPA device removal.

Signed-off-by: Yajun Wu
---
 doc/guides/vdpadevs/mlx5.rst  | 14 +++++++
 drivers/vdpa/mlx5/mlx5_vdpa.c | 75 ++++++++++++++++++++++++++++++++++-
 drivers/vdpa/mlx5/mlx5_vdpa.h |  2 +
 3 files changed, 89 insertions(+), 2 deletions(-)

diff --git a/doc/guides/vdpadevs/mlx5.rst b/doc/guides/vdpadevs/mlx5.rst
index 3ded142311..0ad77bf535 100644
--- a/doc/guides/vdpadevs/mlx5.rst
+++ b/doc/guides/vdpadevs/mlx5.rst
@@ -101,6 +101,20 @@ for an additional list of options shared with other mlx5 drivers.

   - 0, HW default.

+- ``queue_size`` parameter [int]
+
+  - 1 - 1024, virtio queue depth for pre-creating queue resources to speed up
+    first-time queue creation. Set it together with the queues devarg.
+
+  - 0, default value, no pre-created virtq resources.
+
+- ``queues`` parameter [int]
+
+  - 1 - 128, max number of virtio queue pairs (including 1 Rx queue and 1 Tx
+    queue) for pre-creating queue resources to speed up first-time queue
+    creation. Set it together with the queue_size devarg.
+
+  - 0, default value, no pre-created virtq resources.

 Error handling
 ^^^^^^^^^^^^^^

diff --git a/drivers/vdpa/mlx5/mlx5_vdpa.c b/drivers/vdpa/mlx5/mlx5_vdpa.c
index 534ba64b02..57f9b05e35 100644
--- a/drivers/vdpa/mlx5/mlx5_vdpa.c
+++ b/drivers/vdpa/mlx5/mlx5_vdpa.c
@@ -244,7 +244,9 @@ mlx5_vdpa_mtu_set(struct mlx5_vdpa_priv *priv)
 static void
 mlx5_vdpa_dev_cache_clean(struct mlx5_vdpa_priv *priv)
 {
-	mlx5_vdpa_virtqs_cleanup(priv);
+	/* Clean pre-created resources in dev removal only. */
+	if (!priv->queues)
+		mlx5_vdpa_virtqs_cleanup(priv);
 	mlx5_vdpa_mem_dereg(priv);
 }

@@ -494,6 +496,12 @@ mlx5_vdpa_args_check_handler(const char *key, const char *val, void *opaque)
 		priv->hw_max_latency_us = (uint32_t)tmp;
 	} else if (strcmp(key, "hw_max_pending_comp") == 0) {
 		priv->hw_max_pending_comp = (uint32_t)tmp;
+	} else if (strcmp(key, "queue_size") == 0) {
+		priv->queue_size = (uint16_t)tmp;
+	} else if (strcmp(key, "queues") == 0) {
+		priv->queues = (uint16_t)tmp;
+	} else {
+		DRV_LOG(WARNING, "Invalid key %s.", key);
 	}
 	return 0;
 }
@@ -524,9 +532,68 @@ mlx5_vdpa_config_get(struct mlx5_kvargs_ctrl *mkvlist,
 	if (!priv->event_us &&
 	    priv->event_mode == MLX5_VDPA_EVENT_MODE_DYNAMIC_TIMER)
 		priv->event_us = MLX5_VDPA_DEFAULT_TIMER_STEP_US;
+	if ((priv->queue_size && !priv->queues) ||
+	    (!priv->queue_size && priv->queues)) {
+		priv->queue_size = 0;
+		priv->queues = 0;
+		DRV_LOG(WARNING, "Please provide both queue_size and queues.");
+	}
 	DRV_LOG(DEBUG, "event mode is %d.", priv->event_mode);
 	DRV_LOG(DEBUG, "event_us is %u us.", priv->event_us);
 	DRV_LOG(DEBUG, "no traffic max is %u.", priv->no_traffic_max);
+	DRV_LOG(DEBUG, "queues is %u, queue_size is %u.", priv->queues,
+		priv->queue_size);
+}
+
+static int
+mlx5_vdpa_virtq_resource_prepare(struct mlx5_vdpa_priv *priv)
+{
+	uint32_t index;
+	uint32_t i;
+
+	if (!priv->queues)
+		return 0;
+	for (index = 0; index < (priv->queues * 2); ++index) {
+		struct mlx5_vdpa_virtq *virtq = &priv->virtqs[index];
+
+		if (priv->caps.queue_counters_valid) {
+			if (!virtq->counters)
+				virtq->counters =
+					mlx5_devx_cmd_create_virtio_q_counters
+						(priv->cdev->ctx);
+			if (!virtq->counters) {
+				DRV_LOG(ERR, "Failed to create virtq counters for virtq"
+					" %d.", index);
+				return -1;
+			}
+		}
+		for (i = 0; i < RTE_DIM(virtq->umems); ++i) {
+			uint32_t size;
+			void *buf;
+			struct mlx5dv_devx_umem *obj;
+
+			size = priv->caps.umems[i].a * priv->queue_size +
+					priv->caps.umems[i].b;
+			buf = rte_zmalloc(__func__, size, 4096);
+			if (buf == NULL) {
+				DRV_LOG(ERR, "Cannot allocate umem %d memory for virtq"
+					" %u.", i, index);
+				return -1;
+			}
+			obj = mlx5_glue->devx_umem_reg(priv->cdev->ctx, buf,
+					size, IBV_ACCESS_LOCAL_WRITE);
+			if (obj == NULL) {
+				rte_free(buf);
+				DRV_LOG(ERR, "Failed to register umem %d for virtq %u.",
+					i, index);
+				return -1;
+			}
+			virtq->umems[i].size = size;
+			virtq->umems[i].buf = buf;
+			virtq->umems[i].obj = obj;
+		}
+	}
+	return 0;
 }

 static int
@@ -604,6 +671,8 @@ mlx5_vdpa_create_dev_resources(struct mlx5_vdpa_priv *priv)
 		return -rte_errno;
 	if (mlx5_vdpa_event_qp_global_prepare(priv))
 		return -rte_errno;
+	if (mlx5_vdpa_virtq_resource_prepare(priv))
+		return -rte_errno;
 	return 0;
 }

@@ -638,6 +707,7 @@ mlx5_vdpa_dev_probe(struct mlx5_common_device *cdev,
 	priv->num_lag_ports = 1;
 	pthread_mutex_init(&priv->vq_config_lock, NULL);
 	priv->cdev = cdev;
+	mlx5_vdpa_config_get(mkvlist, priv);
 	if (mlx5_vdpa_create_dev_resources(priv))
 		goto error;
 	priv->vdev = rte_vdpa_register_device(cdev->dev, &mlx5_vdpa_ops);
@@ -646,7 +716,6 @@ mlx5_vdpa_dev_probe(struct mlx5_common_device *cdev,
 		rte_errno = rte_errno ? rte_errno : EINVAL;
 		goto error;
 	}
-	mlx5_vdpa_config_get(mkvlist, priv);
 	SLIST_INIT(&priv->mr_list);
 	pthread_mutex_lock(&priv_list_lock);
 	TAILQ_INSERT_TAIL(&priv_list, priv, next);
@@ -684,6 +753,8 @@ mlx5_vdpa_release_dev_resources(struct mlx5_vdpa_priv *priv)
 {
 	uint32_t i;

+	if (priv->queues)
+		mlx5_vdpa_virtqs_cleanup(priv);
 	mlx5_vdpa_dev_cache_clean(priv);
 	for (i = 0; i < priv->caps.max_num_virtio_queues * 2; i++) {
 		if (!priv->virtqs[i].counters)
diff --git a/drivers/vdpa/mlx5/mlx5_vdpa.h b/drivers/vdpa/mlx5/mlx5_vdpa.h
index e7f3319f89..f6719a3c60 100644
--- a/drivers/vdpa/mlx5/mlx5_vdpa.h
+++ b/drivers/vdpa/mlx5/mlx5_vdpa.h
@@ -135,6 +135,8 @@ struct mlx5_vdpa_priv {
 	uint8_t hw_latency_mode; /* Hardware CQ moderation mode. */
 	uint16_t hw_max_latency_us; /* Hardware CQ moderation period in usec. */
 	uint16_t hw_max_pending_comp; /* Hardware CQ moderation counter. */
+	uint16_t queue_size; /* Virtq depth for pre-creating virtq resources. */
+	uint16_t queues; /* Max virtq pairs for pre-creating virtq resources. */
 	struct rte_vdpa_device *vdev; /* vDPA device. */
 	struct mlx5_common_device *cdev; /* Backend mlx5 device. */
 	int vid; /* vhost device id. */
-- 
2.27.0