From mboxrd@z Thu Jan 1 00:00:00 1970
From: Vijay Srivastava <vsrivast@xilinx.com>
To: <dev@dpdk.org>
CC: <maxime.coquelin@redhat.com>, <chenbo.xia@intel.com>,
 <andrew.rybchenko@oktetlabs.ru>, Vijay Kumar Srivastava <vsrivast@xilinx.com>
Date: Thu, 28 Oct 2021 13:24:49 +0530
Message-ID: <20211028075452.11804-8-vsrivast@xilinx.com>
X-Mailer: git-send-email 2.25.0
In-Reply-To: <20211028075452.11804-1-vsrivast@xilinx.com>
References: <20210706164418.32615-1-vsrivast@xilinx.com>
 <20211028075452.11804-1-vsrivast@xilinx.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain
Subject: [dpdk-dev] [PATCH v2 07/10] vdpa/sfc: add support to get queue notify area info

From: Vijay Kumar Srivastava <vsrivast@xilinx.com>

Implement the vDPA ops get_notify_area to get the notify area
info of the queue.

Signed-off-by: Vijay Kumar Srivastava <vsrivast@xilinx.com>
---
v2:
* Added error log in sfc_vdpa_get_notify_area.

 drivers/vdpa/sfc/sfc_vdpa_ops.c | 168 ++++++++++++++++++++++++++++++++++++++--
 drivers/vdpa/sfc/sfc_vdpa_ops.h |   2 +
 2 files changed, 164 insertions(+), 6 deletions(-)
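For context (illustrative only, not part of this patch): the offset and size
reported by get_notify_area describe the virtqueue doorbell region of the
device BAR exposed through VFIO. A consumer such as the vhost library can map
that region through the fd returned by the get_vfio_device_fd op so that guest
notify writes reach the device directly. A minimal sketch, assuming plain
POSIX mmap and a hypothetical helper map_notify_area() that is not part of
DPDK or this driver:

#include <stdint.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/types.h>

static void *
map_notify_area(int vfio_dev_fd, uint64_t offset, uint64_t size)
{
	/* Map the doorbell page; writes to it then go straight to the NIC. */
	void *addr = mmap(NULL, (size_t)size, PROT_READ | PROT_WRITE,
			  MAP_SHARED, vfio_dev_fd, (off_t)offset);

	if (addr == MAP_FAILED) {
		perror("mmap notify area");
		return NULL;
	}
	return addr;
}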
diff --git a/drivers/vdpa/sfc/sfc_vdpa_ops.c b/drivers/vdpa/sfc/sfc_vdpa_ops.c
index de1c81a..774d73e 100644
--- a/drivers/vdpa/sfc/sfc_vdpa_ops.c
+++ b/drivers/vdpa/sfc/sfc_vdpa_ops.c
@@ -3,6 +3,8 @@
  * Copyright(c) 2020-2021 Xilinx, Inc.
  */
 
+#include <pthread.h>
+#include <unistd.h>
 #include <sys/ioctl.h>
 
 #include <rte_errno.h>
@@ -537,6 +539,67 @@
 	return 0;
 }
 
+static void *
+sfc_vdpa_notify_ctrl(void *arg)
+{
+	struct sfc_vdpa_ops_data *ops_data;
+	int vid;
+
+	ops_data = arg;
+	if (ops_data == NULL)
+		return NULL;
+
+	sfc_vdpa_adapter_lock(ops_data->dev_handle);
+
+	vid = ops_data->vid;
+
+	if (rte_vhost_host_notifier_ctrl(vid, RTE_VHOST_QUEUE_ALL, true) != 0)
+		sfc_vdpa_info(ops_data->dev_handle,
+			      "vDPA (%s): Notifier could not get configured",
+			      ops_data->vdpa_dev->device->name);
+
+	sfc_vdpa_adapter_unlock(ops_data->dev_handle);
+
+	return NULL;
+}
+
+static int
+sfc_vdpa_setup_notify_ctrl(int vid)
+{
+	int ret;
+	struct rte_vdpa_device *vdpa_dev;
+	struct sfc_vdpa_ops_data *ops_data;
+
+	vdpa_dev = rte_vhost_get_vdpa_device(vid);
+
+	ops_data = sfc_vdpa_get_data_by_dev(vdpa_dev);
+	if (ops_data == NULL) {
+		sfc_vdpa_err(ops_data->dev_handle,
+			     "invalid vDPA device : %p, vid : %d",
+			     vdpa_dev, vid);
+		return -1;
+	}
+
+	ops_data->is_notify_thread_started = false;
+
+	/*
+	 * Use rte_vhost_host_notifier_ctrl in a thread to avoid
+	 * dead lock scenario when multiple VFs are used in single vdpa
+	 * application and multiple VFs are passed to a single VM.
+	 */
+	ret = pthread_create(&ops_data->notify_tid, NULL,
+			     sfc_vdpa_notify_ctrl, ops_data);
+	if (ret != 0) {
+		sfc_vdpa_err(ops_data->dev_handle,
+			     "failed to create notify_ctrl thread: %s",
+			     rte_strerror(ret));
+		return -1;
+	}
+	ops_data->is_notify_thread_started = true;
+
+	return 0;
+}
+
 static int
 sfc_vdpa_dev_config(int vid)
 {
@@ -570,18 +633,19 @@
 	if (rc != 0)
 		goto fail_vdpa_start;
 
-	sfc_vdpa_adapter_unlock(ops_data->dev_handle);
+	rc = sfc_vdpa_setup_notify_ctrl(vid);
+	if (rc != 0)
+		goto fail_vdpa_notify;
 
-	sfc_vdpa_log_init(ops_data->dev_handle, "vhost notifier ctrl");
-	if (rte_vhost_host_notifier_ctrl(vid, RTE_VHOST_QUEUE_ALL, true) != 0)
-		sfc_vdpa_info(ops_data->dev_handle,
-			      "vDPA (%s): software relay for notify is used.",
-			      vdpa_dev->device->name);
+	sfc_vdpa_adapter_unlock(ops_data->dev_handle);
 
 	sfc_vdpa_log_init(ops_data->dev_handle, "done");
 
 	return 0;
 
+fail_vdpa_notify:
+	sfc_vdpa_stop(ops_data);
+
 fail_vdpa_start:
 	sfc_vdpa_close(ops_data);
 
@@ -594,6 +658,7 @@
 static int
 sfc_vdpa_dev_close(int vid)
 {
+	int ret;
 	struct rte_vdpa_device *vdpa_dev;
 	struct sfc_vdpa_ops_data *ops_data;
 
@@ -608,6 +673,23 @@
 	}
 
 	sfc_vdpa_adapter_lock(ops_data->dev_handle);
+	if (ops_data->is_notify_thread_started == true) {
+		void *status;
+		ret = pthread_cancel(ops_data->notify_tid);
+		if (ret != 0) {
+			sfc_vdpa_err(ops_data->dev_handle,
+				     "failed to cancel notify_ctrl thread: %s",
+				     rte_strerror(ret));
+		}
+
+		ret = pthread_join(ops_data->notify_tid, &status);
+		if (ret != 0) {
+			sfc_vdpa_err(ops_data->dev_handle,
+				     "failed to join terminated notify_ctrl thread: %s",
+				     rte_strerror(ret));
+		}
+	}
+	ops_data->is_notify_thread_started = false;
 
 	sfc_vdpa_stop(ops_data);
 	sfc_vdpa_close(ops_data);
@@ -658,6 +740,79 @@
 	return vfio_dev_fd;
 }
 
+static int
+sfc_vdpa_get_notify_area(int vid, int qid, uint64_t *offset, uint64_t *size)
+{
+	int ret;
+	efx_nic_t *nic;
+	int vfio_dev_fd;
+	efx_rc_t rc;
+	unsigned int bar_offset;
+	struct rte_vdpa_device *vdpa_dev;
+	struct sfc_vdpa_ops_data *ops_data;
+	struct vfio_region_info reg = { .argsz = sizeof(reg) };
+	const efx_nic_cfg_t *encp;
+	int max_vring_cnt;
+	int64_t len;
+	void *dev;
+
+	vdpa_dev = rte_vhost_get_vdpa_device(vid);
+
+	ops_data = sfc_vdpa_get_data_by_dev(vdpa_dev);
+	if (ops_data == NULL)
+		return -1;
+
+	dev = ops_data->dev_handle;
+
+	vfio_dev_fd = sfc_vdpa_adapter_by_dev_handle(dev)->vfio_dev_fd;
+	max_vring_cnt =
+		(sfc_vdpa_adapter_by_dev_handle(dev)->max_queue_count * 2);
+
+	nic = sfc_vdpa_adapter_by_dev_handle(ops_data->dev_handle)->nic;
+	encp = efx_nic_cfg_get(nic);
+
+	if (qid >= max_vring_cnt) {
+		sfc_vdpa_err(dev, "invalid qid : %d", qid);
+		return -1;
+	}
+
+	if (ops_data->vq_cxt[qid].enable != B_TRUE) {
+		sfc_vdpa_err(dev, "vq is not enabled");
+		return -1;
+	}
+
+	rc = efx_virtio_get_doorbell_offset(ops_data->vq_cxt[qid].vq,
+					    &bar_offset);
+	if (rc != 0) {
+		sfc_vdpa_err(dev, "failed to get doorbell offset: %s",
+			     rte_strerror(rc));
+		return rc;
+	}
+
+	reg.index = sfc_vdpa_adapter_by_dev_handle(dev)->mem_bar.esb_rid;
+	ret = ioctl(vfio_dev_fd, VFIO_DEVICE_GET_REGION_INFO, &reg);
+	if (ret != 0) {
+		sfc_vdpa_err(dev, "could not get device region info: %s",
+			     strerror(errno));
+		return ret;
+	}
+
+	*offset = reg.offset + bar_offset;
+
+	len = (1U << encp->enc_vi_window_shift) / 2;
+	if (len >= sysconf(_SC_PAGESIZE)) {
+		*size = sysconf(_SC_PAGESIZE);
+	} else {
+		sfc_vdpa_err(dev, "invalid VI window size : 0x%" PRIx64, len);
+		return -1;
+	}
+
+	sfc_vdpa_info(dev, "vDPA ops get_notify_area :: offset : 0x%" PRIx64,
+		      *offset);
+
+
+	return 0;
+}
+
 static struct rte_vdpa_dev_ops sfc_vdpa_ops = {
 	.get_queue_num = sfc_vdpa_get_queue_num,
 	.get_features = sfc_vdpa_get_features,
@@ -667,6 +822,7 @@
 	.set_vring_state = sfc_vdpa_set_vring_state,
 	.set_features = sfc_vdpa_set_features,
 	.get_vfio_device_fd = sfc_vdpa_get_vfio_device_fd,
+	.get_notify_area = sfc_vdpa_get_notify_area,
 };
 
 struct sfc_vdpa_ops_data *
diff --git a/drivers/vdpa/sfc/sfc_vdpa_ops.h b/drivers/vdpa/sfc/sfc_vdpa_ops.h
index 8d553c5..f7523ef 100644
--- a/drivers/vdpa/sfc/sfc_vdpa_ops.h
+++ b/drivers/vdpa/sfc/sfc_vdpa_ops.h
@@ -50,6 +50,8 @@ struct sfc_vdpa_ops_data {
 	struct rte_vdpa_device		*vdpa_dev;
 	enum sfc_vdpa_context		vdpa_context;
 	enum sfc_vdpa_state		state;
+	pthread_t			notify_tid;
+	bool				is_notify_thread_started;
 
 	uint64_t			dev_features;
 	uint64_t			drv_features;
-- 
1.8.3.1
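For context (illustrative only, not part of this patch): the patch defers
rte_vhost_host_notifier_ctrl() to a helper thread so that the device-config
path does not have to wait for the notifier setup; the in-code comment notes
this avoids a deadlock when multiple VFs of a single vdpa application are
passed to one VM. A minimal sketch of that create-on-config and
cancel-and-join-on-close lifecycle, using hypothetical names (notify_ctx,
notify_start, notify_stop) that do not exist in the driver:

#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

/* Hypothetical stand-in for the per-device state kept by the driver. */
struct notify_ctx {
	pthread_t tid;
	bool thread_started;
};

/* Stand-in for the deferred work (e.g. host notifier configuration). */
static void *
notify_thread(void *arg)
{
	(void)arg;
	/* take the adapter lock, configure notifiers, release the lock */
	return NULL;
}

/* Device-config path: start the helper thread and remember that it runs. */
static int
notify_start(struct notify_ctx *ctx)
{
	ctx->thread_started = false;
	if (pthread_create(&ctx->tid, NULL, notify_thread, ctx) != 0)
		return -1;
	ctx->thread_started = true;
	return 0;
}

/* Device-close path: cancel and join the thread before tearing down. */
static void
notify_stop(struct notify_ctx *ctx)
{
	void *status;

	if (!ctx->thread_started)
		return;
	if (pthread_cancel(ctx->tid) != 0)
		fprintf(stderr, "failed to cancel notify thread\n");
	if (pthread_join(ctx->tid, &status) != 0)
		fprintf(stderr, "failed to join notify thread\n");
	ctx->thread_started = false;
}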