From: Vijay Srivastava
To: dev@dpdk.org
CC: maxime.coquelin@redhat.com, chenbo.xia@intel.com,
 andrew.rybchenko@oktetlabs.ru, Vijay Kumar Srivastava
Date: Tue, 6 Jul 2021 22:14:15 +0530
Message-ID: <20210706164418.32615-8-vsrivast@xilinx.com>
X-Mailer: git-send-email 2.25.0
In-Reply-To: <20210706164418.32615-1-vsrivast@xilinx.com>
References: <20210706164418.32615-1-vsrivast@xilinx.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain
Subject: [dpdk-dev] [PATCH 07/10] vdpa/sfc: add support to get queue notify area info

From: Vijay Kumar Srivastava

Implement the vDPA ops get_notify_area to get the notify area
info of the queue.

Signed-off-by: Vijay Kumar Srivastava
---
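Note for reviewers (annotation below the "---" marker, not part of the
commit): a minimal sketch of how a vhost-user frontend could consume the
offset/size pair this op reports, by mapping the doorbell page through
the VFIO device fd. The helper name map_notify_area() is hypothetical
and not part of this patch; vfio_dev_fd is assumed to come from the
driver's get_vfio_device_fd op.

#include <stdint.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

/*
 * Illustrative only: map the per-queue notify (doorbell) area described
 * by get_notify_area(). On success the op reports exactly one page, but
 * the offset is not guaranteed to be page aligned, so align it down
 * before handing it to mmap().
 */
static void *
map_notify_area(int vfio_dev_fd, uint64_t offset, uint64_t size)
{
	long page = sysconf(_SC_PAGESIZE);
	uint64_t pgoff = offset & ~((uint64_t)page - 1);
	uint8_t *base;

	base = mmap(NULL, size + (offset - pgoff), PROT_READ | PROT_WRITE,
		    MAP_SHARED, vfio_dev_fd, (off_t)pgoff);
	if (base == MAP_FAILED) {
		perror("mmap notify area");
		return NULL;
	}
	/* The doorbell register lives at the sub-page remainder. */
	return base + (offset - pgoff);
}

The op deliberately fails when the doorbell window (half of the VI
window, see the enc_vi_window_shift check) is smaller than one page,
since the notify area must be mappable with page granularity; in that
case vhost keeps using the software notification relay.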
 drivers/vdpa/sfc/sfc_vdpa_ops.c | 166 ++++++++++++++++++++++++++++++++++++++--
 drivers/vdpa/sfc/sfc_vdpa_ops.h |   2 +
 2 files changed, 162 insertions(+), 6 deletions(-)

diff --git a/drivers/vdpa/sfc/sfc_vdpa_ops.c b/drivers/vdpa/sfc/sfc_vdpa_ops.c
index 4228044..a7b9085 100644
--- a/drivers/vdpa/sfc/sfc_vdpa_ops.c
+++ b/drivers/vdpa/sfc/sfc_vdpa_ops.c
@@ -3,6 +3,8 @@
  * Copyright(c) 2020-2021 Xilinx, Inc.
  */
 
+#include <pthread.h>
+#include <unistd.h>
 #include <sys/ioctl.h>
 
 #include <rte_errno.h>
@@ -534,6 +536,67 @@
 	return 0;
 }
 
+static void *
+sfc_vdpa_notify_ctrl(void *arg)
+{
+	struct sfc_vdpa_ops_data *ops_data;
+	int vid;
+
+	ops_data = arg;
+	if (ops_data == NULL)
+		return NULL;
+
+	sfc_vdpa_adapter_lock(ops_data->dev_handle);
+
+	vid = ops_data->vid;
+
+	if (rte_vhost_host_notifier_ctrl(vid, RTE_VHOST_QUEUE_ALL, true) != 0)
+		sfc_vdpa_info(ops_data->dev_handle,
+			      "vDPA (%s): notifier could not be configured",
+			      ops_data->vdpa_dev->device->name);
+
+	sfc_vdpa_adapter_unlock(ops_data->dev_handle);
+
+	return NULL;
+}
+
+static int
+sfc_vdpa_setup_notify_ctrl(int vid)
+{
+	int ret;
+	struct rte_vdpa_device *vdpa_dev;
+	struct sfc_vdpa_ops_data *ops_data;
+
+	vdpa_dev = rte_vhost_get_vdpa_device(vid);
+
+	ops_data = sfc_vdpa_get_data_by_dev(vdpa_dev);
+	if (ops_data == NULL) {
+		/* ops_data is NULL here, so log via the driver-wide logger */
+		SFC_VDPA_GENERIC_LOG(ERR, "invalid vDPA device : %p, vid : %d",
+				     vdpa_dev, vid);
+		return -1;
+	}
+
+	ops_data->is_notify_thread_started = false;
+
+	/*
+	 * Call rte_vhost_host_notifier_ctrl() from a separate thread to
+	 * avoid a deadlock when multiple VFs are used in a single vDPA
+	 * application and are passed through to a single VM.
+	 */
+	ret = pthread_create(&ops_data->notify_tid, NULL,
+			     sfc_vdpa_notify_ctrl, ops_data);
+	if (ret != 0) {
+		sfc_vdpa_err(ops_data->dev_handle,
+			     "failed to create notify_ctrl thread: %s",
+			     rte_strerror(ret));
+		return -1;
+	}
+	ops_data->is_notify_thread_started = true;
+
+	return 0;
+}
+
 static int
 sfc_vdpa_dev_config(int vid)
 {
@@ -567,18 +630,19 @@
 	if (rc != 0)
 		goto fail_vdpa_start;
 
-	sfc_vdpa_adapter_unlock(ops_data->dev_handle);
+	rc = sfc_vdpa_setup_notify_ctrl(vid);
+	if (rc != 0)
+		goto fail_vdpa_notify;
 
-	sfc_vdpa_log_init(ops_data->dev_handle, "vhost notifier ctrl");
-	if (rte_vhost_host_notifier_ctrl(vid, RTE_VHOST_QUEUE_ALL, true) != 0)
-		sfc_vdpa_info(ops_data->dev_handle,
-			      "vDPA (%s): software relay for notify is used.",
-			      vdpa_dev->device->name);
+	sfc_vdpa_adapter_unlock(ops_data->dev_handle);
 
 	sfc_vdpa_log_init(ops_data->dev_handle, "done");
 
 	return 0;
 
+fail_vdpa_notify:
+	sfc_vdpa_stop(ops_data);
+
 fail_vdpa_start:
 	sfc_vdpa_close(ops_data);
 
@@ -591,6 +655,7 @@
 static int
 sfc_vdpa_dev_close(int vid)
 {
+	int ret;
 	struct rte_vdpa_device *vdpa_dev;
 	struct sfc_vdpa_ops_data *ops_data;
 
@@ -605,6 +670,23 @@
 	}
 
 	sfc_vdpa_adapter_lock(ops_data->dev_handle);
+	if (ops_data->is_notify_thread_started == true) {
+		void *status;
+		ret = pthread_cancel(ops_data->notify_tid);
+		if (ret != 0) {
+			sfc_vdpa_err(ops_data->dev_handle,
+				     "failed to cancel notify_ctrl thread: %s",
+				     rte_strerror(ret));
+		}
+
+		ret = pthread_join(ops_data->notify_tid, &status);
+		if (ret != 0) {
+			sfc_vdpa_err(ops_data->dev_handle,
+				     "failed to join terminated notify_ctrl thread: %s",
+				     rte_strerror(ret));
+		}
+	}
+	ops_data->is_notify_thread_started = false;
 
 	sfc_vdpa_stop(ops_data);
 	sfc_vdpa_close(ops_data);
@@ -655,6 +737,77 @@
 	return vfio_dev_fd;
 }
 
+static int
+sfc_vdpa_get_notify_area(int vid, int qid, uint64_t *offset, uint64_t *size)
+{
+	int ret;
+	efx_nic_t *nic;
+	int vfio_dev_fd;
+	efx_rc_t rc;
+	unsigned int bar_offset;
+	struct rte_vdpa_device *vdpa_dev;
+	struct sfc_vdpa_ops_data *ops_data;
+	struct vfio_region_info reg = { .argsz = sizeof(reg) };
+	const efx_nic_cfg_t *encp;
+	int max_vring_cnt;
+	int64_t len;
+	void *dev;
+
+	vdpa_dev = rte_vhost_get_vdpa_device(vid);
+
+	ops_data = sfc_vdpa_get_data_by_dev(vdpa_dev);
+	if (ops_data == NULL)
+		return -1;
+
+	dev = ops_data->dev_handle;
+
+	vfio_dev_fd = sfc_vdpa_adapter_by_dev_handle(dev)->vfio_dev_fd;
+	max_vring_cnt =
+		(sfc_vdpa_adapter_by_dev_handle(dev)->max_queue_count * 2);
+
+	nic = sfc_vdpa_adapter_by_dev_handle(ops_data->dev_handle)->nic;
+	encp = efx_nic_cfg_get(nic);
+
+	if (qid >= max_vring_cnt) {
+		sfc_vdpa_err(dev, "invalid qid : %d", qid);
+		return -1;
+	}
+
+	if (ops_data->vq_cxt[qid].enable != B_TRUE) {
+		sfc_vdpa_err(dev, "vq is not enabled");
+		return -1;
+	}
+
+	rc = efx_virtio_get_doorbell_offset(ops_data->vq_cxt[qid].vq,
+					    &bar_offset);
+	if (rc != 0) {
+		sfc_vdpa_err(dev, "failed to get doorbell offset: %s",
+			     rte_strerror(rc));
+		return rc;
+	}
+
+	reg.index = sfc_vdpa_adapter_by_dev_handle(dev)->mem_bar.esb_rid;
+	ret = ioctl(vfio_dev_fd, VFIO_DEVICE_GET_REGION_INFO, &reg);
+	if (ret != 0) {
+		sfc_vdpa_err(dev, "could not get device region info: %s",
+			     strerror(errno));
+		return ret;
+	}
+
+	*offset = reg.offset + bar_offset;
+
+	len = (1U << encp->enc_vi_window_shift) / 2;
+	if (len >= sysconf(_SC_PAGESIZE))
+		*size = sysconf(_SC_PAGESIZE);
+	else
+		return -1;
+
+	sfc_vdpa_info(dev, "vDPA ops get_notify_area :: offset : 0x%" PRIx64,
+		      *offset);
+
+	return 0;
+}
+
 static struct rte_vdpa_dev_ops sfc_vdpa_ops = {
 	.get_queue_num = sfc_vdpa_get_queue_num,
 	.get_features = sfc_vdpa_get_features,
@@ -664,6 +817,7 @@
 	.set_vring_state = sfc_vdpa_set_vring_state,
 	.set_features = sfc_vdpa_set_features,
 	.get_vfio_device_fd = sfc_vdpa_get_vfio_device_fd,
+	.get_notify_area = sfc_vdpa_get_notify_area,
 };
 
 struct sfc_vdpa_ops_data *
diff --git a/drivers/vdpa/sfc/sfc_vdpa_ops.h b/drivers/vdpa/sfc/sfc_vdpa_ops.h
index 8d553c5..f7523ef 100644
--- a/drivers/vdpa/sfc/sfc_vdpa_ops.h
+++ b/drivers/vdpa/sfc/sfc_vdpa_ops.h
@@ -50,6 +50,8 @@ struct sfc_vdpa_ops_data {
 	struct rte_vdpa_device *vdpa_dev;
 	enum sfc_vdpa_context vdpa_context;
 	enum sfc_vdpa_state state;
+	pthread_t notify_tid;
+	bool is_notify_thread_started;
 
 	uint64_t dev_features;
 	uint64_t drv_features;
-- 
1.8.3.1