From: Vijay Srivastava
To: dev@dpdk.org
Cc: maxime.coquelin@redhat.com, chenbo.xia@intel.com,
	andrew.rybchenko@oktetlabs.ru, Vijay Kumar Srivastava
Date: Thu, 28 Oct 2021 13:24:44 +0530
Message-ID: <20211028075452.11804-3-vsrivast@xilinx.com>
X-Mailer: git-send-email 2.25.0
In-Reply-To: <20211028075452.11804-1-vsrivast@xilinx.com>
References: <20210706164418.32615-1-vsrivast@xilinx.com> <20211028075452.11804-1-vsrivast@xilinx.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain
Subject: [dpdk-dev] [PATCH v2 02/10] vdpa/sfc: add support for device initialization

From: Vijay Kumar Srivastava

Add HW initialization and vDPA device registration support.

Signed-off-by: Vijay Kumar Srivastava
---
v2:
* Used rte_memzone_reserve_aligned() for the MCDI buffer allocation.
* Freed the MCDI buffer when the DMA mapping fails.
* Fixed one typo.

 doc/guides/vdpadevs/sfc.rst       |   6 +
 drivers/vdpa/sfc/meson.build      |   3 +
 drivers/vdpa/sfc/sfc_vdpa.c       |  23 +++
 drivers/vdpa/sfc/sfc_vdpa.h       |  49 +++++-
 drivers/vdpa/sfc/sfc_vdpa_debug.h |  21 +++
 drivers/vdpa/sfc/sfc_vdpa_hw.c    | 327 ++++++++++++++++++++++++++++++++++++++
 drivers/vdpa/sfc/sfc_vdpa_log.h   |   3 +
 drivers/vdpa/sfc/sfc_vdpa_mcdi.c  |  74 +++++++++
 drivers/vdpa/sfc/sfc_vdpa_ops.c   | 129 +++++++++++++++
 drivers/vdpa/sfc/sfc_vdpa_ops.h   |  36 +++++
 10 files changed, 670 insertions(+), 1 deletion(-)
 create mode 100644 drivers/vdpa/sfc/sfc_vdpa_debug.h
 create mode 100644 drivers/vdpa/sfc/sfc_vdpa_hw.c
 create mode 100644 drivers/vdpa/sfc/sfc_vdpa_mcdi.c
 create mode 100644 drivers/vdpa/sfc/sfc_vdpa_ops.c
 create mode 100644 drivers/vdpa/sfc/sfc_vdpa_ops.h

diff --git a/doc/guides/vdpadevs/sfc.rst b/doc/guides/vdpadevs/sfc.rst
index 59f990b..abb5900 100644
--- a/doc/guides/vdpadevs/sfc.rst
+++ b/doc/guides/vdpadevs/sfc.rst
@@ -95,3 +95,9 @@ SFC vDPA PMD provides the following log types available for control:
   Matches a subset of per-port log types registered during runtime.
   A full name for a particular type may be obtained by appending a dot and
   a PCI device identifier (``XXXX:XX:XX.X``) to the prefix.
+
+- ``pmd.vdpa.sfc.mcdi`` (default level is **notice**)
+
+  Extra logging of the communication with the NIC's management CPU.
+  The format of the log is consumed by the netlogdecode cross-platform
+  tool. May be managed per-port, as explained above.
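
As an aside (not part of the patch): such a per-device log type would be
controlled with the standard EAL --log-level option; for example, with a
placeholder PCI address:

    dpdk-testpmd --log-level=pmd.vdpa.sfc.mcdi.0000:3b:00.1,debug ...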
diff --git a/drivers/vdpa/sfc/meson.build b/drivers/vdpa/sfc/meson.build
index d916389..aac7c51 100644
--- a/drivers/vdpa/sfc/meson.build
+++ b/drivers/vdpa/sfc/meson.build
@@ -30,4 +30,7 @@ endforeach
 deps += ['common_sfc_efx', 'bus_pci']
 sources = files(
         'sfc_vdpa.c',
+        'sfc_vdpa_hw.c',
+        'sfc_vdpa_mcdi.c',
+        'sfc_vdpa_ops.c',
 )
diff --git a/drivers/vdpa/sfc/sfc_vdpa.c b/drivers/vdpa/sfc/sfc_vdpa.c
index a6e1a9e..00fa94a 100644
--- a/drivers/vdpa/sfc/sfc_vdpa.c
+++ b/drivers/vdpa/sfc/sfc_vdpa.c
@@ -232,6 +232,19 @@ struct sfc_vdpa_adapter *
                goto fail_vfio_setup;
        }

+       sfc_vdpa_log_init(sva, "hw init");
+       if (sfc_vdpa_hw_init(sva) != 0) {
+               sfc_vdpa_err(sva, "failed to init HW %s", pci_dev->name);
+               goto fail_hw_init;
+       }
+
+       sfc_vdpa_log_init(sva, "dev init");
+       sva->ops_data = sfc_vdpa_device_init(sva, SFC_VDPA_AS_VF);
+       if (sva->ops_data == NULL) {
+               sfc_vdpa_err(sva, "failed vDPA dev init %s", pci_dev->name);
+               goto fail_dev_init;
+       }
+
        pthread_mutex_lock(&sfc_vdpa_adapter_list_lock);
        TAILQ_INSERT_TAIL(&sfc_vdpa_adapter_list, sva, next);
        pthread_mutex_unlock(&sfc_vdpa_adapter_list_lock);
@@ -240,6 +253,12 @@ struct sfc_vdpa_adapter *

        return 0;

+fail_dev_init:
+       sfc_vdpa_hw_fini(sva);
+
+fail_hw_init:
+       sfc_vdpa_vfio_teardown(sva);
+
 fail_vfio_setup:
 fail_set_log_prefix:
        rte_free(sva);
@@ -266,6 +285,10 @@ struct sfc_vdpa_adapter *
        TAILQ_REMOVE(&sfc_vdpa_adapter_list, sva, next);
        pthread_mutex_unlock(&sfc_vdpa_adapter_list_lock);

+       sfc_vdpa_device_fini(sva->ops_data);
+
+       sfc_vdpa_hw_fini(sva);
+
        sfc_vdpa_vfio_teardown(sva);

        rte_free(sva);
diff --git a/drivers/vdpa/sfc/sfc_vdpa.h b/drivers/vdpa/sfc/sfc_vdpa.h
index 3b77900..046f25d 100644
--- a/drivers/vdpa/sfc/sfc_vdpa.h
+++ b/drivers/vdpa/sfc/sfc_vdpa.h
@@ -11,14 +11,38 @@

 #include <rte_bus_pci.h>

+#include "sfc_efx.h"
+#include "sfc_efx_mcdi.h"
+#include "sfc_vdpa_debug.h"
 #include "sfc_vdpa_log.h"
+#include "sfc_vdpa_ops.h"
+
+#define SFC_VDPA_DEFAULT_MCDI_IOVA      0x200000000000

 /* Adapter private data */
 struct sfc_vdpa_adapter {
        TAILQ_ENTRY(sfc_vdpa_adapter) next;
+       /*
+        * PMD setup and configuration is not thread safe. Since it is not
+        * performance sensitive, it is better to guarantee thread-safety
+        * and add a device-level lock. vDPA control operations which
+        * change its state should acquire the lock.
+        */
+       rte_spinlock_t                  lock;
        struct rte_pci_device           *pdev;
        struct rte_pci_addr             pci_addr;

+       efx_family_t                    family;
+       efx_nic_t                       *nic;
+       rte_spinlock_t                  nic_lock;
+
+       efsys_bar_t                     mem_bar;
+
+       struct sfc_efx_mcdi             mcdi;
+       size_t                          mcdi_buff_size;
+
+       uint32_t                        max_queue_count;
+
        char                            log_prefix[SFC_VDPA_LOG_PREFIX_MAX];
        uint32_t                        logtype_main;

@@ -26,6 +50,7 @@ struct sfc_vdpa_adapter {
        int                             vfio_dev_fd;
        int                             vfio_container_fd;
        int                             iommu_group_num;
+       struct sfc_vdpa_ops_data        *ops_data;
 };

 uint32_t
@@ -36,5 +61,27 @@ struct sfc_vdpa_adapter {
 struct sfc_vdpa_adapter *
 sfc_vdpa_get_adapter_by_dev(struct rte_pci_device *pdev);

-#endif  /* _SFC_VDPA_H */
+int
+sfc_vdpa_hw_init(struct sfc_vdpa_adapter *sva);
+void
+sfc_vdpa_hw_fini(struct sfc_vdpa_adapter *sva);
+int
+sfc_vdpa_mcdi_init(struct sfc_vdpa_adapter *sva);
+void
+sfc_vdpa_mcdi_fini(struct sfc_vdpa_adapter *sva);
+
+int
+sfc_vdpa_dma_alloc(struct sfc_vdpa_adapter *sva, const char *name,
+                  size_t len, efsys_mem_t *esmp);
+
+void
+sfc_vdpa_dma_free(struct sfc_vdpa_adapter *sva, efsys_mem_t *esmp);
+
+static inline struct sfc_vdpa_adapter *
+sfc_vdpa_adapter_by_dev_handle(void *dev_handle)
+{
+       return (struct sfc_vdpa_adapter *)dev_handle;
+}
+
+#endif  /* _SFC_VDPA_H */
diff --git a/drivers/vdpa/sfc/sfc_vdpa_debug.h b/drivers/vdpa/sfc/sfc_vdpa_debug.h
new file mode 100644
index 0000000..cfa8cc5
--- /dev/null
+++ b/drivers/vdpa/sfc/sfc_vdpa_debug.h
@@ -0,0 +1,21 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ *
+ * Copyright(c) 2020-2021 Xilinx, Inc.
+ */
+
+#ifndef _SFC_VDPA_DEBUG_H_
+#define _SFC_VDPA_DEBUG_H_
+
+#include <rte_debug.h>
+
+#ifdef RTE_LIBRTE_SFC_VDPA_DEBUG
+/* Avoid a dependency on RTE_LOG_DP_LEVEL so that the debug checks can be
+ * enabled in this driver only.
+ */
+#define SFC_VDPA_ASSERT(exp)            RTE_VERIFY(exp)
+#else
+/* If driver debug is not enabled, follow the DPDK debug/non-debug setting */
+#define SFC_VDPA_ASSERT(exp)            RTE_ASSERT(exp)
+#endif
+
+#endif /* _SFC_VDPA_DEBUG_H_ */
diff --git a/drivers/vdpa/sfc/sfc_vdpa_hw.c b/drivers/vdpa/sfc/sfc_vdpa_hw.c
new file mode 100644
index 0000000..7c256ff
--- /dev/null
+++ b/drivers/vdpa/sfc/sfc_vdpa_hw.c
@@ -0,0 +1,327 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ *
+ * Copyright(c) 2020-2021 Xilinx, Inc.
+ */
+
+#include <unistd.h>
+
+#include <rte_common.h>
+#include <rte_errno.h>
+#include <rte_vfio.h>
+
+#include "efx.h"
+#include "sfc_vdpa.h"
+#include "sfc_vdpa_ops.h"
+
+extern uint32_t sfc_logtype_driver;
+
+#ifndef PAGE_SIZE
+#define PAGE_SIZE       (sysconf(_SC_PAGESIZE))
+#endif
+
+int
+sfc_vdpa_dma_alloc(struct sfc_vdpa_adapter *sva, const char *name,
+                  size_t len, efsys_mem_t *esmp)
+{
+       uint64_t mcdi_iova;
+       size_t mcdi_buff_size;
+       const struct rte_memzone *mz = NULL;
+       int numa_node = sva->pdev->device.numa_node;
+       int ret;
+
+       mcdi_buff_size = RTE_ALIGN_CEIL(len, PAGE_SIZE);
+
+       sfc_vdpa_log_init(sva, "name=%s, len=%zu", name, len);
+
+       mz = rte_memzone_reserve_aligned(name, mcdi_buff_size,
+                                        numa_node,
+                                        RTE_MEMZONE_IOVA_CONTIG,
+                                        PAGE_SIZE);
+       if (mz == NULL) {
+               sfc_vdpa_err(sva, "cannot reserve memory for %s: len=%#x: %s",
+                            name, (unsigned int)len, rte_strerror(rte_errno));
+               return -ENOMEM;
+       }
+
+       /* The IOVA address for the MCDI buffer is re-calculated if the
+        * mapping at the default IOVA fails.
+        * TODO: Earlier there was no way to get a valid IOVA range.
+        * Recently a patch was submitted to get the IOVA range via the
+        * VFIO_IOMMU_GET_INFO ioctl; it is available in kernel versions
+        * >= 5.4. Support to derive the default IOVA address for the
+        * MCDI buffer from the available IOVA range will be added later.
+        * Meanwhile, the default IOVA for the MCDI buffer is kept in
+        * high memory at 0x200000000000 (32 TiB). In case of an overlap,
+        * a new available address is searched for and used instead.
+        */
+       mcdi_iova = SFC_VDPA_DEFAULT_MCDI_IOVA;
+
+       do {
+               ret = rte_vfio_container_dma_map(sva->vfio_container_fd,
+                                                (uint64_t)mz->addr, mcdi_iova,
+                                                mcdi_buff_size);
+               if (ret == 0)
+                       break;
+
+               mcdi_iova = mcdi_iova >> 1;
+               if (mcdi_iova < mcdi_buff_size) {
+                       sfc_vdpa_err(sva,
+                                    "DMA mapping failed for MCDI : %s",
+                                    rte_strerror(rte_errno));
+                       rte_memzone_free(mz);
+                       return ret;
+               }
+
+       } while (ret < 0);
+
+       esmp->esm_addr = mcdi_iova;
+       /* Keep the memzone so that sfc_vdpa_dma_free() can release it */
+       esmp->esm_mz = mz;
+       esmp->esm_base = mz->addr;
+       sva->mcdi_buff_size = mcdi_buff_size;
+
+       sfc_vdpa_info(sva,
+                     "DMA name=%s len=%zu => virt=%p iova=%" PRIx64,
+                     name, len, esmp->esm_base, esmp->esm_addr);
+
+       return 0;
+}
+
+void
+sfc_vdpa_dma_free(struct sfc_vdpa_adapter *sva, efsys_mem_t *esmp)
+{
+       int ret;
+
+       sfc_vdpa_log_init(sva, "name=%s", esmp->esm_mz->name);
+
+       ret = rte_vfio_container_dma_unmap(sva->vfio_container_fd,
+                                          (uint64_t)esmp->esm_base,
+                                          esmp->esm_addr, sva->mcdi_buff_size);
+       if (ret < 0)
+               sfc_vdpa_err(sva, "DMA unmap failed for MCDI : %s",
+                            rte_strerror(rte_errno));
+
+       sfc_vdpa_info(sva,
+                     "DMA free name=%s => virt=%p iova=%" PRIx64,
+                     esmp->esm_mz->name, esmp->esm_base, esmp->esm_addr);
+
+       /* The buffer was reserved as a memzone, so release it as one */
+       rte_memzone_free(esmp->esm_mz);
+
+       sva->mcdi_buff_size = 0;
+       memset(esmp, 0, sizeof(*esmp));
+}
+
+static int
+sfc_vdpa_mem_bar_init(struct sfc_vdpa_adapter *sva,
+                     const efx_bar_region_t *mem_ebrp)
+{
+       struct rte_pci_device *pci_dev = sva->pdev;
+       efsys_bar_t *ebp = &sva->mem_bar;
+       struct rte_mem_resource *res =
+               &pci_dev->mem_resource[mem_ebrp->ebr_index];
+
+       SFC_BAR_LOCK_INIT(ebp, pci_dev->name);
+       ebp->esb_rid = mem_ebrp->ebr_index;
+       ebp->esb_dev = pci_dev;
+       ebp->esb_base = res->addr;
+
+       return 0;
+}
+
+static void
+sfc_vdpa_mem_bar_fini(struct sfc_vdpa_adapter *sva)
+{
+       efsys_bar_t *ebp = &sva->mem_bar;
+
+       SFC_BAR_LOCK_DESTROY(ebp);
+       memset(ebp, 0, sizeof(*ebp));
+}
+
+static int
+sfc_vdpa_nic_probe(struct sfc_vdpa_adapter *sva)
+{
+       efx_nic_t *enp = sva->nic;
+       int rc;
+
+       rc = efx_nic_probe(enp, EFX_FW_VARIANT_DONT_CARE);
+       if (rc != 0)
+               sfc_vdpa_err(sva, "nic probe failed: %s", rte_strerror(rc));
+
+       return rc;
+}
+
+static int
+sfc_vdpa_estimate_resource_limits(struct sfc_vdpa_adapter *sva)
+{
+       efx_drv_limits_t limits;
+       int rc;
+       uint32_t evq_allocated;
+       uint32_t rxq_allocated;
+       uint32_t txq_allocated;
+       uint32_t max_queue_cnt;
+
+       memset(&limits, 0, sizeof(limits));
+
+       /* Request at least one Rx and Tx queue */
+       limits.edl_min_rxq_count = 1;
+       limits.edl_min_txq_count = 1;
+       /* Management event queue plus an event queue for Tx/Rx queues */
+       limits.edl_min_evq_count =
+               1 + RTE_MAX(limits.edl_min_rxq_count, limits.edl_min_txq_count);
+
+       limits.edl_max_rxq_count = SFC_VDPA_MAX_QUEUE_PAIRS;
+       limits.edl_max_txq_count = SFC_VDPA_MAX_QUEUE_PAIRS;
+       limits.edl_max_evq_count = 1 + SFC_VDPA_MAX_QUEUE_PAIRS;
+
+       SFC_VDPA_ASSERT(limits.edl_max_evq_count >= limits.edl_min_evq_count);
+       SFC_VDPA_ASSERT(limits.edl_max_rxq_count >= limits.edl_min_rxq_count);
+       SFC_VDPA_ASSERT(limits.edl_max_txq_count >= limits.edl_min_txq_count);
+
+       /* Configure the minimum required resources needed for the
+        * driver to operate, and the maximum desired resources that the
+        * driver is capable of using.
+        */
+       sfc_vdpa_log_init(sva, "set drv limit");
+       efx_nic_set_drv_limits(sva->nic, &limits);
+
+       sfc_vdpa_log_init(sva, "init nic");
+       rc = efx_nic_init(sva->nic);
+       if (rc != 0) {
+               sfc_vdpa_err(sva, "nic init failed: %s", rte_strerror(rc));
+               goto fail_nic_init;
+       }
+
+       /* Find the resource dimensions assigned by firmware to this function */
+       rc = efx_nic_get_vi_pool(sva->nic, &evq_allocated, &rxq_allocated,
+                                &txq_allocated);
+       if (rc != 0) {
+               sfc_vdpa_err(sva, "vi pool get failed: %s", rte_strerror(rc));
+               goto fail_get_vi_pool;
+       }
+
+       /* Firmware may still allocate more than the maximum; enforce the limit */
+       evq_allocated = RTE_MIN(evq_allocated, limits.edl_max_evq_count);
+       rxq_allocated = RTE_MIN(rxq_allocated, limits.edl_max_rxq_count);
+       txq_allocated = RTE_MIN(txq_allocated, limits.edl_max_txq_count);
+
+       max_queue_cnt = RTE_MIN(rxq_allocated, txq_allocated);
+       /* Subtract the management EVQ, which is not used for traffic */
+       max_queue_cnt = RTE_MIN(evq_allocated - 1, max_queue_cnt);
+
+       SFC_VDPA_ASSERT(max_queue_cnt > 0);
+
+       sva->max_queue_count = max_queue_cnt;
+
+       return 0;
+
+fail_get_vi_pool:
+       efx_nic_fini(sva->nic);
+fail_nic_init:
+       sfc_vdpa_log_init(sva, "failed: %s", rte_strerror(rc));
+       return rc;
+}
+
+int
+sfc_vdpa_hw_init(struct sfc_vdpa_adapter *sva)
+{
+       efx_bar_region_t mem_ebr;
+       efx_nic_t *enp;
+       int rc;
+
+       sfc_vdpa_log_init(sva, "entry");
+
+       sfc_vdpa_log_init(sva, "get family");
+       rc = sfc_efx_family(sva->pdev, &mem_ebr, &sva->family);
+       if (rc != 0)
+               goto fail_family;
+       sfc_vdpa_log_init(sva,
+                         "family is %u, membar is %u, "
+                         "function control window offset is %#" PRIx64,
+                         sva->family, mem_ebr.ebr_index, mem_ebr.ebr_offset);
+
+       sfc_vdpa_log_init(sva, "init mem bar");
+       rc = sfc_vdpa_mem_bar_init(sva, &mem_ebr);
+       if (rc != 0)
+               goto fail_mem_bar_init;
+
+       sfc_vdpa_log_init(sva, "create nic");
+       rte_spinlock_init(&sva->nic_lock);
+       rc = efx_nic_create(sva->family, (efsys_identifier_t *)sva,
+                           &sva->mem_bar, mem_ebr.ebr_offset,
+                           &sva->nic_lock, &enp);
+       if (rc != 0) {
+               sfc_vdpa_err(sva, "nic create failed: %s", rte_strerror(rc));
+               goto fail_nic_create;
+       }
+       sva->nic = enp;
+
+       sfc_vdpa_log_init(sva, "init mcdi");
+       rc = sfc_vdpa_mcdi_init(sva);
+       if (rc != 0) {
+               sfc_vdpa_err(sva, "mcdi init failed: %s", rte_strerror(rc));
+               goto fail_mcdi_init;
+       }
+
+       sfc_vdpa_log_init(sva, "probe nic");
+       rc = sfc_vdpa_nic_probe(sva);
+       if (rc != 0)
+               goto fail_nic_probe;
+
+       sfc_vdpa_log_init(sva, "reset nic");
+       rc = efx_nic_reset(enp);
+       if (rc != 0) {
+               sfc_vdpa_err(sva, "nic reset failed: %s", rte_strerror(rc));
+               goto fail_nic_reset;
+       }
+
+       sfc_vdpa_log_init(sva, "estimate resource limits");
+       rc = sfc_vdpa_estimate_resource_limits(sva);
+       if (rc != 0)
+               goto fail_estimate_rsrc_limits;
+
+       sfc_vdpa_log_init(sva, "done");
+
+       return 0;
+
+fail_estimate_rsrc_limits:
+fail_nic_reset:
+       efx_nic_unprobe(enp);
+
+fail_nic_probe:
+       sfc_vdpa_mcdi_fini(sva);
+
+fail_mcdi_init:
+       sfc_vdpa_log_init(sva, "destroy nic");
+       sva->nic = NULL;
+       efx_nic_destroy(enp);
+
+fail_nic_create:
+       sfc_vdpa_mem_bar_fini(sva);
+
+fail_mem_bar_init:
+fail_family:
+       sfc_vdpa_log_init(sva, "failed: %s", rte_strerror(rc));
+       return rc;
+}
+
+void
+sfc_vdpa_hw_fini(struct sfc_vdpa_adapter *sva)
+{
+       efx_nic_t *enp = sva->nic;
+
+       sfc_vdpa_log_init(sva, "entry");
+
+       sfc_vdpa_log_init(sva, "unprobe nic");
+       efx_nic_unprobe(enp);
+
+       sfc_vdpa_log_init(sva, "mcdi fini");
+       sfc_vdpa_mcdi_fini(sva);
+
+       sfc_vdpa_log_init(sva, "nic fini");
+       efx_nic_fini(enp);
+
+       sfc_vdpa_log_init(sva, "destroy nic");
+       sva->nic = NULL;
+       efx_nic_destroy(enp);
+
+       sfc_vdpa_mem_bar_fini(sva);
+}
diff --git a/drivers/vdpa/sfc/sfc_vdpa_log.h b/drivers/vdpa/sfc/sfc_vdpa_log.h
index 858e5ee..4e7a84f 100644
--- a/drivers/vdpa/sfc/sfc_vdpa_log.h
+++ b/drivers/vdpa/sfc/sfc_vdpa_log.h
@@ -21,6 +21,9 @@
 /** Name prefix for the per-device log type used to report basic information */
 #define SFC_VDPA_LOGTYPE_MAIN_STR       SFC_VDPA_LOGTYPE_PREFIX "main"

+/** Device MCDI log type name prefix */
+#define SFC_VDPA_LOGTYPE_MCDI_STR       SFC_VDPA_LOGTYPE_PREFIX "mcdi"
+
 #define SFC_VDPA_LOG_PREFIX_MAX 32

 /* Log PMD message, automatically add prefix and \n */
diff --git a/drivers/vdpa/sfc/sfc_vdpa_mcdi.c b/drivers/vdpa/sfc/sfc_vdpa_mcdi.c
new file mode 100644
index 0000000..961d2d3
--- /dev/null
+++ b/drivers/vdpa/sfc/sfc_vdpa_mcdi.c
@@ -0,0 +1,74 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ *
+ * Copyright(c) 2020-2021 Xilinx, Inc.
+ */
+
+#include "sfc_efx_mcdi.h"
+
+#include "sfc_vdpa.h"
+#include "sfc_vdpa_debug.h"
+#include "sfc_vdpa_log.h"
+
+static sfc_efx_mcdi_dma_alloc_cb sfc_vdpa_mcdi_dma_alloc;
+static int
+sfc_vdpa_mcdi_dma_alloc(void *cookie, const char *name, size_t len,
+                       efsys_mem_t *esmp)
+{
+       struct sfc_vdpa_adapter *sva = cookie;
+
+       return sfc_vdpa_dma_alloc(sva, name, len, esmp);
+}
+
+static sfc_efx_mcdi_dma_free_cb sfc_vdpa_mcdi_dma_free;
+static void
+sfc_vdpa_mcdi_dma_free(void *cookie, efsys_mem_t *esmp)
+{
+       struct sfc_vdpa_adapter *sva = cookie;
+
+       sfc_vdpa_dma_free(sva, esmp);
+}
+
+static sfc_efx_mcdi_sched_restart_cb sfc_vdpa_mcdi_sched_restart;
+static void
+sfc_vdpa_mcdi_sched_restart(void *cookie)
+{
+       RTE_SET_USED(cookie);
+}
+
+static sfc_efx_mcdi_mgmt_evq_poll_cb sfc_vdpa_mcdi_mgmt_evq_poll;
+static void
+sfc_vdpa_mcdi_mgmt_evq_poll(void *cookie)
+{
+       RTE_SET_USED(cookie);
+}
+
+static const struct sfc_efx_mcdi_ops sfc_vdpa_mcdi_ops = {
+       .dma_alloc      = sfc_vdpa_mcdi_dma_alloc,
+       .dma_free       = sfc_vdpa_mcdi_dma_free,
+       .sched_restart  = sfc_vdpa_mcdi_sched_restart,
+       .mgmt_evq_poll  = sfc_vdpa_mcdi_mgmt_evq_poll,
+};
+
+int
+sfc_vdpa_mcdi_init(struct sfc_vdpa_adapter *sva)
+{
+       uint32_t logtype;
+
+       sfc_vdpa_log_init(sva, "entry");
+
+       logtype = sfc_vdpa_register_logtype(&(sva->pdev->addr),
+                                           SFC_VDPA_LOGTYPE_MCDI_STR,
+                                           RTE_LOG_NOTICE);
+
+       return sfc_efx_mcdi_init(&sva->mcdi, logtype,
+                                sva->log_prefix, sva->nic,
+                                &sfc_vdpa_mcdi_ops, sva);
+}
+
+void
+sfc_vdpa_mcdi_fini(struct sfc_vdpa_adapter *sva)
+{
+       sfc_vdpa_log_init(sva, "entry");
+       sfc_efx_mcdi_fini(&sva->mcdi);
+}
diff --git a/drivers/vdpa/sfc/sfc_vdpa_ops.c b/drivers/vdpa/sfc/sfc_vdpa_ops.c
new file mode 100644
index 0000000..71696be
--- /dev/null
+++ b/drivers/vdpa/sfc/sfc_vdpa_ops.c
@@ -0,0 +1,129 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ *
+ * Copyright(c) 2020-2021 Xilinx, Inc.
+ */
+
+#include <rte_malloc.h>
+#include <rte_vdpa.h>
+#include <rte_vdpa_dev.h>
+#include <rte_vhost.h>
+
+#include "sfc_vdpa_ops.h"
+#include "sfc_vdpa.h"
+
+/* Dummy functions for mandatory vDPA ops to pass vDPA device registration.
+ * These ops will be implemented in subsequent patches.
+ */
+static int
+sfc_vdpa_get_queue_num(struct rte_vdpa_device *vdpa_dev, uint32_t *queue_num)
+{
+       RTE_SET_USED(vdpa_dev);
+       RTE_SET_USED(queue_num);
+
+       return -1;
+}
+
+static int
+sfc_vdpa_get_features(struct rte_vdpa_device *vdpa_dev, uint64_t *features)
+{
+       RTE_SET_USED(vdpa_dev);
+       RTE_SET_USED(features);
+
+       return -1;
+}
+
+static int
+sfc_vdpa_get_protocol_features(struct rte_vdpa_device *vdpa_dev,
+                              uint64_t *features)
+{
+       RTE_SET_USED(vdpa_dev);
+       RTE_SET_USED(features);
+
+       return -1;
+}
+
+static int
+sfc_vdpa_dev_config(int vid)
+{
+       RTE_SET_USED(vid);
+
+       return -1;
+}
+
+static int
+sfc_vdpa_dev_close(int vid)
+{
+       RTE_SET_USED(vid);
+
+       return -1;
+}
+
+static int
+sfc_vdpa_set_vring_state(int vid, int vring, int state)
+{
+       RTE_SET_USED(vid);
+       RTE_SET_USED(vring);
+       RTE_SET_USED(state);
+
+       return -1;
+}
+
+static int
+sfc_vdpa_set_features(int vid)
+{
+       RTE_SET_USED(vid);
+
+       return -1;
+}
+
+static struct rte_vdpa_dev_ops sfc_vdpa_ops = {
+       .get_queue_num = sfc_vdpa_get_queue_num,
+       .get_features = sfc_vdpa_get_features,
+       .get_protocol_features = sfc_vdpa_get_protocol_features,
+       .dev_conf = sfc_vdpa_dev_config,
+       .dev_close = sfc_vdpa_dev_close,
+       .set_vring_state = sfc_vdpa_set_vring_state,
+       .set_features = sfc_vdpa_set_features,
+};
+
+struct sfc_vdpa_ops_data *
+sfc_vdpa_device_init(void *dev_handle, enum sfc_vdpa_context context)
+{
+       struct sfc_vdpa_ops_data *ops_data;
+       struct rte_pci_device *pci_dev;
+
+       /* Create vDPA ops context */
+       ops_data = rte_zmalloc("vdpa", sizeof(struct sfc_vdpa_ops_data), 0);
+       if (ops_data == NULL)
+               return NULL;
+
+       ops_data->vdpa_context = context;
+       ops_data->dev_handle = dev_handle;
+
+       pci_dev = sfc_vdpa_adapter_by_dev_handle(dev_handle)->pdev;
+
+       /* Register vDPA device */
+       sfc_vdpa_log_init(dev_handle, "register vDPA device");
+       ops_data->vdpa_dev =
+               rte_vdpa_register_device(&pci_dev->device, &sfc_vdpa_ops);
+       if (ops_data->vdpa_dev == NULL) {
+               sfc_vdpa_err(dev_handle, "vDPA device registration failed");
+               goto fail_register_device;
+       }
+
+       ops_data->state = SFC_VDPA_STATE_INITIALIZED;
+
+       return ops_data;
+
+fail_register_device:
+       rte_free(ops_data);
+       return NULL;
+}
+
+void
+sfc_vdpa_device_fini(struct sfc_vdpa_ops_data *ops_data)
+{
+       rte_vdpa_unregister_device(ops_data->vdpa_dev);
+
+       rte_free(ops_data);
+}
diff --git a/drivers/vdpa/sfc/sfc_vdpa_ops.h b/drivers/vdpa/sfc/sfc_vdpa_ops.h
new file mode 100644
index 0000000..817b302
--- /dev/null
+++ b/drivers/vdpa/sfc/sfc_vdpa_ops.h
@@ -0,0 +1,36 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ *
+ * Copyright(c) 2020-2021 Xilinx, Inc.
+ */
+
+#ifndef _SFC_VDPA_OPS_H
+#define _SFC_VDPA_OPS_H
+
+#include <rte_vdpa.h>
+
+#define SFC_VDPA_MAX_QUEUE_PAIRS        1
+
+enum sfc_vdpa_context {
+       SFC_VDPA_AS_PF = 0,
+       SFC_VDPA_AS_VF
+};
+
+enum sfc_vdpa_state {
+       SFC_VDPA_STATE_UNINITIALIZED = 0,
+       SFC_VDPA_STATE_INITIALIZED,
+       SFC_VDPA_STATE_NSTATES
+};
+
+struct sfc_vdpa_ops_data {
+       void                            *dev_handle;
+       struct rte_vdpa_device          *vdpa_dev;
+       enum sfc_vdpa_context           vdpa_context;
+       enum sfc_vdpa_state             state;
+};
+
+struct sfc_vdpa_ops_data *
+sfc_vdpa_device_init(void *adapter, enum sfc_vdpa_context context);
+void
+sfc_vdpa_device_fini(struct sfc_vdpa_ops_data *ops_data);
+
+#endif /* _SFC_VDPA_OPS_H */
-- 
1.8.3.1
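
A minimal standalone sketch (not part of the patch) of the IOVA fallback
implemented in sfc_vdpa_dma_alloc() above: the candidate address starts at
SFC_VDPA_DEFAULT_MCDI_IOVA and is halved until the mapping succeeds or the
candidate can no longer hold the buffer. The try_map() stub and the 64 KiB
buffer size are illustrative assumptions only.

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <inttypes.h>

    /* Stand-in for rte_vfio_container_dma_map(); always fails here so
     * that every candidate in the sequence is printed.
     */
    static bool try_map(uint64_t iova)
    {
            printf("trying IOVA %#" PRIx64 "\n", iova);
            return false;
    }

    int main(void)
    {
            /* SFC_VDPA_DEFAULT_MCDI_IOVA: 0x200000000000 = 32 TiB */
            uint64_t iova = 0x200000000000ULL;
            const uint64_t buff_size = 0x10000; /* assumed 64 KiB buffer */

            do {
                    if (try_map(iova))
                            return 0;       /* mapped successfully */
                    iova >>= 1;             /* try the next lower address */
            } while (iova >= buff_size);

            fprintf(stderr, "no usable IOVA found\n");
            return 1;
    }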