From: Serhii Iliushyk
To: dev@dpdk.org
Cc: mko-plv@napatech.com, sil-plv@napatech.com, ckm@napatech.com,
	andrew.rybchenko@oktetlabs.ru, ferruh.yigit@amd.com,
	Danylo Vodopianov
Subject: [PATCH v1 04/14] net/ntnic: add queue setup operations
Date: Fri, 4 Oct 2024 17:07:29 +0200
Message-ID: <20241004150749.261020-43-sil-plv@napatech.com>
X-Mailer: git-send-email 2.45.0
In-Reply-To: <20241004150749.261020-1-sil-plv@napatech.com>
References: <20241004150749.261020-1-sil-plv@napatech.com>
MIME-Version: 1.0
Content-Type: text/plain
Content-Transfer-Encoding: 8bit
From: Danylo Vodopianov

Add Tx and Rx queue setup operations. Allocate and configure the memory
for the hardware virtio queues, including the IOMMU and VFIO mappings.

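For context, a minimal sketch (not part of this patch) of how an application
reaches the new callbacks through the generic ethdev API: once the PMD
registers .rx_queue_setup/.tx_queue_setup, the standard
rte_eth_rx_queue_setup()/rte_eth_tx_queue_setup() calls dispatch into them.
Port id, descriptor counts and mbuf pool sizing below are illustrative only.

#include <rte_ethdev.h>
#include <rte_lcore.h>
#include <rte_mbuf.h>

/* Illustrative values - not taken from this patch. */
#define PORT_ID  0
#define NB_DESC  1024
#define NB_MBUFS 8192

static int setup_one_rx_one_tx(void)
{
	struct rte_eth_conf conf = { 0 };
	struct rte_mempool *pool;

	/* 1 Rx + 1 Tx queue; the two *_queue_setup() calls below end up in
	 * the PMD's eth_rx_scg_queue_setup()/eth_tx_scg_queue_setup().
	 */
	if (rte_eth_dev_configure(PORT_ID, 1, 1, &conf) < 0)
		return -1;

	pool = rte_pktmbuf_pool_create("rx_pool", NB_MBUFS, 256, 0,
			RTE_MBUF_DEFAULT_BUF_SIZE, rte_socket_id());
	if (pool == NULL)
		return -1;

	if (rte_eth_rx_queue_setup(PORT_ID, 0, NB_DESC, rte_socket_id(), NULL, pool) < 0)
		return -1;

	if (rte_eth_tx_queue_setup(PORT_ID, 0, NB_DESC, rte_socket_id(), NULL) < 0)
		return -1;

	return rte_eth_dev_start(PORT_ID);
}
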
Signed-off-by: Danylo Vodopianov
---
 drivers/net/ntnic/include/ntnic_virt_queue.h |   3 +-
 drivers/net/ntnic/include/ntos_drv.h         |   6 +
 drivers/net/ntnic/nthw/nthw_drv.h            |   2 +
 drivers/net/ntnic/ntnic_ethdev.c             | 323 +++++++++++++++++++
 4 files changed, 333 insertions(+), 1 deletion(-)

diff --git a/drivers/net/ntnic/include/ntnic_virt_queue.h b/drivers/net/ntnic/include/ntnic_virt_queue.h
index 422ac3b950..821b23af6c 100644
--- a/drivers/net/ntnic/include/ntnic_virt_queue.h
+++ b/drivers/net/ntnic/include/ntnic_virt_queue.h
@@ -13,7 +13,8 @@
 
 struct nthw_virt_queue;
 
-struct nthw_virtq_desc_buf;
+#define SPLIT_RING 0
+#define IN_ORDER 1
 
 struct nthw_cvirtq_desc;
 
diff --git a/drivers/net/ntnic/include/ntos_drv.h b/drivers/net/ntnic/include/ntos_drv.h
index 233d585303..933b012e07 100644
--- a/drivers/net/ntnic/include/ntos_drv.h
+++ b/drivers/net/ntnic/include/ntos_drv.h
@@ -47,6 +47,7 @@ struct __rte_cache_aligned ntnic_rx_queue {
 	struct hwq_s hwq;
 	struct nthw_virt_queue *vq;
+	int nb_hw_rx_descr;
 	nt_meta_port_type_t type;
 
 	uint32_t port;	/* Rx port for this queue */
 	enum fpga_info_profile profile;	/* Inline / Capture */
@@ -57,7 +58,12 @@ struct __rte_cache_aligned ntnic_tx_queue {
 	struct flow_queue_id_s queue;	/* queue info - user id and hw queue index */
 	struct hwq_s hwq;
 	struct nthw_virt_queue *vq;
+	int nb_hw_tx_descr;
+	/* Used for bypass in NTDVIO0 header on Tx - pre calculated */
+	int target_id;
 	nt_meta_port_type_t type;
+	/* only used for exception tx queue from OVS SW switching */
+	int rss_target_id;
 
 	uint32_t port;	/* Tx port for this queue */
 	int enabled;	/* Enabling/disabling of this queue */
diff --git a/drivers/net/ntnic/nthw/nthw_drv.h b/drivers/net/ntnic/nthw/nthw_drv.h
index eaa2b19015..69e0360f5f 100644
--- a/drivers/net/ntnic/nthw/nthw_drv.h
+++ b/drivers/net/ntnic/nthw/nthw_drv.h
@@ -71,6 +71,8 @@ typedef struct fpga_info_s {
 	struct nthw_pcie3 *mp_nthw_pcie3;
 	struct nthw_tsm *mp_nthw_tsm;
 
+	nthw_dbs_t *mp_nthw_dbs;
+
 	uint8_t *bar0_addr;	/* Needed for register read/write */
 	size_t bar0_size;
 
diff --git a/drivers/net/ntnic/ntnic_ethdev.c b/drivers/net/ntnic/ntnic_ethdev.c
index 78a689d444..57827d73d5 100644
--- a/drivers/net/ntnic/ntnic_ethdev.c
+++ b/drivers/net/ntnic/ntnic_ethdev.c
@@ -31,10 +31,16 @@
 
 #define MAX_TOTAL_QUEUES 128
 
+#define SG_NB_HW_RX_DESCRIPTORS 1024
+#define SG_NB_HW_TX_DESCRIPTORS 1024
+#define SG_HW_RX_PKT_BUFFER_SIZE (1024 << 1)
+#define SG_HW_TX_PKT_BUFFER_SIZE (1024 << 1)
+
 /* Max RSS queues */
 #define MAX_QUEUES 125
 
 #define ONE_G_SIZE 0x40000000
+#define ONE_G_MASK (ONE_G_SIZE - 1)
 
 #define ETH_DEV_NTNIC_HELP_ARG "help"
 #define ETH_DEV_NTHW_RXQUEUES_ARG "rxqs"
@@ -187,6 +193,157 @@ eth_dev_infos_get(struct rte_eth_dev *eth_dev, struct rte_eth_dev_info *dev_info
 	return 0;
 }
 
+static int allocate_hw_virtio_queues(struct rte_eth_dev *eth_dev, int vf_num, struct hwq_s *hwq,
+	int num_descr, int buf_size)
+{
+	int i, res;
+	uint32_t size;
+	uint64_t iova_addr;
+
+	NT_LOG(DBG, NTNIC, "***** Configure IOMMU for HW queues on VF %i *****\n", vf_num);
+
+	/* Just allocate 1MB to hold all combined descr rings */
+	uint64_t tot_alloc_size = 0x100000 + buf_size * num_descr;
+
+	void *virt =
+		rte_malloc_socket("VirtQDescr", tot_alloc_size, nt_util_align_size(tot_alloc_size),
+			eth_dev->data->numa_node);
+
+	if (!virt)
+		return -1;
+
+	uint64_t gp_offset = (uint64_t)virt & ONE_G_MASK;
+	rte_iova_t hpa = rte_malloc_virt2iova(virt);
+
+	NT_LOG(DBG, NTNIC, "Allocated virtio descr rings : virt "
+		"%p [0x%" PRIX64 "],hpa %" PRIX64 " [0x%" PRIX64 "]\n",
+		virt, gp_offset, hpa, hpa & ONE_G_MASK);
+
+	/*
+	 * Same offset on both HPA and IOVA
+	 * Make sure 1G boundary is never crossed
+	 */
+	if (((hpa & ONE_G_MASK) != gp_offset) ||
+		(((uint64_t)virt + tot_alloc_size) & ~ONE_G_MASK) !=
+		((uint64_t)virt & ~ONE_G_MASK)) {
+		NT_LOG(ERR, NTNIC, "*********************************************************\n");
+		NT_LOG(ERR, NTNIC, "ERROR, no optimal IOMMU mapping available hpa: %016" PRIX64
+			"(%016" PRIX64 "), gp_offset: %016" PRIX64 " size: %" PRIu64 "\n",
+			hpa, hpa & ONE_G_MASK, gp_offset, tot_alloc_size);
+		NT_LOG(ERR, NTNIC, "*********************************************************\n");
+
+		rte_free(virt);
+
+		/* Just allocate 1MB to hold all combined descr rings */
+		size = 0x100000;
+		void *virt = rte_malloc_socket("VirtQDescr", size, 4096, eth_dev->data->numa_node);
+
+		if (!virt)
+			return -1;
+
+		res = nt_vfio_dma_map(vf_num, virt, &iova_addr, size);
+
+		NT_LOG(DBG, NTNIC, "VFIO MMAP res %i, vf_num %i\n", res, vf_num);
+
+		if (res != 0)
+			return -1;
+
+		hwq->vf_num = vf_num;
+		hwq->virt_queues_ctrl.virt_addr = virt;
+		hwq->virt_queues_ctrl.phys_addr = (void *)iova_addr;
+		hwq->virt_queues_ctrl.len = size;
+
+		NT_LOG(DBG, NTNIC,
+			"Allocated for virtio descr rings combined 1MB : %p, IOVA %016" PRIX64 "\n",
+			virt, iova_addr);
+
+		size = num_descr * sizeof(struct nthw_memory_descriptor);
+		hwq->pkt_buffers =
+			rte_zmalloc_socket("rx_pkt_buffers", size, 64, eth_dev->data->numa_node);
+
+		if (!hwq->pkt_buffers) {
+			NT_LOG(ERR, NTNIC,
+				"Failed to allocated buffer array for hw-queue %p, total size %i, elements %i\n",
+				hwq->pkt_buffers, size, num_descr);
+			rte_free(virt);
+			return -1;
+		}
+
+		size = buf_size * num_descr;
+		void *virt_addr =
+			rte_malloc_socket("pkt_buffer_pkts", size, 4096, eth_dev->data->numa_node);
+
+		if (!virt_addr) {
+			NT_LOG(ERR, NTNIC,
+				"Failed allocate packet buffers for hw-queue %p, buf size %i, elements %i\n",
+				hwq->pkt_buffers, buf_size, num_descr);
+			rte_free(hwq->pkt_buffers);
+			rte_free(virt);
+			return -1;
+		}
+
+		res = nt_vfio_dma_map(vf_num, virt_addr, &iova_addr, size);
+
+		NT_LOG(DBG, NTNIC,
+			"VFIO MMAP res %i, virt %p, iova %016" PRIX64 ", vf_num %i, num pkt bufs %i, tot size %i\n",
+			res, virt_addr, iova_addr, vf_num, num_descr, size);
+
+		if (res != 0)
+			return -1;
+
+		for (i = 0; i < num_descr; i++) {
+			hwq->pkt_buffers[i].virt_addr =
+				(void *)((char *)virt_addr + ((uint64_t)(i) * buf_size));
+			hwq->pkt_buffers[i].phys_addr =
+				(void *)(iova_addr + ((uint64_t)(i) * buf_size));
+			hwq->pkt_buffers[i].len = buf_size;
+		}
+
+		return 0;
+	}	/* End of: no optimal IOMMU mapping available */
+
+	res = nt_vfio_dma_map(vf_num, virt, &iova_addr, ONE_G_SIZE);
+
+	if (res != 0) {
+		NT_LOG(ERR, NTNIC, "VFIO MMAP FAILED! res %i, vf_num %i\n", res, vf_num);
+		return -1;
+	}
+
+	hwq->vf_num = vf_num;
+	hwq->virt_queues_ctrl.virt_addr = virt;
+	hwq->virt_queues_ctrl.phys_addr = (void *)(iova_addr);
+	hwq->virt_queues_ctrl.len = 0x100000;
+	iova_addr += 0x100000;
+
+	NT_LOG(DBG, NTNIC,
+		"VFIO MMAP: virt_addr=%p phys_addr=%p size=%" PRIX32 " hpa=%" PRIX64 "\n",
+		hwq->virt_queues_ctrl.virt_addr, hwq->virt_queues_ctrl.phys_addr,
+		hwq->virt_queues_ctrl.len, rte_malloc_virt2iova(hwq->virt_queues_ctrl.virt_addr));
+
+	size = num_descr * sizeof(struct nthw_memory_descriptor);
+	hwq->pkt_buffers =
+		rte_zmalloc_socket("rx_pkt_buffers", size, 64, eth_dev->data->numa_node);
+
+	if (!hwq->pkt_buffers) {
+		NT_LOG(ERR, NTNIC,
+			"Failed to allocated buffer array for hw-queue %p, total size %i, elements %i\n",
+			hwq->pkt_buffers, size, num_descr);
+		rte_free(virt);
+		return -1;
+	}
+
+	void *virt_addr = (void *)((uint64_t)virt + 0x100000);
+
+	for (i = 0; i < num_descr; i++) {
+		hwq->pkt_buffers[i].virt_addr =
+			(void *)((char *)virt_addr + ((uint64_t)(i) * buf_size));
+		hwq->pkt_buffers[i].phys_addr = (void *)(iova_addr + ((uint64_t)(i) * buf_size));
+		hwq->pkt_buffers[i].len = buf_size;
+	}
+
+	return 0;
+}
+
 static void release_hw_virtio_queues(struct hwq_s *hwq)
 {
 	if (!hwq || hwq->vf_num == 0)
@@ -245,6 +402,170 @@ static int allocate_queue(int num)
 	return next_free;
 }
 
+static int eth_rx_scg_queue_setup(struct rte_eth_dev *eth_dev,
+	uint16_t rx_queue_id,
+	uint16_t nb_rx_desc __rte_unused,
+	unsigned int socket_id __rte_unused,
+	const struct rte_eth_rxconf *rx_conf __rte_unused,
+	struct rte_mempool *mb_pool)
+{
+	NT_LOG_DBGX(DBG, NTNIC, "\n");
+	struct rte_pktmbuf_pool_private *mbp_priv;
+	struct pmd_internals *internals = (struct pmd_internals *)eth_dev->data->dev_private;
+	struct ntnic_rx_queue *rx_q = &internals->rxq_scg[rx_queue_id];
+	struct drv_s *p_drv = internals->p_drv;
+	struct ntdrv_4ga_s *p_nt_drv = &p_drv->ntdrv;
+
+	if (sg_ops == NULL) {
+		NT_LOG_DBGX(DBG, NTNIC, "SG module is not initialized\n");
+		return 0;
+	}
+
+	if (internals->type == PORT_TYPE_OVERRIDE) {
+		rx_q->mb_pool = mb_pool;
+		eth_dev->data->rx_queues[rx_queue_id] = rx_q;
+		mbp_priv = rte_mempool_get_priv(rx_q->mb_pool);
+		rx_q->buf_size = (uint16_t)(mbp_priv->mbuf_data_room_size - RTE_PKTMBUF_HEADROOM);
+		rx_q->enabled = 1;
+		return 0;
+	}
+
+	NT_LOG(DBG, NTNIC, "(%i) NTNIC RX OVS-SW queue setup: queue id %i, hw queue index %i\n",
+		internals->port, rx_queue_id, rx_q->queue.hw_id);
+
+	rx_q->mb_pool = mb_pool;
+
+	eth_dev->data->rx_queues[rx_queue_id] = rx_q;
+
+	mbp_priv = rte_mempool_get_priv(rx_q->mb_pool);
+	rx_q->buf_size = (uint16_t)(mbp_priv->mbuf_data_room_size - RTE_PKTMBUF_HEADROOM);
+	rx_q->enabled = 1;
+
+	if (allocate_hw_virtio_queues(eth_dev, EXCEPTION_PATH_HID, &rx_q->hwq,
+			SG_NB_HW_RX_DESCRIPTORS, SG_HW_RX_PKT_BUFFER_SIZE) < 0)
+		return -1;
+
+	rx_q->nb_hw_rx_descr = SG_NB_HW_RX_DESCRIPTORS;
+
+	rx_q->profile = p_drv->ntdrv.adapter_info.fpga_info.profile;
+
+	rx_q->vq =
+		sg_ops->nthw_setup_mngd_rx_virt_queue(p_nt_drv->adapter_info.fpga_info.mp_nthw_dbs,
+			rx_q->queue.hw_id,	/* index */
+			rx_q->nb_hw_rx_descr,
+			EXCEPTION_PATH_HID,	/* host_id */
+			1,	/* header NT DVIO header for exception path */
+			&rx_q->hwq.virt_queues_ctrl,
+			rx_q->hwq.pkt_buffers,
+			SPLIT_RING,
+			-1);
+
+	NT_LOG(DBG, NTNIC, "(%i) NTNIC RX OVS-SW queues successfully setup\n", internals->port);
+
+	return 0;
+}
+
+static int eth_tx_scg_queue_setup(struct rte_eth_dev *eth_dev,
+	uint16_t tx_queue_id,
+	uint16_t nb_tx_desc __rte_unused,
+	unsigned int socket_id __rte_unused,
+	const struct rte_eth_txconf *tx_conf __rte_unused)
+{
+	const struct port_ops *port_ops = get_port_ops();
+
+	if (port_ops == NULL) {
+		NT_LOG_DBGX(ERR, NTNIC, "Link management module uninitialized\n");
+		return -1;
+	}
+
+	NT_LOG_DBGX(DBG, NTNIC, "\n");
+	struct pmd_internals *internals = (struct pmd_internals *)eth_dev->data->dev_private;
+	struct drv_s *p_drv = internals->p_drv;
+	struct ntdrv_4ga_s *p_nt_drv = &p_drv->ntdrv;
+	struct ntnic_tx_queue *tx_q = &internals->txq_scg[tx_queue_id];
+
+	if (internals->type == PORT_TYPE_OVERRIDE) {
+		eth_dev->data->tx_queues[tx_queue_id] = tx_q;
+		return 0;
+	}
+
+	if (sg_ops == NULL) {
+		NT_LOG_DBGX(DBG, NTNIC, "SG module is not initialized\n");
+		return 0;
+	}
+
+	NT_LOG(DBG, NTNIC, "(%i) NTNIC TX OVS-SW queue setup: queue id %i, hw queue index %i\n",
+		tx_q->port, tx_queue_id, tx_q->queue.hw_id);
+
+	if (tx_queue_id > internals->nb_tx_queues) {
+		NT_LOG(ERR, NTNIC, "Error invalid tx queue id\n");
+		return -1;
+	}
+
+	eth_dev->data->tx_queues[tx_queue_id] = tx_q;
+
+	/* Calculate target ID for HW - to be used in NTDVIO0 header bypass_port */
+	if (tx_q->rss_target_id >= 0) {
+		/* bypass to a multiqueue port - qsl-hsh index */
+		tx_q->target_id = tx_q->rss_target_id + 0x90;
+
+	} else if (internals->vpq[tx_queue_id].hw_id > -1) {
+		/* virtual port - queue index */
+		tx_q->target_id = internals->vpq[tx_queue_id].hw_id;
+
+	} else {
+		/* Phy port - phy port identifier */
+		/* output/bypass to MAC */
+		tx_q->target_id = (int)(tx_q->port + 0x80);
+	}
+
+	if (allocate_hw_virtio_queues(eth_dev, EXCEPTION_PATH_HID, &tx_q->hwq,
+			SG_NB_HW_TX_DESCRIPTORS, SG_HW_TX_PKT_BUFFER_SIZE) < 0) {
+		return -1;
+	}
+
+	tx_q->nb_hw_tx_descr = SG_NB_HW_TX_DESCRIPTORS;
+
+	tx_q->profile = p_drv->ntdrv.adapter_info.fpga_info.profile;
+
+	uint32_t port, header;
+	port = tx_q->port;	/* transmit port */
+	header = 0;	/* header type VirtIO-Net */
+
+	tx_q->vq =
+		sg_ops->nthw_setup_mngd_tx_virt_queue(p_nt_drv->adapter_info.fpga_info.mp_nthw_dbs,
+			tx_q->queue.hw_id,	/* index */
+			tx_q->nb_hw_tx_descr,	/* queue size */
+			EXCEPTION_PATH_HID,	/* host_id always VF4 */
+			port,
+			/*
+			 * in_port - in vswitch mode has
			 * to move tx port from OVS excep.
+			 * away from VM tx port,
+			 * because of QoS is matched by port id!
+			 */
+			tx_q->port + 128,
+			header,
+			&tx_q->hwq.virt_queues_ctrl,
+			tx_q->hwq.pkt_buffers,
+			SPLIT_RING,
+			-1,
+			IN_ORDER);
+
+	tx_q->enabled = 1;
+
+	NT_LOG(DBG, NTNIC, "(%i) NTNIC TX OVS-SW queues successfully setup\n", internals->port);
+
+	if (internals->type == PORT_TYPE_PHYSICAL) {
+		struct adapter_info_s *p_adapter_info = &internals->p_drv->ntdrv.adapter_info;
+		NT_LOG(DBG, NTNIC, "Port %i is ready for data. Enable port\n",
+			internals->n_intf_no);
+		port_ops->set_adm_state(p_adapter_info, internals->n_intf_no, true);
+	}
+
+	return 0;
+}
+
 static int eth_rx_queue_start(struct rte_eth_dev *eth_dev, uint16_t rx_queue_id)
 {
 	eth_dev->data->rx_queue_state[rx_queue_id] = RTE_ETH_QUEUE_STATE_STARTED;
@@ -580,9 +901,11 @@ static const struct eth_dev_ops nthw_eth_dev_ops = {
 	.link_update = eth_link_update,
 	.dev_infos_get = eth_dev_infos_get,
 	.fw_version_get = eth_fw_version_get,
+	.rx_queue_setup = eth_rx_scg_queue_setup,
 	.rx_queue_start = eth_rx_queue_start,
 	.rx_queue_stop = eth_rx_queue_stop,
 	.rx_queue_release = eth_rx_queue_release,
+	.tx_queue_setup = eth_tx_scg_queue_setup,
 	.tx_queue_start = eth_tx_queue_start,
 	.tx_queue_stop = eth_tx_queue_stop,
 	.tx_queue_release = eth_tx_queue_release,
-- 
2.45.0
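
A note on the IOMMU path in allocate_hw_virtio_queues() above: the single
large mapping is only used when the allocation keeps the same offset within
its 1 GiB region on the virtual and physical side and never crosses a 1 GiB
boundary; otherwise the code falls back to separate VFIO mappings. A small
standalone sketch of that invariant (the helper name is illustrative and not
part of the patch):

#include <stdbool.h>
#include <stdint.h>

#define ONE_G_SIZE 0x40000000ULL
#define ONE_G_MASK (ONE_G_SIZE - 1)

/* True when virt/hpa share the same offset inside a 1 GiB region and the
 * range [virt, virt + len) stays inside that region - the condition the
 * driver checks before mapping a full 1 GiB window.
 */
static bool fits_one_1g_window(uint64_t virt, uint64_t hpa, uint64_t len)
{
	bool same_offset = (hpa & ONE_G_MASK) == (virt & ONE_G_MASK);
	bool same_window = ((virt + len) & ~ONE_G_MASK) == (virt & ~ONE_G_MASK);

	return same_offset && same_window;
}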