From mboxrd@z Thu Jan 1 00:00:00 1970
From: Serhii Iliushyk
To: dev@dpdk.org
Cc: mko-plv@napatech.com, sil-plv@napatech.com, ckm@napatech.com,
 andrew.rybchenko@oktetlabs.ru, ferruh.yigit@amd.com, Danylo Vodopianov
Subject: [PATCH v1 40/50] net/ntnic: add queue setup operations
Date: Sun, 6 Oct 2024 22:37:07 +0200
Message-ID: <20241006203728.330792-41-sil-plv@napatech.com>
X-Mailer: git-send-email 2.45.0
In-Reply-To: <20241006203728.330792-1-sil-plv@napatech.com>
References: <20241006203728.330792-1-sil-plv@napatech.com>
MIME-Version: 1.0
Content-Type: text/plain
Content-Transfer-Encoding: 8bit

From: Danylo Vodopianov

Add TX and RX queue setup operations. They allocate and configure the
memory used by the hardware Virtio queues, including the IOMMU and VFIO
mappings.

Signed-off-by: Danylo Vodopianov
---
 drivers/net/ntnic/include/ntnic_virt_queue.h |   3 +-
 drivers/net/ntnic/include/ntos_drv.h         |   6 +
 drivers/net/ntnic/nthw/nthw_drv.h            |   2 +
 drivers/net/ntnic/ntnic_ethdev.c             | 323 +++++++++++++++++++
 4 files changed, 333 insertions(+), 1 deletion(-)

diff --git a/drivers/net/ntnic/include/ntnic_virt_queue.h b/drivers/net/ntnic/include/ntnic_virt_queue.h
index 422ac3b950..821b23af6c 100644
--- a/drivers/net/ntnic/include/ntnic_virt_queue.h
+++ b/drivers/net/ntnic/include/ntnic_virt_queue.h
@@ -13,7 +13,8 @@
 
 struct nthw_virt_queue;
 
-struct nthw_virtq_desc_buf;
+#define SPLIT_RING 0
+#define IN_ORDER 1
 
 struct nthw_cvirtq_desc;
 
diff --git a/drivers/net/ntnic/include/ntos_drv.h b/drivers/net/ntnic/include/ntos_drv.h
index 233d585303..933b012e07 100644
--- a/drivers/net/ntnic/include/ntos_drv.h
+++ b/drivers/net/ntnic/include/ntos_drv.h
@@ -47,6 +47,7 @@ struct __rte_cache_aligned ntnic_rx_queue {
 
 	struct hwq_s hwq;
 	struct nthw_virt_queue *vq;
+	int nb_hw_rx_descr;
 	nt_meta_port_type_t type;
 	uint32_t port;	/* Rx port for this queue */
 	enum fpga_info_profile profile;	/* Inline / Capture */
@@ -57,7 +58,12 @@ struct __rte_cache_aligned ntnic_tx_queue {
 	struct flow_queue_id_s queue;	/* queue info - user id and hw queue index */
 	struct hwq_s hwq;
 	struct nthw_virt_queue *vq;
+	int nb_hw_tx_descr;
+	/* Used for bypass in NTDVIO0 header on Tx - pre calculated */
+	int target_id;
 	nt_meta_port_type_t type;
+	/* only used for exception tx queue from OVS SW switching */
+	int rss_target_id;
 	uint32_t port;	/* Tx port for this queue */
 	int enabled;	/* Enabling/disabling of this queue */
 
diff --git a/drivers/net/ntnic/nthw/nthw_drv.h b/drivers/net/ntnic/nthw/nthw_drv.h
index eaa2b19015..69e0360f5f 100644
--- a/drivers/net/ntnic/nthw/nthw_drv.h
+++ b/drivers/net/ntnic/nthw/nthw_drv.h
@@ -71,6 +71,8 @@ typedef struct fpga_info_s {
 	struct nthw_pcie3 *mp_nthw_pcie3;
 	struct nthw_tsm *mp_nthw_tsm;
 
+	nthw_dbs_t *mp_nthw_dbs;
+
 	uint8_t *bar0_addr;	/* Needed for register read/write */
 	size_t bar0_size;
 
diff --git a/drivers/net/ntnic/ntnic_ethdev.c b/drivers/net/ntnic/ntnic_ethdev.c
index 78a689d444..57827d73d5 100644
--- a/drivers/net/ntnic/ntnic_ethdev.c
+++ b/drivers/net/ntnic/ntnic_ethdev.c
@@ -31,10 +31,16 @@
 
 #define MAX_TOTAL_QUEUES 128
 
+#define SG_NB_HW_RX_DESCRIPTORS 1024
+#define SG_NB_HW_TX_DESCRIPTORS 1024
+#define SG_HW_RX_PKT_BUFFER_SIZE (1024 << 1)
+#define SG_HW_TX_PKT_BUFFER_SIZE (1024 << 1)
+
 /* Max RSS queues */
 #define MAX_QUEUES 125
 
 #define ONE_G_SIZE 0x40000000
+#define ONE_G_MASK (ONE_G_SIZE - 1)
 
 #define ETH_DEV_NTNIC_HELP_ARG "help"
 #define ETH_DEV_NTHW_RXQUEUES_ARG "rxqs"
@@ -187,6 +193,157 @@ eth_dev_infos_get(struct rte_eth_dev *eth_dev, struct rte_eth_dev_info *dev_info
 	return 0;
 }
 
+static int allocate_hw_virtio_queues(struct rte_eth_dev *eth_dev, int vf_num, struct hwq_s *hwq,
+	int num_descr, int buf_size)
+{
+	int i, res;
+	uint32_t size;
+	uint64_t iova_addr;
+
+	NT_LOG(DBG, NTNIC, "***** Configure IOMMU for HW queues on VF %i *****\n", vf_num);
+
+	/* Just allocate 1MB to hold all combined descr rings */
+	uint64_t tot_alloc_size = 0x100000 + buf_size * num_descr;
+
+	void *virt =
+		rte_malloc_socket("VirtQDescr", tot_alloc_size, nt_util_align_size(tot_alloc_size),
+			eth_dev->data->numa_node);
+
+	if (!virt)
+		return -1;
+
+	uint64_t gp_offset = (uint64_t)virt & ONE_G_MASK;
+	rte_iova_t hpa = rte_malloc_virt2iova(virt);
+
+	NT_LOG(DBG, NTNIC, "Allocated virtio descr rings : virt "
+		"%p [0x%" PRIX64 "],hpa %" PRIX64 " [0x%" PRIX64 "]\n",
+		virt, gp_offset, hpa, hpa & ONE_G_MASK);
+
+	/*
+	 * Same offset on both HPA and IOVA
+	 * Make sure 1G boundary is never crossed
+	 */
+	if (((hpa & ONE_G_MASK) != gp_offset) ||
+		(((uint64_t)virt + tot_alloc_size) & ~ONE_G_MASK) !=
+		((uint64_t)virt & ~ONE_G_MASK)) {
+		NT_LOG(ERR, NTNIC, "*********************************************************\n");
+		NT_LOG(ERR, NTNIC, "ERROR, no optimal IOMMU mapping available hpa: %016" PRIX64
+			"(%016" PRIX64 "), gp_offset: %016" PRIX64 " size: %" PRIu64 "\n",
+			hpa, hpa & ONE_G_MASK, gp_offset, tot_alloc_size);
+		NT_LOG(ERR, NTNIC, "*********************************************************\n");
+
+		rte_free(virt);
+
+		/* Just allocate 1MB to hold all combined descr rings */
+		size = 0x100000;
+		void *virt = rte_malloc_socket("VirtQDescr", size, 4096, eth_dev->data->numa_node);
+
+		if (!virt)
+			return -1;
+
+		res = nt_vfio_dma_map(vf_num, virt, &iova_addr, size);
+
+		NT_LOG(DBG, NTNIC, "VFIO MMAP res %i, vf_num %i\n", res, vf_num);
+
+		if (res != 0)
+			return -1;
+
+		hwq->vf_num = vf_num;
+		hwq->virt_queues_ctrl.virt_addr = virt;
+		hwq->virt_queues_ctrl.phys_addr = (void *)iova_addr;
+		hwq->virt_queues_ctrl.len = size;
+
+		NT_LOG(DBG, NTNIC,
+			"Allocated for virtio descr rings combined 1MB : %p, IOVA %016" PRIX64 "\n",
+			virt, iova_addr);
+
+		size = num_descr * sizeof(struct nthw_memory_descriptor);
+		hwq->pkt_buffers =
+			rte_zmalloc_socket("rx_pkt_buffers", size, 64, eth_dev->data->numa_node);
+
+		if (!hwq->pkt_buffers) {
+			NT_LOG(ERR, NTNIC,
+				"Failed to allocated buffer array for hw-queue %p, total size %i, elements %i\n",
+				hwq->pkt_buffers, size, num_descr);
+			rte_free(virt);
+			return -1;
+		}
+
+		size = buf_size * num_descr;
+		void *virt_addr =
+			rte_malloc_socket("pkt_buffer_pkts", size, 4096, eth_dev->data->numa_node);
+
+		if (!virt_addr) {
+			NT_LOG(ERR, NTNIC,
+				"Failed allocate packet buffers for hw-queue %p, buf size %i, elements %i\n",
+				hwq->pkt_buffers, buf_size, num_descr);
+			rte_free(hwq->pkt_buffers);
+			rte_free(virt);
+			return -1;
+		}
+
+		res = nt_vfio_dma_map(vf_num, virt_addr, &iova_addr, size);
+
+		NT_LOG(DBG, NTNIC,
+			"VFIO MMAP res %i, virt %p, iova %016" PRIX64 ", vf_num %i, num pkt bufs %i, tot size %i\n",
+			res, virt_addr, iova_addr, vf_num, num_descr, size);
+
+		if (res != 0)
+			return -1;
+
+		for (i = 0; i < num_descr; i++) {
+			hwq->pkt_buffers[i].virt_addr =
+				(void *)((char *)virt_addr + ((uint64_t)(i) * buf_size));
+			hwq->pkt_buffers[i].phys_addr =
+				(void *)(iova_addr + ((uint64_t)(i) * buf_size));
+			hwq->pkt_buffers[i].len = buf_size;
+		}
+
+		return 0;
+	}	/* End of: no optimal IOMMU mapping available */
+
+	res = nt_vfio_dma_map(vf_num, virt, &iova_addr, ONE_G_SIZE);
+
+	if (res != 0) {
+		NT_LOG(ERR, NTNIC, "VFIO MMAP FAILED! res %i, vf_num %i\n", res, vf_num);
+		return -1;
+	}
+
+	hwq->vf_num = vf_num;
+	hwq->virt_queues_ctrl.virt_addr = virt;
+	hwq->virt_queues_ctrl.phys_addr = (void *)(iova_addr);
+	hwq->virt_queues_ctrl.len = 0x100000;
+	iova_addr += 0x100000;
+
+	NT_LOG(DBG, NTNIC,
+		"VFIO MMAP: virt_addr=%p phys_addr=%p size=%" PRIX32 " hpa=%" PRIX64 "\n",
+		hwq->virt_queues_ctrl.virt_addr, hwq->virt_queues_ctrl.phys_addr,
+		hwq->virt_queues_ctrl.len, rte_malloc_virt2iova(hwq->virt_queues_ctrl.virt_addr));
+
+	size = num_descr * sizeof(struct nthw_memory_descriptor);
+	hwq->pkt_buffers =
+		rte_zmalloc_socket("rx_pkt_buffers", size, 64, eth_dev->data->numa_node);
+
+	if (!hwq->pkt_buffers) {
+		NT_LOG(ERR, NTNIC,
+			"Failed to allocated buffer array for hw-queue %p, total size %i, elements %i\n",
+			hwq->pkt_buffers, size, num_descr);
+		rte_free(virt);
+		return -1;
+	}
+
+	void *virt_addr = (void *)((uint64_t)virt + 0x100000);
+
+	for (i = 0; i < num_descr; i++) {
+		hwq->pkt_buffers[i].virt_addr =
+			(void *)((char *)virt_addr + ((uint64_t)(i) * buf_size));
+		hwq->pkt_buffers[i].phys_addr = (void *)(iova_addr + ((uint64_t)(i) * buf_size));
+		hwq->pkt_buffers[i].len = buf_size;
+	}
+
+	return 0;
+}
+
 static void release_hw_virtio_queues(struct hwq_s *hwq)
 {
 	if (!hwq || hwq->vf_num == 0)
@@ -245,6 +402,170 @@ static int allocate_queue(int num)
 	return next_free;
 }
 
+static int eth_rx_scg_queue_setup(struct rte_eth_dev *eth_dev,
+	uint16_t rx_queue_id,
+	uint16_t nb_rx_desc __rte_unused,
+	unsigned int socket_id __rte_unused,
+	const struct rte_eth_rxconf *rx_conf __rte_unused,
+	struct rte_mempool *mb_pool)
+{
+	NT_LOG_DBGX(DBG, NTNIC, "\n");
+	struct rte_pktmbuf_pool_private *mbp_priv;
+	struct pmd_internals *internals = (struct pmd_internals *)eth_dev->data->dev_private;
+	struct ntnic_rx_queue *rx_q = &internals->rxq_scg[rx_queue_id];
+	struct drv_s *p_drv = internals->p_drv;
+	struct ntdrv_4ga_s *p_nt_drv = &p_drv->ntdrv;
+
+	if (sg_ops == NULL) {
+		NT_LOG_DBGX(DBG, NTNIC, "SG module is not initialized\n");
+		return 0;
+	}
+
+	if (internals->type == PORT_TYPE_OVERRIDE) {
+		rx_q->mb_pool = mb_pool;
+		eth_dev->data->rx_queues[rx_queue_id] = rx_q;
+		mbp_priv = rte_mempool_get_priv(rx_q->mb_pool);
+		rx_q->buf_size = (uint16_t)(mbp_priv->mbuf_data_room_size - RTE_PKTMBUF_HEADROOM);
+		rx_q->enabled = 1;
+		return 0;
+	}
+
+	NT_LOG(DBG, NTNIC, "(%i) NTNIC RX OVS-SW queue setup: queue id %i, hw queue index %i\n",
+		internals->port, rx_queue_id, rx_q->queue.hw_id);
+
+	rx_q->mb_pool = mb_pool;
+
+	eth_dev->data->rx_queues[rx_queue_id] = rx_q;
+
+	mbp_priv = rte_mempool_get_priv(rx_q->mb_pool);
+	rx_q->buf_size = (uint16_t)(mbp_priv->mbuf_data_room_size - RTE_PKTMBUF_HEADROOM);
+	rx_q->enabled = 1;
+
+	if (allocate_hw_virtio_queues(eth_dev, EXCEPTION_PATH_HID, &rx_q->hwq,
+			SG_NB_HW_RX_DESCRIPTORS, SG_HW_RX_PKT_BUFFER_SIZE) < 0)
+		return -1;
+
+	rx_q->nb_hw_rx_descr = SG_NB_HW_RX_DESCRIPTORS;
+
+	rx_q->profile = p_drv->ntdrv.adapter_info.fpga_info.profile;
+
+	rx_q->vq =
+		sg_ops->nthw_setup_mngd_rx_virt_queue(p_nt_drv->adapter_info.fpga_info.mp_nthw_dbs,
+			rx_q->queue.hw_id,	/* index */
+			rx_q->nb_hw_rx_descr,
+			EXCEPTION_PATH_HID,	/* host_id */
+			1,	/* header NT DVIO header for exception path */
+			&rx_q->hwq.virt_queues_ctrl,
+			rx_q->hwq.pkt_buffers,
+			SPLIT_RING,
+			-1);
+
+	NT_LOG(DBG, NTNIC, "(%i) NTNIC RX OVS-SW queues successfully setup\n", internals->port);
+
+	return 0;
+}
+
+static int eth_tx_scg_queue_setup(struct rte_eth_dev *eth_dev,
+	uint16_t tx_queue_id,
+	uint16_t nb_tx_desc __rte_unused,
+	unsigned int socket_id __rte_unused,
+	const struct rte_eth_txconf *tx_conf __rte_unused)
+{
+	const struct port_ops *port_ops = get_port_ops();
+
+	if (port_ops == NULL) {
+		NT_LOG_DBGX(ERR, NTNIC, "Link management module uninitialized\n");
+		return -1;
+	}
+
+	NT_LOG_DBGX(DBG, NTNIC, "\n");
+	struct pmd_internals *internals = (struct pmd_internals *)eth_dev->data->dev_private;
+	struct drv_s *p_drv = internals->p_drv;
+	struct ntdrv_4ga_s *p_nt_drv = &p_drv->ntdrv;
+	struct ntnic_tx_queue *tx_q = &internals->txq_scg[tx_queue_id];
+
+	if (internals->type == PORT_TYPE_OVERRIDE) {
+		eth_dev->data->tx_queues[tx_queue_id] = tx_q;
+		return 0;
+	}
+
+	if (sg_ops == NULL) {
+		NT_LOG_DBGX(DBG, NTNIC, "SG module is not initialized\n");
+		return 0;
+	}
+
+	NT_LOG(DBG, NTNIC, "(%i) NTNIC TX OVS-SW queue setup: queue id %i, hw queue index %i\n",
+		tx_q->port, tx_queue_id, tx_q->queue.hw_id);
+
+	if (tx_queue_id > internals->nb_tx_queues) {
+		NT_LOG(ERR, NTNIC, "Error invalid tx queue id\n");
+		return -1;
+	}
+
+	eth_dev->data->tx_queues[tx_queue_id] = tx_q;
+
+	/* Calculate target ID for HW - to be used in NTDVIO0 header bypass_port */
+	if (tx_q->rss_target_id >= 0) {
+		/* bypass to a multiqueue port - qsl-hsh index */
+		tx_q->target_id = tx_q->rss_target_id + 0x90;
+
+	} else if (internals->vpq[tx_queue_id].hw_id > -1) {
+		/* virtual port - queue index */
+		tx_q->target_id = internals->vpq[tx_queue_id].hw_id;
+
+	} else {
+		/* Phy port - phy port identifier */
+		/* output/bypass to MAC */
+		tx_q->target_id = (int)(tx_q->port + 0x80);
+	}
+
+	if (allocate_hw_virtio_queues(eth_dev, EXCEPTION_PATH_HID, &tx_q->hwq,
+			SG_NB_HW_TX_DESCRIPTORS, SG_HW_TX_PKT_BUFFER_SIZE) < 0) {
+		return -1;
+	}
+
+	tx_q->nb_hw_tx_descr = SG_NB_HW_TX_DESCRIPTORS;
+
+	tx_q->profile = p_drv->ntdrv.adapter_info.fpga_info.profile;
+
+	uint32_t port, header;
+	port = tx_q->port;	/* transmit port */
+	header = 0;	/* header type VirtIO-Net */
+
+	tx_q->vq =
+		sg_ops->nthw_setup_mngd_tx_virt_queue(p_nt_drv->adapter_info.fpga_info.mp_nthw_dbs,
+			tx_q->queue.hw_id,	/* index */
+			tx_q->nb_hw_tx_descr,	/* queue size */
+			EXCEPTION_PATH_HID,	/* host_id always VF4 */
+			port,
+			/*
+			 * in_port - in vswitch mode has
+			 * to move tx port from OVS excep.
+			 * away from VM tx port,
+			 * because of QoS is matched by port id!
+			 */
+			tx_q->port + 128,
+			header,
+			&tx_q->hwq.virt_queues_ctrl,
+			tx_q->hwq.pkt_buffers,
+			SPLIT_RING,
+			-1,
+			IN_ORDER);
+
+	tx_q->enabled = 1;
+
+	NT_LOG(DBG, NTNIC, "(%i) NTNIC TX OVS-SW queues successfully setup\n", internals->port);
+
+	if (internals->type == PORT_TYPE_PHYSICAL) {
+		struct adapter_info_s *p_adapter_info = &internals->p_drv->ntdrv.adapter_info;
+		NT_LOG(DBG, NTNIC, "Port %i is ready for data. Enable port\n",
+			internals->n_intf_no);
+		port_ops->set_adm_state(p_adapter_info, internals->n_intf_no, true);
+	}
+
+	return 0;
+}
+
 static int eth_rx_queue_start(struct rte_eth_dev *eth_dev, uint16_t rx_queue_id)
 {
 	eth_dev->data->rx_queue_state[rx_queue_id] = RTE_ETH_QUEUE_STATE_STARTED;
@@ -580,9 +901,11 @@ static const struct eth_dev_ops nthw_eth_dev_ops = {
 	.link_update = eth_link_update,
 	.dev_infos_get = eth_dev_infos_get,
 	.fw_version_get = eth_fw_version_get,
+	.rx_queue_setup = eth_rx_scg_queue_setup,
 	.rx_queue_start = eth_rx_queue_start,
 	.rx_queue_stop = eth_rx_queue_stop,
 	.rx_queue_release = eth_rx_queue_release,
+	.tx_queue_setup = eth_tx_scg_queue_setup,
 	.tx_queue_start = eth_tx_queue_start,
 	.tx_queue_stop = eth_tx_queue_stop,
 	.tx_queue_release = eth_tx_queue_release,
-- 
2.45.0
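
A note for context (not part of the patch): the new callbacks are reached
through the generic ethdev API, so no application changes are needed. Below
is a minimal, untested sketch of the call path that ends in
eth_rx_scg_queue_setup() and eth_tx_scg_queue_setup(); the port id, mbuf
pool sizing and descriptor count are illustrative assumptions only.

#include <rte_eal.h>
#include <rte_ethdev.h>
#include <rte_mbuf.h>

#define PORT_ID 0		/* illustrative: first probed port */
#define NB_DESC 1024		/* hint only; this PMD sizes the HW rings itself */

int main(int argc, char **argv)
{
	struct rte_eth_conf port_conf = { 0 };
	struct rte_mempool *mb_pool;
	int socket;

	if (rte_eal_init(argc, argv) < 0)
		return -1;

	socket = rte_eth_dev_socket_id(PORT_ID);

	/* mbuf pool handed down to .rx_queue_setup as the mb_pool argument */
	mb_pool = rte_pktmbuf_pool_create("mb_pool", 8192, 256, 0,
		RTE_MBUF_DEFAULT_BUF_SIZE, socket);
	if (mb_pool == NULL)
		return -1;

	/* one RX and one TX queue; configure must precede queue setup */
	if (rte_eth_dev_configure(PORT_ID, 1, 1, &port_conf) < 0)
		return -1;

	/* dispatched to eth_rx_scg_queue_setup() via nthw_eth_dev_ops */
	if (rte_eth_rx_queue_setup(PORT_ID, 0, NB_DESC, socket, NULL, mb_pool) < 0)
		return -1;

	/* dispatched to eth_tx_scg_queue_setup() via nthw_eth_dev_ops */
	if (rte_eth_tx_queue_setup(PORT_ID, 0, NB_DESC, socket, NULL) < 0)
		return -1;

	return rte_eth_dev_start(PORT_ID);
}

The nb_rx_desc/nb_tx_desc hints are accepted but marked __rte_unused in the
patch; the driver always programs SG_NB_HW_RX_DESCRIPTORS and
SG_NB_HW_TX_DESCRIPTORS hardware descriptors.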