From mboxrd@z Thu Jan 1 00:00:00 1970
From: Serhii Iliushyk <sil-plv@napatech.com>
To: dev@dpdk.org
Cc: mko-plv@napatech.com, sil-plv@napatech.com, ckm@napatech.com,
	andrew.rybchenko@oktetlabs.ru, ferruh.yigit@amd.com,
	Danylo Vodopianov
Subject: [PATCH v1 43/50] net/ntnic: add split-queue support
Date: Sun, 6 Oct 2024 22:37:10 +0200
Message-ID: <20241006203728.330792-44-sil-plv@napatech.com>
X-Mailer: git-send-email 2.45.0
In-Reply-To: <20241006203728.330792-1-sil-plv@napatech.com>
References: <20241006203728.330792-1-sil-plv@napatech.com>
MIME-Version: 1.0
Content-Type: text/plain
Content-Transfer-Encoding: 8bit
List-Id: DPDK patches and discussions

From: Danylo Vodopianov

Split-queue support was added. Internal structures were extended with
additional management fields. A managed virtual queue is set up based
on the queue type and configuration parameters. DBS control registers
were added.
Signed-off-by: Danylo Vodopianov

---
 drivers/net/ntnic/dbsconfig/ntnic_dbsconfig.c | 411 +++++++++++++++++-
 drivers/net/ntnic/include/ntnic_dbs.h         |  19 +
 drivers/net/ntnic/include/ntnic_virt_queue.h  |   7 +
 drivers/net/ntnic/nthw/dbs/nthw_dbs.c         | 125 +++++-
 .../ntnic/nthw/supported/nthw_fpga_reg_defs.h |   1 +
 .../nthw/supported/nthw_fpga_reg_defs_dbs.h   |  79 ++++
 6 files changed, 640 insertions(+), 2 deletions(-)
 create mode 100644 drivers/net/ntnic/nthw/supported/nthw_fpga_reg_defs_dbs.h

diff --git a/drivers/net/ntnic/dbsconfig/ntnic_dbsconfig.c b/drivers/net/ntnic/dbsconfig/ntnic_dbsconfig.c
index fc1dab6c5f..e69cf7ad21 100644
--- a/drivers/net/ntnic/dbsconfig/ntnic_dbsconfig.c
+++ b/drivers/net/ntnic/dbsconfig/ntnic_dbsconfig.c
@@ -10,6 +10,7 @@
 #include "ntnic_mod_reg.h"
 #include "ntlog.h"
 
+#define STRUCT_ALIGNMENT (4 * 1024LU)
 #define MAX_VIRT_QUEUES 128
 
 #define LAST_QUEUE 127
@@ -34,12 +35,79 @@
 #define TX_AM_POLL_SPEED 5
 #define TX_UW_POLL_SPEED 8
 
+#define VIRTQ_AVAIL_F_NO_INTERRUPT 1
+
+struct __rte_aligned(8) virtq_avail {
+	uint16_t flags;
+	uint16_t idx;
+	uint16_t ring[];	/* Queue Size */
+};
+
+struct __rte_aligned(8) virtq_used_elem {
+	/* Index of start of used descriptor chain. */
+	uint32_t id;
+	/* Total length of the descriptor chain which was used (written to) */
+	uint32_t len;
+};
+
+struct __rte_aligned(8) virtq_used {
+	uint16_t flags;
+	uint16_t idx;
+	struct virtq_used_elem ring[];	/* Queue Size */
+};
+
+struct virtq_struct_layout_s {
+	size_t used_offset;
+	size_t desc_offset;
+};
+
 enum nthw_virt_queue_usage {
-	NTHW_VIRTQ_UNUSED = 0
+	NTHW_VIRTQ_UNUSED = 0,
+	NTHW_VIRTQ_UNMANAGED,
+	NTHW_VIRTQ_MANAGED
 };
 
 struct nthw_virt_queue {
+	/* Pointers to virt-queue structs */
+	struct {
+		/* SPLIT virtqueue */
+		struct virtq_avail *p_avail;
+		struct virtq_used *p_used;
+		struct virtq_desc *p_desc;
+		/* Control variables for virt-queue structs */
+		uint16_t am_idx;
+		uint16_t used_idx;
+		uint16_t cached_idx;
+		uint16_t tx_descr_avail_idx;
+	};
+
+	/* Array with packet buffers */
+	struct nthw_memory_descriptor *p_virtual_addr;
+
+	/* Queue configuration info */
+	nthw_dbs_t *mp_nthw_dbs;
+
+	enum nthw_virt_queue_usage usage;
+	uint16_t irq_vector;
+	uint16_t vq_type;
+	uint16_t in_order;
+
+	uint16_t queue_size;
+	uint32_t index;
+	uint32_t am_enable;
+	uint32_t host_id;
+	uint32_t port;	/* Only used by TX queues */
+	uint32_t virtual_port;	/* Only used by TX queues */
+	/*
+	 * Only used by TX queues:
+	 * 0: VirtIO-Net header (12 bytes).
+	 * 1: Napatech DVIO0 descriptor (12 bytes).
+	 */
+};
+
+struct pvirtq_struct_layout_s {
+	size_t driver_event_offset;
+	size_t device_event_offset;
 };
 
 static struct nthw_virt_queue rxvq[MAX_VIRT_QUEUES];
@@ -143,7 +211,348 @@ static int nthw_virt_queue_init(struct fpga_info_s *p_fpga_info)
 	return 0;
 }
 
+static struct virtq_struct_layout_s dbs_calc_struct_layout(uint32_t queue_size)
+{
+	/* + sizeof(uint16_t); ("avail->used_event" is not used) */
+	size_t avail_mem = sizeof(struct virtq_avail) + queue_size * sizeof(uint16_t);
+	size_t avail_mem_aligned = ((avail_mem % STRUCT_ALIGNMENT) == 0)
+		? avail_mem
+		: STRUCT_ALIGNMENT * (avail_mem / STRUCT_ALIGNMENT + 1);
+
+	/* + sizeof(uint16_t); ("used->avail_event" is not used) */
+	size_t used_mem = sizeof(struct virtq_used) + queue_size * sizeof(struct virtq_used_elem);
+	size_t used_mem_aligned = ((used_mem % STRUCT_ALIGNMENT) == 0)
+		? used_mem
+		: STRUCT_ALIGNMENT * (used_mem / STRUCT_ALIGNMENT + 1);
+
+	struct virtq_struct_layout_s virtq_layout;
+	virtq_layout.used_offset = avail_mem_aligned;
+	virtq_layout.desc_offset = avail_mem_aligned + used_mem_aligned;
+
+	return virtq_layout;
+}
+
+static void dbs_initialize_avail_struct(void *addr, uint16_t queue_size,
+	uint16_t initial_avail_idx)
+{
+	uint16_t i;
+	struct virtq_avail *p_avail = (struct virtq_avail *)addr;
+
+	p_avail->flags = VIRTQ_AVAIL_F_NO_INTERRUPT;
+	p_avail->idx = initial_avail_idx;
+
+	for (i = 0; i < queue_size; ++i)
+		p_avail->ring[i] = i;
+}
+
+static void dbs_initialize_used_struct(void *addr, uint16_t queue_size)
+{
+	int i;
+	struct virtq_used *p_used = (struct virtq_used *)addr;
+
+	p_used->flags = 1;
+	p_used->idx = 0;
+
+	for (i = 0; i < queue_size; ++i) {
+		p_used->ring[i].id = 0;
+		p_used->ring[i].len = 0;
+	}
+}
+
+static void
+dbs_initialize_descriptor_struct(void *addr,
+	struct nthw_memory_descriptor *packet_buffer_descriptors,
+	uint16_t queue_size, uint16_t flgs)
+{
+	if (packet_buffer_descriptors) {
+		int i;
+		struct virtq_desc *p_desc = (struct virtq_desc *)addr;
+
+		for (i = 0; i < queue_size; ++i) {
+			p_desc[i].addr = (uint64_t)packet_buffer_descriptors[i].phys_addr;
+			p_desc[i].len = packet_buffer_descriptors[i].len;
+			p_desc[i].flags = flgs;
+			p_desc[i].next = 0;
+		}
+	}
+}
+
+static void
+dbs_initialize_virt_queue_structs(void *avail_struct_addr, void *used_struct_addr,
+	void *desc_struct_addr,
+	struct nthw_memory_descriptor *packet_buffer_descriptors,
+	uint16_t queue_size, uint16_t initial_avail_idx, uint16_t flgs)
+{
+	dbs_initialize_avail_struct(avail_struct_addr, queue_size, initial_avail_idx);
+	dbs_initialize_used_struct(used_struct_addr, queue_size);
+	dbs_initialize_descriptor_struct(desc_struct_addr, packet_buffer_descriptors, queue_size,
+		flgs);
+}
+
+static struct nthw_virt_queue *nthw_setup_rx_virt_queue(nthw_dbs_t *p_nthw_dbs,
+	uint32_t index,
+	uint16_t start_idx,
+	uint16_t start_ptr,
+	void *avail_struct_phys_addr,
+	void *used_struct_phys_addr,
+	void *desc_struct_phys_addr,
+	uint16_t queue_size,
+	uint32_t host_id,
+	uint32_t header,
+	uint32_t vq_type,
+	int irq_vector)
+{
+	(void)header;
+	(void)desc_struct_phys_addr;
+	(void)avail_struct_phys_addr;
+	(void)used_struct_phys_addr;
+
+	/*
+	 * 5. Initialize all RX queues (all DBS_RX_QUEUES of them) using the
+	 * DBS.RX_INIT register.
+	 */
+	dbs_init_rx_queue(p_nthw_dbs, index, start_idx, start_ptr);
+
+	/* Save queue state */
+	rxvq[index].usage = NTHW_VIRTQ_UNMANAGED;
+	rxvq[index].mp_nthw_dbs = p_nthw_dbs;
+	rxvq[index].index = index;
+	rxvq[index].queue_size = queue_size;
+	rxvq[index].am_enable = (irq_vector < 0) ? RX_AM_ENABLE : RX_AM_DISABLE;
+	rxvq[index].host_id = host_id;
+	rxvq[index].vq_type = vq_type;
+	rxvq[index].in_order = 0;	/* not used */
+	rxvq[index].irq_vector = irq_vector;
+
+	/* Return queue handle */
+	return &rxvq[index];
+}
+
+static struct nthw_virt_queue *nthw_setup_tx_virt_queue(nthw_dbs_t *p_nthw_dbs,
+	uint32_t index,
+	uint16_t start_idx,
+	uint16_t start_ptr,
+	void *avail_struct_phys_addr,
+	void *used_struct_phys_addr,
+	void *desc_struct_phys_addr,
+	uint16_t queue_size,
+	uint32_t host_id,
+	uint32_t port,
+	uint32_t virtual_port,
+	uint32_t header,
+	uint32_t vq_type,
+	int irq_vector,
+	uint32_t in_order)
+{
+	(void)header;
+	(void)desc_struct_phys_addr;
+	(void)avail_struct_phys_addr;
+	(void)used_struct_phys_addr;
+
+	/*
+	 * 5. Initialize all TX queues (all DBS_TX_QUEUES of them) using the
+	 * DBS.TX_INIT register.
+	 */
+	dbs_init_tx_queue(p_nthw_dbs, index, start_idx, start_ptr);
+
+	/* Save queue state */
+	txvq[index].usage = NTHW_VIRTQ_UNMANAGED;
+	txvq[index].mp_nthw_dbs = p_nthw_dbs;
+	txvq[index].index = index;
+	txvq[index].queue_size = queue_size;
+	txvq[index].am_enable = (irq_vector < 0) ? TX_AM_ENABLE : TX_AM_DISABLE;
+	txvq[index].host_id = host_id;
+	txvq[index].port = port;
+	txvq[index].virtual_port = virtual_port;
+	txvq[index].vq_type = vq_type;
+	txvq[index].in_order = in_order;
+	txvq[index].irq_vector = irq_vector;
+
+	/* Return queue handle */
+	return &txvq[index];
+}
+
+static struct nthw_virt_queue *
+nthw_setup_mngd_rx_virt_queue_split(nthw_dbs_t *p_nthw_dbs,
+	uint32_t index,
+	uint32_t queue_size,
+	uint32_t host_id,
+	uint32_t header,
+	struct nthw_memory_descriptor *p_virt_struct_area,
+	struct nthw_memory_descriptor *p_packet_buffers,
+	int irq_vector)
+{
+	struct virtq_struct_layout_s virtq_struct_layout = dbs_calc_struct_layout(queue_size);
+
+	dbs_initialize_virt_queue_structs(p_virt_struct_area->virt_addr,
+		(char *)p_virt_struct_area->virt_addr + virtq_struct_layout.used_offset,
+		(char *)p_virt_struct_area->virt_addr + virtq_struct_layout.desc_offset,
+		p_packet_buffers,
+		(uint16_t)queue_size,
+		p_packet_buffers ? (uint16_t)queue_size : 0,
+		VIRTQ_DESC_F_WRITE /* Rx */);
+
+	rxvq[index].p_avail = p_virt_struct_area->virt_addr;
+	rxvq[index].p_used =
+		(void *)((char *)p_virt_struct_area->virt_addr + virtq_struct_layout.used_offset);
+	rxvq[index].p_desc =
+		(void *)((char *)p_virt_struct_area->virt_addr + virtq_struct_layout.desc_offset);
+
+	rxvq[index].am_idx = p_packet_buffers ? (uint16_t)queue_size : 0;
+	rxvq[index].used_idx = 0;
+	rxvq[index].cached_idx = 0;
+	rxvq[index].p_virtual_addr = NULL;
+
+	if (p_packet_buffers) {
+		rxvq[index].p_virtual_addr = malloc(queue_size * sizeof(*p_packet_buffers));
+		memcpy(rxvq[index].p_virtual_addr, p_packet_buffers,
+			queue_size * sizeof(*p_packet_buffers));
+	}
+
+	nthw_setup_rx_virt_queue(p_nthw_dbs, index, 0, 0, (void *)p_virt_struct_area->phys_addr,
+		(char *)p_virt_struct_area->phys_addr + virtq_struct_layout.used_offset,
+		(char *)p_virt_struct_area->phys_addr + virtq_struct_layout.desc_offset,
+		(uint16_t)queue_size, host_id, header, SPLIT_RING, irq_vector);
+
+	rxvq[index].usage = NTHW_VIRTQ_MANAGED;
+
+	return &rxvq[index];
+}
+
+static struct nthw_virt_queue *
+nthw_setup_mngd_tx_virt_queue_split(nthw_dbs_t *p_nthw_dbs,
+	uint32_t index,
+	uint32_t queue_size,
+	uint32_t host_id,
+	uint32_t port,
+	uint32_t virtual_port,
+	uint32_t header,
+	int irq_vector,
+	uint32_t in_order,
+	struct nthw_memory_descriptor *p_virt_struct_area,
+	struct nthw_memory_descriptor *p_packet_buffers)
+{
+	struct virtq_struct_layout_s virtq_struct_layout = dbs_calc_struct_layout(queue_size);
+
+	dbs_initialize_virt_queue_structs(p_virt_struct_area->virt_addr,
+		(char *)p_virt_struct_area->virt_addr + virtq_struct_layout.used_offset,
+		(char *)p_virt_struct_area->virt_addr + virtq_struct_layout.desc_offset,
+		p_packet_buffers,
+		(uint16_t)queue_size,
+		0,
+		0 /* Tx */);
+
+	txvq[index].p_avail = p_virt_struct_area->virt_addr;
+	txvq[index].p_used =
+		(void *)((char *)p_virt_struct_area->virt_addr + virtq_struct_layout.used_offset);
+	txvq[index].p_desc =
+		(void *)((char *)p_virt_struct_area->virt_addr + virtq_struct_layout.desc_offset);
+	txvq[index].queue_size = (uint16_t)queue_size;
+	txvq[index].am_idx = 0;
+	txvq[index].used_idx = 0;
+	txvq[index].cached_idx = 0;
+	txvq[index].p_virtual_addr = NULL;
+
+	txvq[index].tx_descr_avail_idx = 0;
+
+	if (p_packet_buffers) {
+		txvq[index].p_virtual_addr = malloc(queue_size * sizeof(*p_packet_buffers));
+		memcpy(txvq[index].p_virtual_addr, p_packet_buffers,
+			queue_size * sizeof(*p_packet_buffers));
+	}
+
+	nthw_setup_tx_virt_queue(p_nthw_dbs, index, 0, 0, (void *)p_virt_struct_area->phys_addr,
+		(char *)p_virt_struct_area->phys_addr + virtq_struct_layout.used_offset,
+		(char *)p_virt_struct_area->phys_addr + virtq_struct_layout.desc_offset,
+		(uint16_t)queue_size, host_id, port, virtual_port, header,
+		SPLIT_RING, irq_vector, in_order);
+
+	txvq[index].usage = NTHW_VIRTQ_MANAGED;
+
+	return &txvq[index];
+}
+
+/*
+ * Create a Managed Rx Virt Queue
+ *
+ * Notice: The queue will be created with interrupts disabled.
+ *   If interrupts are required, make sure to call nthw_enable_rx_virt_queue()
+ *   afterwards.
+ */
+static struct nthw_virt_queue *
+nthw_setup_mngd_rx_virt_queue(nthw_dbs_t *p_nthw_dbs,
+	uint32_t index,
+	uint32_t queue_size,
+	uint32_t host_id,
+	uint32_t header,
+	struct nthw_memory_descriptor *p_virt_struct_area,
+	struct nthw_memory_descriptor *p_packet_buffers,
+	uint32_t vq_type,
+	int irq_vector)
+{
+	switch (vq_type) {
+	case SPLIT_RING:
+		return nthw_setup_mngd_rx_virt_queue_split(p_nthw_dbs, index, queue_size,
+				host_id, header, p_virt_struct_area,
+				p_packet_buffers, irq_vector);
+
+	default:
+		break;
+	}
+
+	return NULL;
+}
+
+/*
+ * Create a Managed Tx Virt Queue
+ *
+ * Notice: The queue will be created with interrupts disabled.
+ *   If interrupts are required, make sure to call nthw_enable_tx_virt_queue()
+ *   afterwards.
+ */
+static struct nthw_virt_queue *
+nthw_setup_mngd_tx_virt_queue(nthw_dbs_t *p_nthw_dbs,
+	uint32_t index,
+	uint32_t queue_size,
+	uint32_t host_id,
+	uint32_t port,
+	uint32_t virtual_port,
+	uint32_t header,
+	struct nthw_memory_descriptor *p_virt_struct_area,
+	struct nthw_memory_descriptor *p_packet_buffers,
+	uint32_t vq_type,
+	int irq_vector,
+	uint32_t in_order)
+{
+	switch (vq_type) {
+	case SPLIT_RING:
+		return nthw_setup_mngd_tx_virt_queue_split(p_nthw_dbs, index, queue_size,
+				host_id, port, virtual_port, header,
+				irq_vector, in_order,
+				p_virt_struct_area,
+				p_packet_buffers);
+
+	default:
+		break;
+	}
+
+	return NULL;
+}
+
 static struct sg_ops_s sg_ops = {
+	.nthw_setup_rx_virt_queue = nthw_setup_rx_virt_queue,
+	.nthw_setup_tx_virt_queue = nthw_setup_tx_virt_queue,
+	.nthw_setup_mngd_rx_virt_queue = nthw_setup_mngd_rx_virt_queue,
+	.nthw_setup_mngd_tx_virt_queue = nthw_setup_mngd_tx_virt_queue,
 	.nthw_virt_queue_init = nthw_virt_queue_init
 };

diff --git a/drivers/net/ntnic/include/ntnic_dbs.h b/drivers/net/ntnic/include/ntnic_dbs.h
index a64d2a0aeb..4e6236e8b4 100644
--- a/drivers/net/ntnic/include/ntnic_dbs.h
+++ b/drivers/net/ntnic/include/ntnic_dbs.h
@@ -47,6 +47,11 @@ struct nthw_dbs_s {
 	nthw_field_t *mp_fld_rx_init_val_idx;
 	nthw_field_t *mp_fld_rx_init_val_ptr;
 
+	nthw_register_t *mp_reg_rx_ptr;
+	nthw_field_t *mp_fld_rx_ptr_ptr;
+	nthw_field_t *mp_fld_rx_ptr_queue;
+	nthw_field_t *mp_fld_rx_ptr_valid;
+
 	nthw_register_t *mp_reg_tx_init;
 	nthw_field_t *mp_fld_tx_init_init;
 	nthw_field_t *mp_fld_tx_init_queue;
@@ -56,6 +61,20 @@ struct nthw_dbs_s {
 	nthw_field_t *mp_fld_tx_init_val_idx;
 	nthw_field_t *mp_fld_tx_init_val_ptr;
 
+	nthw_register_t *mp_reg_tx_ptr;
+	nthw_field_t *mp_fld_tx_ptr_ptr;
+	nthw_field_t *mp_fld_tx_ptr_queue;
+	nthw_field_t *mp_fld_tx_ptr_valid;
+
+	nthw_register_t *mp_reg_rx_idle;
+	nthw_field_t *mp_fld_rx_idle_idle;
+	nthw_field_t *mp_fld_rx_idle_queue;
+	nthw_field_t *mp_fld_rx_idle_busy;
+
+	nthw_register_t *mp_reg_tx_idle;
+	nthw_field_t *mp_fld_tx_idle_idle;
+	nthw_field_t *mp_fld_tx_idle_queue;
+	nthw_field_t *mp_fld_tx_idle_busy;
 };
 
 typedef struct nthw_dbs_s nthw_dbs_t;

diff --git a/drivers/net/ntnic/include/ntnic_virt_queue.h b/drivers/net/ntnic/include/ntnic_virt_queue.h
index f8842819e4..97cb474dc8 100644
--- a/drivers/net/ntnic/include/ntnic_virt_queue.h
+++ b/drivers/net/ntnic/include/ntnic_virt_queue.h
@@ -23,6 +23,13 @@ struct nthw_virt_queue;
  * contiguous) In Used descriptors it must be ignored
  */
 #define VIRTQ_DESC_F_NEXT 1
+/*
+ * SPLIT : This marks a buffer as device write-only (otherwise device read-only).
+ * PACKED: This marks a descriptor as device write-only (otherwise device read-only).
+ * PACKED: In a used descriptor, this bit is used to specify whether any data has been written by
+ * the device into any parts of the buffer.
+ */
+#define VIRTQ_DESC_F_WRITE 2
 
 /*
  * Split Ring virtq Descriptor

diff --git a/drivers/net/ntnic/nthw/dbs/nthw_dbs.c b/drivers/net/ntnic/nthw/dbs/nthw_dbs.c
index 853d7bc1ec..cd1123b6f3 100644
--- a/drivers/net/ntnic/nthw/dbs/nthw_dbs.c
+++ b/drivers/net/ntnic/nthw/dbs/nthw_dbs.c
@@ -44,12 +44,135 @@ int dbs_init(nthw_dbs_t *p, nthw_fpga_t *p_fpga, int n_instance)
 			p->mp_fpga->p_fpga_info->mp_adapter_id_str, p->mn_instance);
 	}
 
+	p->mp_reg_rx_control = nthw_module_get_register(p->mp_mod_dbs, DBS_RX_CONTROL);
+	p->mp_fld_rx_control_last_queue =
+		nthw_register_get_field(p->mp_reg_rx_control, DBS_RX_CONTROL_LQ);
+	p->mp_fld_rx_control_avail_monitor_enable =
+		nthw_register_get_field(p->mp_reg_rx_control, DBS_RX_CONTROL_AME);
+	p->mp_fld_rx_control_avail_monitor_scan_speed =
+		nthw_register_get_field(p->mp_reg_rx_control, DBS_RX_CONTROL_AMS);
+	p->mp_fld_rx_control_used_write_enable =
+		nthw_register_get_field(p->mp_reg_rx_control, DBS_RX_CONTROL_UWE);
+	p->mp_fld_rx_control_used_writer_update_speed =
+		nthw_register_get_field(p->mp_reg_rx_control, DBS_RX_CONTROL_UWS);
+	p->mp_fld_rx_control_rx_queues_enable =
+		nthw_register_get_field(p->mp_reg_rx_control, DBS_RX_CONTROL_QE);
+
+	p->mp_reg_tx_control = nthw_module_get_register(p->mp_mod_dbs, DBS_TX_CONTROL);
+	p->mp_fld_tx_control_last_queue =
+		nthw_register_get_field(p->mp_reg_tx_control, DBS_TX_CONTROL_LQ);
+	p->mp_fld_tx_control_avail_monitor_enable =
+		nthw_register_get_field(p->mp_reg_tx_control, DBS_TX_CONTROL_AME);
+	p->mp_fld_tx_control_avail_monitor_scan_speed =
+		nthw_register_get_field(p->mp_reg_tx_control, DBS_TX_CONTROL_AMS);
+	p->mp_fld_tx_control_used_write_enable =
+		nthw_register_get_field(p->mp_reg_tx_control, DBS_TX_CONTROL_UWE);
+	p->mp_fld_tx_control_used_writer_update_speed =
+		nthw_register_get_field(p->mp_reg_tx_control, DBS_TX_CONTROL_UWS);
+	p->mp_fld_tx_control_tx_queues_enable =
+		nthw_register_get_field(p->mp_reg_tx_control, DBS_TX_CONTROL_QE);
+
+	p->mp_reg_rx_init = nthw_module_get_register(p->mp_mod_dbs, DBS_RX_INIT);
+	p->mp_fld_rx_init_init = nthw_register_get_field(p->mp_reg_rx_init, DBS_RX_INIT_INIT);
+	p->mp_fld_rx_init_queue = nthw_register_get_field(p->mp_reg_rx_init, DBS_RX_INIT_QUEUE);
+	p->mp_fld_rx_init_busy = nthw_register_get_field(p->mp_reg_rx_init, DBS_RX_INIT_BUSY);
+
+	p->mp_reg_rx_init_val = nthw_module_query_register(p->mp_mod_dbs, DBS_RX_INIT_VAL);
+
+	if (p->mp_reg_rx_init_val) {
+		p->mp_fld_rx_init_val_idx =
+			nthw_register_query_field(p->mp_reg_rx_init_val, DBS_RX_INIT_VAL_IDX);
+		p->mp_fld_rx_init_val_ptr =
+			nthw_register_query_field(p->mp_reg_rx_init_val, DBS_RX_INIT_VAL_PTR);
+	}
+
+	p->mp_reg_rx_ptr = nthw_module_query_register(p->mp_mod_dbs, DBS_RX_PTR);
+
+	if (p->mp_reg_rx_ptr) {
+		p->mp_fld_rx_ptr_ptr = nthw_register_query_field(p->mp_reg_rx_ptr, DBS_RX_PTR_PTR);
+		p->mp_fld_rx_ptr_queue =
+			nthw_register_query_field(p->mp_reg_rx_ptr, DBS_RX_PTR_QUEUE);
+		p->mp_fld_rx_ptr_valid =
+			nthw_register_query_field(p->mp_reg_rx_ptr, DBS_RX_PTR_VALID);
+	}
+
+	p->mp_reg_tx_init = nthw_module_get_register(p->mp_mod_dbs, DBS_TX_INIT);
+	p->mp_fld_tx_init_init = nthw_register_get_field(p->mp_reg_tx_init, DBS_TX_INIT_INIT);
+	p->mp_fld_tx_init_queue = nthw_register_get_field(p->mp_reg_tx_init, DBS_TX_INIT_QUEUE);
+	p->mp_fld_tx_init_busy = nthw_register_get_field(p->mp_reg_tx_init, DBS_TX_INIT_BUSY);
+
+	p->mp_reg_tx_init_val = nthw_module_query_register(p->mp_mod_dbs, DBS_TX_INIT_VAL);
+
+	if (p->mp_reg_tx_init_val) {
+		p->mp_fld_tx_init_val_idx =
+			nthw_register_query_field(p->mp_reg_tx_init_val, DBS_TX_INIT_VAL_IDX);
+		p->mp_fld_tx_init_val_ptr =
+			nthw_register_query_field(p->mp_reg_tx_init_val, DBS_TX_INIT_VAL_PTR);
+	}
+
+	p->mp_reg_tx_ptr = nthw_module_query_register(p->mp_mod_dbs, DBS_TX_PTR);
+
+	if (p->mp_reg_tx_ptr) {
+		p->mp_fld_tx_ptr_ptr = nthw_register_query_field(p->mp_reg_tx_ptr, DBS_TX_PTR_PTR);
+		p->mp_fld_tx_ptr_queue =
+			nthw_register_query_field(p->mp_reg_tx_ptr, DBS_TX_PTR_QUEUE);
+		p->mp_fld_tx_ptr_valid =
+			nthw_register_query_field(p->mp_reg_tx_ptr, DBS_TX_PTR_VALID);
+	}
+
+	p->mp_reg_rx_idle = nthw_module_query_register(p->mp_mod_dbs, DBS_RX_IDLE);
+
+	if (p->mp_reg_rx_idle) {
+		p->mp_fld_rx_idle_idle =
+			nthw_register_query_field(p->mp_reg_rx_idle, DBS_RX_IDLE_IDLE);
+		p->mp_fld_rx_idle_queue =
+			nthw_register_query_field(p->mp_reg_rx_idle, DBS_RX_IDLE_QUEUE);
+		p->mp_fld_rx_idle_busy =
+			nthw_register_query_field(p->mp_reg_rx_idle, DBS_RX_IDLE_BUSY);
+	}
+
+	p->mp_reg_tx_idle = nthw_module_query_register(p->mp_mod_dbs, DBS_TX_IDLE);
+
+	if (p->mp_reg_tx_idle) {
+		p->mp_fld_tx_idle_idle =
+			nthw_register_query_field(p->mp_reg_tx_idle, DBS_TX_IDLE_IDLE);
+		p->mp_fld_tx_idle_queue =
+			nthw_register_query_field(p->mp_reg_tx_idle, DBS_TX_IDLE_QUEUE);
+		p->mp_fld_tx_idle_busy =
+			nthw_register_query_field(p->mp_reg_tx_idle, DBS_TX_IDLE_BUSY);
+	}
+
+	return 0;
+}
+
+static int dbs_reset_rx_control(nthw_dbs_t *p)
+{
+	nthw_field_set_val32(p->mp_fld_rx_control_last_queue, 0);
+	nthw_field_set_val32(p->mp_fld_rx_control_avail_monitor_enable, 0);
+	nthw_field_set_val32(p->mp_fld_rx_control_avail_monitor_scan_speed, 8);
+	nthw_field_set_val32(p->mp_fld_rx_control_used_write_enable, 0);
+	nthw_field_set_val32(p->mp_fld_rx_control_used_writer_update_speed, 5);
+	nthw_field_set_val32(p->mp_fld_rx_control_rx_queues_enable, 0);
+	nthw_register_flush(p->mp_reg_rx_control, 1);
+	return 0;
+}
+
+static int dbs_reset_tx_control(nthw_dbs_t *p)
+{
+	nthw_field_set_val32(p->mp_fld_tx_control_last_queue, 0);
+	nthw_field_set_val32(p->mp_fld_tx_control_avail_monitor_enable, 0);
+	nthw_field_set_val32(p->mp_fld_tx_control_avail_monitor_scan_speed, 5);
+	nthw_field_set_val32(p->mp_fld_tx_control_used_write_enable, 0);
+	nthw_field_set_val32(p->mp_fld_tx_control_used_writer_update_speed, 8);
+	nthw_field_set_val32(p->mp_fld_tx_control_tx_queues_enable, 0);
+	nthw_register_flush(p->mp_reg_tx_control, 1);
 	return 0;
 }
 
 void dbs_reset(nthw_dbs_t *p)
 {
-	(void)p;
+	dbs_reset_rx_control(p);
+	dbs_reset_tx_control(p);
 }
 
 int set_rx_control(nthw_dbs_t *p,

diff --git a/drivers/net/ntnic/nthw/supported/nthw_fpga_reg_defs.h b/drivers/net/ntnic/nthw/supported/nthw_fpga_reg_defs.h
index 45f9794958..3560eeda7d 100644
--- a/drivers/net/ntnic/nthw/supported/nthw_fpga_reg_defs.h
+++ b/drivers/net/ntnic/nthw/supported/nthw_fpga_reg_defs.h
@@ -16,6 +16,7 @@
 #include "nthw_fpga_reg_defs_cat.h"
 #include "nthw_fpga_reg_defs_cpy.h"
 #include "nthw_fpga_reg_defs_csu.h"
+#include "nthw_fpga_reg_defs_dbs.h"
 #include "nthw_fpga_reg_defs_flm.h"
 #include "nthw_fpga_reg_defs_gfg.h"
 #include "nthw_fpga_reg_defs_gmf.h"

diff --git a/drivers/net/ntnic/nthw/supported/nthw_fpga_reg_defs_dbs.h b/drivers/net/ntnic/nthw/supported/nthw_fpga_reg_defs_dbs.h
new file mode 100644
index 0000000000..ee5d726aab
--- /dev/null
+++ b/drivers/net/ntnic/nthw/supported/nthw_fpga_reg_defs_dbs.h
@@ -0,0 +1,79 @@
+/*
+ * SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2024 Napatech A/S
+ */
+
+/*
+ * nthw_fpga_reg_defs_dbs.h
+ *
+ * Auto-generated file - do *NOT* edit
+ *
+ */
+
+#ifndef _NTHW_FPGA_REG_DEFS_DBS_
+#define _NTHW_FPGA_REG_DEFS_DBS_
+
+/* DBS */
+#define DBS_RX_CONTROL (0xb18b2866UL)
+#define DBS_RX_CONTROL_AME (0x1f9219acUL)
+#define DBS_RX_CONTROL_AMS (0xeb46acfdUL)
+#define DBS_RX_CONTROL_LQ (0xe65f90b2UL)
+#define DBS_RX_CONTROL_QE (0x3e928d3UL)
+#define DBS_RX_CONTROL_UWE (0xb490e8dbUL)
+#define DBS_RX_CONTROL_UWS (0x40445d8aUL)
+#define DBS_RX_IDLE (0x93c723bfUL)
+#define DBS_RX_IDLE_BUSY (0x8e043b5bUL)
+#define DBS_RX_IDLE_IDLE (0x9dba27ccUL)
+#define DBS_RX_IDLE_QUEUE (0xbbddab49UL)
+#define DBS_RX_INIT (0x899772deUL)
+#define DBS_RX_INIT_BUSY (0x8576d90aUL)
+#define DBS_RX_INIT_INIT (0x8c9894fcUL)
+#define DBS_RX_INIT_QUEUE (0xa7bab8c9UL)
+#define DBS_RX_INIT_VAL (0x7789b4d8UL)
+#define DBS_RX_INIT_VAL_IDX (0xead0e2beUL)
+#define DBS_RX_INIT_VAL_PTR (0x5330810eUL)
+#define DBS_RX_PTR (0x628ce523UL)
+#define DBS_RX_PTR_PTR (0x7f834481UL)
+#define DBS_RX_PTR_QUEUE (0x4f3fa6d1UL)
+#define DBS_RX_PTR_VALID (0xbcc5ec4dUL)
+#define DBS_STATUS (0xb5f35220UL)
+#define DBS_STATUS_OK (0xcf09a30fUL)
+#define DBS_TX_CONTROL (0xbc955821UL)
+#define DBS_TX_CONTROL_AME (0xe750521aUL)
+#define DBS_TX_CONTROL_AMS (0x1384e74bUL)
+#define DBS_TX_CONTROL_LQ (0x46ba4f6fUL)
+#define DBS_TX_CONTROL_QE (0xa30cf70eUL)
+#define DBS_TX_CONTROL_UWE (0x4c52a36dUL)
+#define DBS_TX_CONTROL_UWS (0xb886163cUL)
+#define DBS_TX_IDLE (0xf0171685UL)
+#define DBS_TX_IDLE_BUSY (0x61399ebbUL)
+#define DBS_TX_IDLE_IDLE (0x7287822cUL)
+#define DBS_TX_IDLE_QUEUE (0x1b387494UL)
+#define DBS_TX_INIT (0xea4747e4UL)
+#define DBS_TX_INIT_BUSY (0x6a4b7ceaUL)
+#define DBS_TX_INIT_INIT (0x63a5311cUL)
+#define DBS_TX_INIT_QUEUE (0x75f6714UL)
+#define DBS_TX_INIT_VAL (0x9f3c7e9bUL)
+#define DBS_TX_INIT_VAL_IDX (0xc82a364cUL)
+#define DBS_TX_INIT_VAL_PTR (0x71ca55fcUL)
+#define DBS_TX_PTR (0xb4d5063eUL)
+#define DBS_TX_PTR_PTR (0x729d34c6UL)
+#define DBS_TX_PTR_QUEUE (0xa0020331UL)
+#define DBS_TX_PTR_VALID (0x53f849adUL)
+#define DBS_TX_QOS_CTRL (0x3b2c3286UL)
+#define DBS_TX_QOS_CTRL_ADR (0x666600acUL)
+#define DBS_TX_QOS_CTRL_CNT (0x766e997dUL)
+#define DBS_TX_QOS_DATA (0x94fdb09fUL)
+#define DBS_TX_QOS_DATA_BS (0x2c394071UL)
+#define DBS_TX_QOS_DATA_EN (0x7eba6fUL)
+#define DBS_TX_QOS_DATA_IR (0xb8caa92cUL)
+#define DBS_TX_QOS_DATA_MUL (0xd7407a67UL)
+#define DBS_TX_QOS_RATE (0xe6e27cc5UL)
+#define DBS_TX_QOS_RATE_DIV (0x8cd07ba3UL)
+#define DBS_TX_QOS_RATE_MUL (0x9814e40bUL)
+
+#endif	/* _NTHW_FPGA_REG_DEFS_DBS_ */
+
+/*
+ * Auto-generated file - do *NOT* edit
+ */
-- 
2.45.0