From: Serhii Iliushyk
To: dev@dpdk.org
Cc: mko-plv@napatech.com, sil-plv@napatech.com, ckm@napatech.com,
	andrew.rybchenko@oktetlabs.ru, ferruh.yigit@amd.com,
	Danylo Vodopianov
Subject: [PATCH v2 49/50] net/ntnic: add functions for releasing virt queues
Date: Mon, 7 Oct 2024 21:34:25 +0200
Message-ID: <20241007193436.675785-50-sil-plv@napatech.com>
X-Mailer: git-send-email 2.45.0
In-Reply-To: <20241007193436.675785-1-sil-plv@napatech.com>
References: <20241006203728.330792-2-sil-plv@napatech.com>
	<20241007193436.675785-1-sil-plv@napatech.com>
MIME-Version: 1.0
Content-Type: text/plain
Content-Transfer-Encoding: 8bit
From: Danylo Vodopianov

Implemented handling of busy states and shutdown of hardware queues.
Added functionality for releasing RX and TX virtual queue resources and
for releasing managed packets back into the availability ring.
Updated the sg_ops structure to include the new queue management
functions.

Signed-off-by: Danylo Vodopianov
---
 drivers/net/ntnic/dbsconfig/ntnic_dbsconfig.c | 353 ++++++++++++++++++
 drivers/net/ntnic/include/ntnic_dbs.h         |   4 +
 drivers/net/ntnic/include/ntnic_virt_queue.h  |   6 +
 drivers/net/ntnic/nthw/dbs/nthw_dbs.c         |  45 +++
 4 files changed, 408 insertions(+)

diff --git a/drivers/net/ntnic/dbsconfig/ntnic_dbsconfig.c b/drivers/net/ntnic/dbsconfig/ntnic_dbsconfig.c
index 46b4c4415c..6d7862791d 100644
--- a/drivers/net/ntnic/dbsconfig/ntnic_dbsconfig.c
+++ b/drivers/net/ntnic/dbsconfig/ntnic_dbsconfig.c
@@ -6,6 +6,7 @@
 #include
 
 #include "ntos_drv.h"
+#include "nt_util.h"
 #include "ntnic_virt_queue.h"
 #include "ntnic_mod_reg.h"
 #include "ntlog.h"
@@ -37,6 +38,26 @@
 
 #define VIRTQ_AVAIL_F_NO_INTERRUPT 1
 
+#define vq_log_arg(vq, format, ...)
+
+/*
+ * Packed Ring helper macros
+ */
+#define PACKED(vq_type) ((vq_type) == PACKED_RING ? 1 : 0)
+
+#define avail_flag(vq) ((vq)->avail_wrap_count ? VIRTQ_DESC_F_AVAIL : 0)
+#define used_flag_inv(vq) ((vq)->avail_wrap_count ? 0 : VIRTQ_DESC_F_USED)
+
+#define inc_avail(vq, num) \
+	do { \
+		struct nthw_virt_queue *temp_vq = (vq); \
+		temp_vq->next_avail += (num); \
+		if (temp_vq->next_avail >= temp_vq->queue_size) { \
+			temp_vq->next_avail -= temp_vq->queue_size; \
+			temp_vq->avail_wrap_count ^= 1; \
+		} \
+	} while (0)
+
 struct __rte_aligned(8) virtq_avail {
 	uint16_t flags;
 	uint16_t idx;
@@ -115,6 +136,7 @@ struct nthw_virt_queue {
 	uint32_t host_id;
 	uint32_t port;	/* Only used by TX queues */
 	uint32_t virtual_port;	/* Only used by TX queues */
+	uint32_t header;
 	/*
 	 * Only used by TX queues:
 	 * 0: VirtIO-Net header (12 bytes).
@@ -417,6 +439,237 @@ static struct nthw_virt_queue *nthw_setup_rx_virt_queue(nthw_dbs_t *p_nthw_dbs,
 	return &rxvq[index];
 }
 
+static int dbs_wait_hw_queue_shutdown(struct nthw_virt_queue *vq, int rx);
+
+static int dbs_wait_on_busy(struct nthw_virt_queue *vq, uint32_t *idle, int rx)
+{
+	uint32_t busy;
+	uint32_t queue;
+	int err = 0;
+	nthw_dbs_t *p_nthw_dbs = vq->mp_nthw_dbs;
+
+	do {
+		if (rx)
+			err = get_rx_idle(p_nthw_dbs, idle, &queue, &busy);
+
+		else
+			err = get_tx_idle(p_nthw_dbs, idle, &queue, &busy);
+	} while (!err && busy);
+
+	return err;
+}
+
+static int dbs_wait_hw_queue_shutdown(struct nthw_virt_queue *vq, int rx)
+{
+	int err = 0;
+	uint32_t idle = 0;
+	nthw_dbs_t *p_nthw_dbs = vq->mp_nthw_dbs;
+
+	err = dbs_wait_on_busy(vq, &idle, rx);
+
+	if (err) {
+		if (err == -ENOTSUP) {
+			nt_os_wait_usec(200000);
+			return 0;
+		}
+
+		return -1;
+	}
+
+	do {
+		if (rx)
+			err = set_rx_idle(p_nthw_dbs, 1, vq->index);
+
+		else
+			err = set_tx_idle(p_nthw_dbs, 1, vq->index);
+
+		if (err)
+			return -1;
+
+		if (dbs_wait_on_busy(vq, &idle, rx) != 0)
+			return -1;
+
+	} while (idle == 0);
+
+	return 0;
+}
+
+static int dbs_internal_release_rx_virt_queue(struct nthw_virt_queue *rxvq)
+{
+	nthw_dbs_t *p_nthw_dbs = rxvq->mp_nthw_dbs;
+
+	if (rxvq == NULL)
+		return -1;
+
+	/* Clear UW */
+	rxvq->used_struct_phys_addr = NULL;
+
+	if (set_rx_uw_data(p_nthw_dbs, rxvq->index, (uint64_t)rxvq->used_struct_phys_addr,
+			rxvq->host_id, 0, PACKED(rxvq->vq_type), 0, 0, 0) != 0) {
+		return -1;
+	}
+
+	/* Disable AM */
+	rxvq->am_enable = RX_AM_DISABLE;
+
+	if (set_rx_am_data(p_nthw_dbs,
+			rxvq->index,
+			(uint64_t)rxvq->avail_struct_phys_addr,
+			rxvq->am_enable,
+			rxvq->host_id,
+			PACKED(rxvq->vq_type),
+			0) != 0) {
+		return -1;
+	}
+
+	/* Let the FPGA finish packet processing */
+	if (dbs_wait_hw_queue_shutdown(rxvq, 1) != 0)
+		return -1;
+
+	/* Clear rest of AM */
+	rxvq->avail_struct_phys_addr = NULL;
+	rxvq->host_id = 0;
+
+	if (set_rx_am_data(p_nthw_dbs,
+			rxvq->index,
+			(uint64_t)rxvq->avail_struct_phys_addr,
+			rxvq->am_enable,
+			rxvq->host_id,
+			PACKED(rxvq->vq_type),
+			0) != 0)
+		return -1;
+
+	/* Clear DR */
+	rxvq->desc_struct_phys_addr = NULL;
+
+	if (set_rx_dr_data(p_nthw_dbs,
+			rxvq->index,
+			(uint64_t)rxvq->desc_struct_phys_addr,
+			rxvq->host_id,
+			0,
+			rxvq->header,
+			PACKED(rxvq->vq_type)) != 0)
+		return -1;
+
+	/* Initialize queue */
+	dbs_init_rx_queue(p_nthw_dbs, rxvq->index, 0, 0);
+
+	/* Reset queue state */
+	rxvq->usage = NTHW_VIRTQ_UNUSED;
+	rxvq->mp_nthw_dbs = p_nthw_dbs;
+	rxvq->index = 0;
+	rxvq->queue_size = 0;
+
+	return 0;
+}
+
+static int nthw_release_mngd_rx_virt_queue(struct nthw_virt_queue *rxvq)
+{
+	if (rxvq == NULL || rxvq->usage != NTHW_VIRTQ_MANAGED)
+		return -1;
+
+	if (rxvq->p_virtual_addr) {
+		free(rxvq->p_virtual_addr);
+		rxvq->p_virtual_addr = NULL;
+	}
+
+	return dbs_internal_release_rx_virt_queue(rxvq);
+}
+
+static int dbs_internal_release_tx_virt_queue(struct nthw_virt_queue *txvq)
+{
+	nthw_dbs_t *p_nthw_dbs = txvq->mp_nthw_dbs;
+
+	if (txvq == NULL)
+		return -1;
+
+	/* Clear UW */
+	txvq->used_struct_phys_addr = NULL;
+
+	if (set_tx_uw_data(p_nthw_dbs, txvq->index, (uint64_t)txvq->used_struct_phys_addr,
+			txvq->host_id, 0, PACKED(txvq->vq_type), 0, 0, 0,
+			txvq->in_order) != 0) {
+		return -1;
+	}
+
+	/* Disable AM */
+	txvq->am_enable = TX_AM_DISABLE;
+
+	if (set_tx_am_data(p_nthw_dbs,
+			txvq->index,
+			(uint64_t)txvq->avail_struct_phys_addr,
+			txvq->am_enable,
+			txvq->host_id,
+			PACKED(txvq->vq_type),
+			0) != 0) {
+		return -1;
+	}
+
+	/* Let the FPGA finish packet processing */
+	if (dbs_wait_hw_queue_shutdown(txvq, 0) != 0)
+		return -1;
+
+	/* Clear rest of AM */
+	txvq->avail_struct_phys_addr = NULL;
+	txvq->host_id = 0;
+
+	if (set_tx_am_data(p_nthw_dbs,
+			txvq->index,
+			(uint64_t)txvq->avail_struct_phys_addr,
+			txvq->am_enable,
+			txvq->host_id,
+			PACKED(txvq->vq_type),
+			0) != 0) {
+		return -1;
+	}
+
+	/* Clear DR */
+	txvq->desc_struct_phys_addr = NULL;
+	txvq->port = 0;
+	txvq->header = 0;
+
+	if (set_tx_dr_data(p_nthw_dbs,
+			txvq->index,
+			(uint64_t)txvq->desc_struct_phys_addr,
+			txvq->host_id,
+			0,
+			txvq->port,
+			txvq->header,
+			PACKED(txvq->vq_type)) != 0) {
+		return -1;
+	}
+
+	/* Clear QP */
+	txvq->virtual_port = 0;
+
+	if (nthw_dbs_set_tx_qp_data(p_nthw_dbs, txvq->index, txvq->virtual_port) != 0)
+		return -1;
+
+	/* Initialize queue */
+	dbs_init_tx_queue(p_nthw_dbs, txvq->index, 0, 0);
+
+	/* Reset queue state */
+	txvq->usage = NTHW_VIRTQ_UNUSED;
+	txvq->mp_nthw_dbs = p_nthw_dbs;
+	txvq->index = 0;
+	txvq->queue_size = 0;
+
+	return 0;
+}
+
+static int nthw_release_mngd_tx_virt_queue(struct nthw_virt_queue *txvq)
+{
+	if (txvq == NULL || txvq->usage != NTHW_VIRTQ_MANAGED)
+		return -1;
+
+	if (txvq->p_virtual_addr) {
+		free(txvq->p_virtual_addr);
+		txvq->p_virtual_addr = NULL;
+	}
+
+	return dbs_internal_release_tx_virt_queue(txvq);
+}
+
 static struct nthw_virt_queue *nthw_setup_tx_virt_queue(nthw_dbs_t *p_nthw_dbs,
 	uint32_t index,
 	uint16_t start_idx,
@@ -844,11 +1097,111 @@ nthw_setup_mngd_tx_virt_queue(nthw_dbs_t *p_nthw_dbs,
 	return NULL;
 }
 
+/*
+ * Put buffers back into Avail Ring
+ */
+static void nthw_release_rx_packets(struct nthw_virt_queue *rxvq, uint16_t n)
+{
+	if (rxvq->vq_type == SPLIT_RING) {
+		rxvq->am_idx = (uint16_t)(rxvq->am_idx + n);
+		rxvq->p_avail->idx = rxvq->am_idx;
+
+	} else if (rxvq->vq_type == PACKED_RING) {
+		int i;
+		/*
+		 * Defer flags update on first segment - due to serialization towards HW and
+		 * when jumbo segments are added
+		 */
+
+		uint16_t first_flags = VIRTQ_DESC_F_WRITE | avail_flag(rxvq) | used_flag_inv(rxvq);
+		struct pvirtq_desc *first_desc = &rxvq->desc[rxvq->next_avail];
+
+		uint32_t len = rxvq->p_virtual_addr[0].len;	/* all same size */
+
+		/* Optimization point: use in-order release */
+
+		for (i = 0; i < n; i++) {
+			struct pvirtq_desc *desc = &rxvq->desc[rxvq->next_avail];
+
+			desc->id = rxvq->next_avail;
+			desc->addr = (uint64_t)rxvq->p_virtual_addr[desc->id].phys_addr;
+			desc->len = len;
+
+			if (i)
+				desc->flags = VIRTQ_DESC_F_WRITE | avail_flag(rxvq) |
+					used_flag_inv(rxvq);
+
+			inc_avail(rxvq, 1);
+		}
+
+		rte_rmb();
+		first_desc->flags = first_flags;
+	}
+}
+
+static void nthw_release_tx_packets(struct nthw_virt_queue *txvq, uint16_t n, uint16_t n_segs[])
+{
+	int i;
+
+	if (txvq->vq_type == SPLIT_RING) {
+		/* Valid because queue_size is always 2^n */
+		uint16_t queue_mask = (uint16_t)(txvq->queue_size - 1);
+
+		vq_log_arg(txvq, "pkts %i, avail idx %i, start at %i\n", n, txvq->am_idx,
+			txvq->tx_descr_avail_idx);
+
+		for (i = 0; i < n; i++) {
+			int idx = txvq->am_idx & queue_mask;
+			txvq->p_avail->ring[idx] = txvq->tx_descr_avail_idx;
+			txvq->tx_descr_avail_idx =
+				(txvq->tx_descr_avail_idx + n_segs[i]) & queue_mask;
+			txvq->am_idx++;
+		}
+
+		/* Make sure the ring has been updated before HW reads index update */
+		rte_mb();
+		txvq->p_avail->idx = txvq->am_idx;
+		vq_log_arg(txvq, "new avail idx %i, descr_idx %i\n", txvq->p_avail->idx,
+			txvq->tx_descr_avail_idx);
+
+	} else if (txvq->vq_type == PACKED_RING) {
+		/*
+		 * Defer flags update on first segment - due to serialization towards HW and
+		 * when jumbo segments are added
+		 */
+
+		uint16_t first_flags = avail_flag(txvq) | used_flag_inv(txvq);
+		struct pvirtq_desc *first_desc = &txvq->desc[txvq->next_avail];
+
+		for (i = 0; i < n; i++) {
+			struct pvirtq_desc *desc = &txvq->desc[txvq->next_avail];
+
+			desc->id = txvq->next_avail;
+			desc->addr = (uint64_t)txvq->p_virtual_addr[desc->id].phys_addr;
+
+			if (i)
+				/* bitwise-or here because next flags may already have been setup
+				 */
+				desc->flags |= avail_flag(txvq) | used_flag_inv(txvq);
+
+			inc_avail(txvq, 1);
+		}
+
+		/* Proper read barrier before FPGA may see first flags */
+		rte_rmb();
+		first_desc->flags = first_flags;
+	}
+}
+
 static struct sg_ops_s sg_ops = {
 	.nthw_setup_rx_virt_queue = nthw_setup_rx_virt_queue,
 	.nthw_setup_tx_virt_queue = nthw_setup_tx_virt_queue,
 	.nthw_setup_mngd_rx_virt_queue = nthw_setup_mngd_rx_virt_queue,
+	.nthw_release_mngd_rx_virt_queue = nthw_release_mngd_rx_virt_queue,
 	.nthw_setup_mngd_tx_virt_queue = nthw_setup_mngd_tx_virt_queue,
+	.nthw_release_mngd_tx_virt_queue = nthw_release_mngd_tx_virt_queue,
+	.nthw_release_rx_packets = nthw_release_rx_packets,
+	.nthw_release_tx_packets = nthw_release_tx_packets,
 	.nthw_virt_queue_init = nthw_virt_queue_init
 };
diff --git a/drivers/net/ntnic/include/ntnic_dbs.h b/drivers/net/ntnic/include/ntnic_dbs.h
index 64947b4d8f..d70a7c445e 100644
--- a/drivers/net/ntnic/include/ntnic_dbs.h
+++ b/drivers/net/ntnic/include/ntnic_dbs.h
@@ -256,6 +256,10 @@ int get_rx_init(nthw_dbs_t *p, uint32_t *init, uint32_t *queue, uint32_t *busy);
 int set_tx_init(nthw_dbs_t *p, uint32_t start_idx, uint32_t start_ptr, uint32_t init, uint32_t queue);
 int get_tx_init(nthw_dbs_t *p, uint32_t *init, uint32_t *queue, uint32_t *busy);
+int set_rx_idle(nthw_dbs_t *p, uint32_t idle, uint32_t queue);
+int get_rx_idle(nthw_dbs_t *p, uint32_t *idle, uint32_t *queue, uint32_t *busy);
+int set_tx_idle(nthw_dbs_t *p, uint32_t idle, uint32_t queue);
+int get_tx_idle(nthw_dbs_t *p, uint32_t *idle, uint32_t *queue, uint32_t *busy);
 int set_rx_am_data(nthw_dbs_t *p,
 		   uint32_t index,
 		   uint64_t guest_physical_address,
diff --git a/drivers/net/ntnic/include/ntnic_virt_queue.h b/drivers/net/ntnic/include/ntnic_virt_queue.h
index d4c9a9835a..b95efabf97 100644
--- a/drivers/net/ntnic/include/ntnic_virt_queue.h
+++ b/drivers/net/ntnic/include/ntnic_virt_queue.h
@@ -45,8 +45,14 @@ struct __rte_aligned(8) virtq_desc {
 	uint16_t next;
 };
 
+
+/*
+ * Packed Ring special structures and defines
+ */
+
 /* additional packed ring flags */
 #define VIRTQ_DESC_F_AVAIL (1 << 7)
+#define VIRTQ_DESC_F_USED (1 << 15)
 
 /* descr phys address must be 16 byte aligned */
 struct __rte_aligned(16) pvirtq_desc {
diff --git a/drivers/net/ntnic/nthw/dbs/nthw_dbs.c b/drivers/net/ntnic/nthw/dbs/nthw_dbs.c
index 6e1c5a5af6..89aa5428eb 100644
--- a/drivers/net/ntnic/nthw/dbs/nthw_dbs.c
+++ b/drivers/net/ntnic/nthw/dbs/nthw_dbs.c
@@ -485,6 +485,51 @@ int get_tx_init(nthw_dbs_t *p, uint32_t *init, uint32_t *queue, uint32_t *busy)
 	return 0;
 }
 
+int set_rx_idle(nthw_dbs_t *p, uint32_t idle, uint32_t queue)
+{
+	if (!p->mp_reg_rx_idle)
+		return -ENOTSUP;
+
+	nthw_field_set_val32(p->mp_fld_rx_idle_idle, idle);
+	nthw_field_set_val32(p->mp_fld_rx_idle_queue, queue);
+	nthw_register_flush(p->mp_reg_rx_idle, 1);
+	return 0;
+}
+
+int get_rx_idle(nthw_dbs_t *p, uint32_t *idle, uint32_t *queue, uint32_t *busy)
+{
+	if (!p->mp_reg_rx_idle)
+		return -ENOTSUP;
+
+	*idle = nthw_field_get_updated(p->mp_fld_rx_idle_idle);
+	*queue = 0;
+	*busy = nthw_field_get_updated(p->mp_fld_rx_idle_busy);
+	return 0;
+}
+
+int set_tx_idle(nthw_dbs_t *p, uint32_t idle, uint32_t queue)
+{
+	if (!p->mp_reg_tx_idle)
+		return -ENOTSUP;
+
+	nthw_field_set_val32(p->mp_fld_tx_idle_idle, idle);
+	nthw_field_set_val32(p->mp_fld_tx_idle_queue, queue);
+	nthw_register_flush(p->mp_reg_tx_idle, 1);
+	return 0;
+}
+
+int get_tx_idle(nthw_dbs_t *p, uint32_t *idle, uint32_t *queue, uint32_t *busy)
+{
+	if (!p->mp_reg_tx_idle)
+		return -ENOTSUP;
+
+	*idle = nthw_field_get_updated(p->mp_fld_tx_idle_idle);
+	*queue = 0;
+	*busy = nthw_field_get_updated(p->mp_fld_tx_idle_busy);
+	return 0;
+}
+
+
 static void set_rx_am_data_index(nthw_dbs_t *p, uint32_t index)
 {
 	nthw_field_set_val32(p->mp_fld_rx_avail_monitor_control_adr, index);
-- 
2.45.0
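
For reviewers, a minimal usage sketch of the callbacks this patch adds to
sg_ops, as they might be driven from a queue-teardown path. The example
function and the get_sg_ops() accessor are illustrative assumptions rather
than part of this patch (the driver's actual wiring goes through
ntnic_mod_reg.h); only the sg_ops member names and their 0 / -1 return
convention come from the code above.

static int example_queue_teardown(struct nthw_virt_queue *rxvq,
				  struct nthw_virt_queue *txvq)
{
	/* Assumed accessor for the registered scatter-gather ops; the real
	 * lookup mechanism lives in ntnic_mod_reg.h.
	 */
	const struct sg_ops_s *ops = get_sg_ops();

	if (ops == NULL)
		return -1;

	/* Each release call disables the availability monitor, waits for the
	 * FPGA to report the queue idle and clears the queue registers before
	 * marking the queue NTHW_VIRTQ_UNUSED.
	 */
	if (ops->nthw_release_mngd_rx_virt_queue(rxvq) != 0)
		return -1;

	if (ops->nthw_release_mngd_tx_virt_queue(txvq) != 0)
		return -1;

	return 0;
}

The same pattern applies on the data path: once received buffers have been
consumed, ops->nthw_release_rx_packets(rxvq, n) hands them back, advancing
the avail ring for split rings or re-publishing packed-ring descriptors
behind the deferred write of the first descriptor's flags.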