From: Serhii Iliushyk <sil-plv@napatech.com>
To: dev@dpdk.org
Cc: mko-plv@napatech.com, sil-plv@napatech.com, ckm@napatech.com,
 andrew.rybchenko@oktetlabs.ru, ferruh.yigit@amd.com,
 Danylo Vodopianov
Subject: [PATCH v2 41/50] net/ntnic: add packet handler for virtio queues
Date: Mon, 7 Oct 2024 21:34:17 +0200
Message-ID: <20241007193436.675785-42-sil-plv@napatech.com>
X-Mailer: git-send-email 2.45.0
In-Reply-To: <20241007193436.675785-1-sil-plv@napatech.com>
References: <20241006203728.330792-2-sil-plv@napatech.com>
 <20241007193436.675785-1-sil-plv@napatech.com>
MIME-Version: 1.0
Content-Type: text/plain
Content-Transfer-Encoding: 8bit

From: Danylo Vodopianov

Added functionality to handle the copying of segmented virtqueue data
into an rte_mbuf and vice versa.

Added functionality to manage packet reception and transmission for the
specified RX and TX queues.

Signed-off-by: Danylo Vodopianov
---
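Note for reviewers: the TX path below sizes its descriptor usage with
NUM_VQ_SEGS() and advances through the ring with VIRTQ_DESCR_IDX_NEXT().
A minimal standalone sketch of that arithmetic follows. The 4096-byte
buffer size and the 8-entry ring are assumed example values only; the
driver takes the real values from SG_HW_TX_PKT_BUFFER_SIZE and
SG_NB_HW_TX_DESCRIPTORS.

/* Standalone sketch only -- mirrors NUM_VQ_SEGS() and
 * VIRTQ_DESCR_IDX_NEXT() from this patch. SG_HDR_SIZE is 12 per
 * ntos_drv.h; the buffer size and ring depth are assumptions.
 */
#include <stdio.h>
#include <stddef.h>

#define SG_HDR_SIZE 12			/* as defined in ntos_drv.h */
#define SG_HW_TX_PKT_BUFFER_SIZE 4096	/* assumed example value */
#define SG_NB_HW_TX_DESCRIPTORS 8	/* assumed example value */

static size_t num_vq_segs(size_t data_size)
{
	/* a packet plus its SG header spans ceil(total / buffer size)
	 * vq buffers, or a single buffer when it fits
	 */
	size_t total = data_size + SG_HDR_SIZE;

	return (total > SG_HW_TX_PKT_BUFFER_SIZE)
		? (total + SG_HW_TX_PKT_BUFFER_SIZE - 1) / SG_HW_TX_PKT_BUFFER_SIZE
		: 1;
}

int main(void)
{
	/* 64B packet -> 1 segment; 9000B jumbo -> 3 segments */
	printf("64B: %zu segs, 9000B: %zu segs\n",
		num_vq_segs(64), num_vq_segs(9000));

	/* descriptor indices wrap around the ring modulo its depth */
	unsigned int idx = 6;
	for (int i = 0; i < 4; i++) {
		unsigned int next = (idx + 1) % SG_NB_HW_TX_DESCRIPTORS;
		printf("descr %u -> %u\n", idx, next);
		idx = next;
	}
	return 0;
}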
 drivers/net/ntnic/include/ntnic_virt_queue.h |  87 +++-
 drivers/net/ntnic/include/ntos_drv.h         |  14 +
 drivers/net/ntnic/ntnic_ethdev.c             | 441 +++++++++++++++++++
 3 files changed, 540 insertions(+), 2 deletions(-)

diff --git a/drivers/net/ntnic/include/ntnic_virt_queue.h b/drivers/net/ntnic/include/ntnic_virt_queue.h
index 821b23af6c..f8842819e4 100644
--- a/drivers/net/ntnic/include/ntnic_virt_queue.h
+++ b/drivers/net/ntnic/include/ntnic_virt_queue.h
@@ -14,10 +14,93 @@ struct nthw_virt_queue;
 
 #define SPLIT_RING 0
+#define PACKED_RING 1
 #define IN_ORDER 1
 
-struct nthw_cvirtq_desc;
+/*
+ * SPLIT : This marks a buffer as continuing via the next field.
+ * PACKED: This marks a buffer as continuing (packed has no next field, so
+ *         buffers must be contiguous). In used descriptors it must be ignored.
+ */
+#define VIRTQ_DESC_F_NEXT 1
+
+/*
+ * Split Ring virtq Descriptor
+ */
+struct __rte_aligned(8) virtq_desc {
+	/* Address (guest-physical). */
+	uint64_t addr;
+	/* Length. */
+	uint32_t len;
+	/* The flags as indicated above. */
+	uint16_t flags;
+	/* Next field if flags & NEXT */
+	uint16_t next;
+};
+
+/* descr phys address must be 16 byte aligned */
+struct __rte_aligned(16) pvirtq_desc {
+	/* Buffer Address. */
+	uint64_t addr;
+	/* Buffer Length. */
+	uint32_t len;
+	/* Buffer ID. */
+	uint16_t id;
+	/* The flags depending on descriptor type. */
+	uint16_t flags;
+};
+
+/*
+ * Common virtq descr
+ */
+#define vq_set_next(vq, index, nxt) \
+do { \
+	struct nthw_cvirtq_desc *temp_vq = (vq); \
+	if (temp_vq->vq_type == SPLIT_RING) \
+		temp_vq->s[index].next = nxt; \
+} while (0)
+
+#define vq_add_flags(vq, index, flgs) \
+do { \
+	struct nthw_cvirtq_desc *temp_vq = (vq); \
+	uint16_t tmp_index = (index); \
+	typeof(flgs) tmp_flgs = (flgs); \
+	if (temp_vq->vq_type == SPLIT_RING) \
+		temp_vq->s[tmp_index].flags |= tmp_flgs; \
+	else if (temp_vq->vq_type == PACKED_RING) \
+		temp_vq->p[tmp_index].flags |= tmp_flgs; \
+} while (0)
+
+#define vq_set_flags(vq, index, flgs) \
+do { \
+	struct nthw_cvirtq_desc *temp_vq = (vq); \
+	uint32_t temp_flags = (flgs); \
+	uint32_t temp_index = (index); \
+	if ((temp_vq)->vq_type == SPLIT_RING) \
+		(temp_vq)->s[temp_index].flags = temp_flags; \
+	else if ((temp_vq)->vq_type == PACKED_RING) \
+		(temp_vq)->p[temp_index].flags = temp_flags; \
+} while (0)
+
+struct nthw_virtq_desc_buf {
+	/* Address (guest-physical). */
+	alignas(16) uint64_t addr;
+	/* Length. */
+	uint32_t len;
+};
+
+struct nthw_cvirtq_desc {
+	union {
+		struct nthw_virtq_desc_buf *b; /* buffer part as is common */
+		struct virtq_desc *s; /* SPLIT */
+		struct pvirtq_desc *p; /* PACKED */
+	};
+	uint16_t vq_type;
+};
 
-struct nthw_received_packets;
+struct nthw_received_packets {
+	void *addr;
+	uint32_t len;
+};
 
 #endif	/* __NTOSS_VIRT_QUEUE_H__ */
diff --git a/drivers/net/ntnic/include/ntos_drv.h b/drivers/net/ntnic/include/ntos_drv.h
index 933b012e07..d51d1e3677 100644
--- a/drivers/net/ntnic/include/ntos_drv.h
+++ b/drivers/net/ntnic/include/ntos_drv.h
@@ -27,6 +27,20 @@
 #define MAX_QUEUES 125
 
 /* Structs: */
+#define SG_HDR_SIZE 12
+
+struct _pkt_hdr_rx {
+	uint32_t cap_len:14;
+	uint32_t fid:10;
+	uint32_t ofs1:8;
+	uint32_t ip_prot:8;
+	uint32_t port:13;
+	uint32_t descr:8;
+	uint32_t descr_12b:1;
+	uint32_t color_type:2;
+	uint32_t color:32;
+};
+
 struct nthw_memory_descriptor {
 	void *phys_addr;
 	void *virt_addr;
diff --git a/drivers/net/ntnic/ntnic_ethdev.c b/drivers/net/ntnic/ntnic_ethdev.c
index 57827d73d5..b0ec41e2bb 100644
--- a/drivers/net/ntnic/ntnic_ethdev.c
+++ b/drivers/net/ntnic/ntnic_ethdev.c
@@ -39,9 +39,29 @@
 /* Max RSS queues */
 #define MAX_QUEUES 125
 
+#define NUM_VQ_SEGS(_data_size_) \
+	({ \
+		size_t _size = (_data_size_); \
+		size_t _segment_count = ((_size + SG_HDR_SIZE) > SG_HW_TX_PKT_BUFFER_SIZE) \
+			? (((_size + SG_HDR_SIZE) + SG_HW_TX_PKT_BUFFER_SIZE - 1) / \
+				SG_HW_TX_PKT_BUFFER_SIZE) \
+			: 1; \
+		_segment_count; \
+	})
+
+#define VIRTQ_DESCR_IDX(_tx_pkt_idx_) \
+	(((_tx_pkt_idx_) + first_vq_descr_idx) % SG_NB_HW_TX_DESCRIPTORS)
+
+#define VIRTQ_DESCR_IDX_NEXT(_vq_descr_idx_) (((_vq_descr_idx_) + 1) % SG_NB_HW_TX_DESCRIPTORS)
+
 #define ONE_G_SIZE 0x40000000
 #define ONE_G_MASK (ONE_G_SIZE - 1)
 
+#define MAX_RX_PACKETS 128
+#define MAX_TX_PACKETS 128
+
+int kill_pmd;
+
 #define ETH_DEV_NTNIC_HELP_ARG "help"
 #define ETH_DEV_NTHW_RXQUEUES_ARG "rxqs"
 #define ETH_DEV_NTHW_TXQUEUES_ARG "txqs"
@@ -193,6 +213,423 @@ eth_dev_infos_get(struct rte_eth_dev *eth_dev, struct rte_eth_dev_info *dev_info
 	return 0;
 }
 
+static __rte_always_inline int copy_virtqueue_to_mbuf(struct rte_mbuf *mbuf,
+	struct rte_mempool *mb_pool,
+	struct nthw_received_packets *hw_recv,
+	int max_segs,
+	uint16_t data_len)
+{
+	int src_pkt = 0;
+	/*
+	 * 1. virtqueue packets may be segmented
+	 * 2. the mbuf size may be too small and may need to be segmented
+	 */
+	char *data = (char *)hw_recv->addr + SG_HDR_SIZE;
+	char *dst = (char *)mbuf->buf_addr + RTE_PKTMBUF_HEADROOM;
+
+	/* set packet length */
+	mbuf->pkt_len = data_len - SG_HDR_SIZE;
+
+	int remain = mbuf->pkt_len;
+	/* First cpy_size is without header */
+	int cpy_size = (data_len > SG_HW_RX_PKT_BUFFER_SIZE)
+		? SG_HW_RX_PKT_BUFFER_SIZE - SG_HDR_SIZE
+		: remain;
+
+	struct rte_mbuf *m = mbuf;	/* if mbuf segmentation is needed */
+
+	while (++src_pkt <= max_segs) {
+		/* keep track of space in dst */
+		int cpto_size = rte_pktmbuf_tailroom(m);
+
+		if (cpy_size > cpto_size) {
+			int new_cpy_size = cpto_size;
+
+			rte_memcpy((void *)dst, (void *)data, new_cpy_size);
+			m->data_len += new_cpy_size;
+			remain -= new_cpy_size;
+			cpy_size -= new_cpy_size;
+
+			data += new_cpy_size;
+
+			/*
+			 * loop if remaining data from this virtqueue seg
+			 * cannot fit in one extra mbuf
+			 */
+			do {
+				m->next = rte_pktmbuf_alloc(mb_pool);
+
+				if (unlikely(!m->next))
+					return -1;
+
+				m = m->next;
+
+				/* Headroom is not needed in chained mbufs */
+				rte_pktmbuf_prepend(m, rte_pktmbuf_headroom(m));
+				dst = (char *)m->buf_addr;
+				m->data_len = 0;
+				m->pkt_len = 0;
+
+				cpto_size = rte_pktmbuf_tailroom(m);
+
+				int actual_cpy_size =
+					(cpy_size > cpto_size) ? cpto_size : cpy_size;
+
+				rte_memcpy((void *)dst, (void *)data, actual_cpy_size);
+				m->pkt_len += actual_cpy_size;
+				m->data_len += actual_cpy_size;
+
+				remain -= actual_cpy_size;
+				cpy_size -= actual_cpy_size;
+
+				data += actual_cpy_size;
+
+				mbuf->nb_segs++;
+
+			} while (cpy_size && remain);
+
+		} else {
+			/* all data from this virtqueue segment can fit in current mbuf */
+			rte_memcpy((void *)dst, (void *)data, cpy_size);
+			m->data_len += cpy_size;
+
+			if (mbuf->nb_segs > 1)
+				m->pkt_len += cpy_size;
+
+			remain -= cpy_size;
+		}
+
+		/* packet complete - all data from current virtqueue packet has been copied */
+		if (remain == 0)
+			break;
+
+		/* increment dst to data end */
+		dst = rte_pktmbuf_mtod_offset(m, char *, m->data_len);
+		/* prepare for next virtqueue segment */
+		data = (char *)hw_recv[src_pkt].addr;	/* following packets are full data */
+
+		cpy_size = (remain > SG_HW_RX_PKT_BUFFER_SIZE) ? SG_HW_RX_PKT_BUFFER_SIZE : remain;
+	}
+
+	if (src_pkt > max_segs) {
+		NT_LOG(ERR, NTNIC,
+			"Did not receive correct number of segments for a whole packet");
+		return -1;
+	}
+
+	return src_pkt;
+}
+
+static uint16_t eth_dev_rx_scg(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
+{
+	unsigned int i;
+	struct rte_mbuf *mbuf;
+	struct ntnic_rx_queue *rx_q = queue;
+	uint16_t num_rx = 0;
+
+	struct nthw_received_packets hw_recv[MAX_RX_PACKETS];
+
+	if (kill_pmd)
+		return 0;
+
+	if (unlikely(nb_pkts == 0))
+		return 0;
+
+	if (nb_pkts > MAX_RX_PACKETS)
+		nb_pkts = MAX_RX_PACKETS;
+
+	uint16_t whole_pkts = 0;
+	uint16_t hw_recv_pkt_segs = 0;
+
+	if (sg_ops != NULL) {
+		hw_recv_pkt_segs =
+			sg_ops->nthw_get_rx_packets(rx_q->vq, nb_pkts, hw_recv, &whole_pkts);
+
+		if (!hw_recv_pkt_segs)
+			return 0;
+	}
+
+	nb_pkts = whole_pkts;
+
+	int src_pkt = 0;	/* from 0 to hw_recv_pkt_segs */
+
+	for (i = 0; i < nb_pkts; i++) {
+		bufs[i] = rte_pktmbuf_alloc(rx_q->mb_pool);
+
+		if (!bufs[i]) {
+			NT_LOG(ERR, NTNIC, "ERROR - no more mbufs in mempool\n");
+			goto err_exit;
+		}
+
+		mbuf = bufs[i];
+
+		struct _pkt_hdr_rx *phdr = (struct _pkt_hdr_rx *)hw_recv[src_pkt].addr;
+
+		if (phdr->cap_len < SG_HDR_SIZE) {
+			NT_LOG(ERR, NTNIC,
+				"Packet shorter than the SG header received - dropping packets\n");
+			rte_pktmbuf_free(mbuf);
+			goto err_exit;
+		}
+
+		{
+			if (phdr->cap_len <= SG_HW_RX_PKT_BUFFER_SIZE &&
+					(phdr->cap_len - SG_HDR_SIZE) <= rte_pktmbuf_tailroom(mbuf)) {
+				mbuf->data_len = phdr->cap_len - SG_HDR_SIZE;
+				rte_memcpy(rte_pktmbuf_mtod(mbuf, char *),
+					(char *)hw_recv[src_pkt].addr + SG_HDR_SIZE,
+					mbuf->data_len);
+
+				mbuf->pkt_len = mbuf->data_len;
+				src_pkt++;
+
+			} else {
+				int cpy_segs = copy_virtqueue_to_mbuf(mbuf, rx_q->mb_pool,
+						&hw_recv[src_pkt],
+						hw_recv_pkt_segs - src_pkt,
+						phdr->cap_len);
+
+				if (cpy_segs < 0) {
+					/* Error */
+					rte_pktmbuf_free(mbuf);
+					goto err_exit;
+				}
+
+				src_pkt += cpy_segs;
+			}
+
+			num_rx++;
+
+			mbuf->ol_flags &= ~(RTE_MBUF_F_RX_FDIR_ID | RTE_MBUF_F_RX_FDIR);
+			mbuf->port = (uint16_t)-1;
+		}
+	}
+
+err_exit:
+
+	if (sg_ops != NULL)
+		sg_ops->nthw_release_rx_packets(rx_q->vq, hw_recv_pkt_segs);
+
+	return num_rx;
+}
+
+static int copy_mbuf_to_virtqueue(struct nthw_cvirtq_desc *cvq_desc,
+	uint16_t vq_descr_idx,
+	struct nthw_memory_descriptor *vq_bufs,
+	int max_segs,
+	struct rte_mbuf *mbuf)
+{
+	/*
+	 * 1. mbuf packet may be segmented
+	 * 2. the virtqueue buffer size may be too small and may need to be segmented
+	 */
+
+	char *data = rte_pktmbuf_mtod(mbuf, char *);
+	char *dst = (char *)vq_bufs[vq_descr_idx].virt_addr + SG_HDR_SIZE;
+
+	int remain = mbuf->pkt_len;
+	int cpy_size = mbuf->data_len;
+
+	struct rte_mbuf *m = mbuf;
+	int cpto_size = SG_HW_TX_PKT_BUFFER_SIZE - SG_HDR_SIZE;
+
+	cvq_desc->b[vq_descr_idx].len = SG_HDR_SIZE;
+
+	int cur_seg_num = 0;	/* start from 0 */
+
+	while (m) {
+		/* Can all data in current src segment fit in current dest segment? */
+		if (cpy_size > cpto_size) {
+			int new_cpy_size = cpto_size;
+
+			rte_memcpy((void *)dst, (void *)data, new_cpy_size);
+
+			cvq_desc->b[vq_descr_idx].len += new_cpy_size;
+
+			remain -= new_cpy_size;
+			cpy_size -= new_cpy_size;
+
+			data += new_cpy_size;
+
+			/*
+			 * Loop while remaining data from this mbuf segment
+			 * cannot fit in one vq buffer
+			 */
+			do {
+				vq_add_flags(cvq_desc, vq_descr_idx, VIRTQ_DESC_F_NEXT);
+
+				int next_vq_descr_idx = VIRTQ_DESCR_IDX_NEXT(vq_descr_idx);
+
+				vq_set_next(cvq_desc, vq_descr_idx, next_vq_descr_idx);
+
+				vq_descr_idx = next_vq_descr_idx;
+
+				vq_set_flags(cvq_desc, vq_descr_idx, 0);
+				vq_set_next(cvq_desc, vq_descr_idx, 0);
+
+				if (++cur_seg_num > max_segs)
+					break;
+
+				dst = (char *)vq_bufs[vq_descr_idx].virt_addr;
+				cpto_size = SG_HW_TX_PKT_BUFFER_SIZE;
+
+				int actual_cpy_size =
+					(cpy_size > cpto_size) ? cpto_size : cpy_size;
+				rte_memcpy((void *)dst, (void *)data, actual_cpy_size);
+
+				cvq_desc->b[vq_descr_idx].len = actual_cpy_size;
+
+				remain -= actual_cpy_size;
+				cpy_size -= actual_cpy_size;
+				cpto_size -= actual_cpy_size;
+
+				data += actual_cpy_size;
+
+			} while (cpy_size && remain);
+
+		} else {
+			/* All data from this segment can fit in current virtqueue buffer */
+			rte_memcpy((void *)dst, (void *)data, cpy_size);
+
+			cvq_desc->b[vq_descr_idx].len += cpy_size;
+
+			remain -= cpy_size;
+			cpto_size -= cpy_size;
+		}
+
+		/* Packet complete - all segments from current mbuf have been copied */
+		if (remain == 0)
+			break;
+
+		/* increment dst to data end */
+		dst = (char *)vq_bufs[vq_descr_idx].virt_addr + cvq_desc->b[vq_descr_idx].len;
+
+		m = m->next;
+
+		if (!m) {
+			NT_LOG(ERR, NTNIC, "ERROR: invalid packet size\n");
+			break;
+		}
+
+		/* Prepare for next mbuf segment */
+		data = rte_pktmbuf_mtod(m, char *);
+		cpy_size = m->data_len;
+	}
+
+	cur_seg_num++;
+
+	if (cur_seg_num > max_segs) {
+		NT_LOG(ERR, NTNIC,
+			"Did not use correct number of segments for a whole packet");
+		return -1;
+	}
+
+	return cur_seg_num;
+}
+
+static uint16_t eth_dev_tx_scg(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
+{
+	uint16_t pkt;
+	uint16_t first_vq_descr_idx = 0;
+
+	struct nthw_cvirtq_desc cvq_desc;
+
+	struct nthw_memory_descriptor *vq_bufs;
+
+	struct ntnic_tx_queue *tx_q = queue;
+
+	int nb_segs = 0, i;
+	int pkts_sent = 0;
+	uint16_t nb_segs_arr[MAX_TX_PACKETS];
+
+	if (kill_pmd)
+		return 0;
+
+	if (nb_pkts > MAX_TX_PACKETS)
+		nb_pkts = MAX_TX_PACKETS;
+
+	/*
+	 * count all segments needed to contain all packets in vq buffers
+	 */
+	for (i = 0; i < nb_pkts; i++) {
+		/* build the num segments array for segmentation control and release function */
+		int vq_segs = NUM_VQ_SEGS(bufs[i]->pkt_len);
+		nb_segs_arr[i] = vq_segs;
+		nb_segs += vq_segs;
+	}
+
+	if (!nb_segs)
+		goto exit_out;
+
+	if (sg_ops == NULL)
+		goto exit_out;
+
+	int got_nb_segs = sg_ops->nthw_get_tx_packets(tx_q->vq, nb_segs, &first_vq_descr_idx,
+			&cvq_desc /*&vq_descr,*/, &vq_bufs);
+
+	if (!got_nb_segs)
+		goto exit_out;
+
+	/*
+	 * We may get fewer vq buffers than we asked for; calculate the last
+	 * whole packet that fits into what we got.
+	 */
+	while (got_nb_segs < nb_segs) {
+		if (!--nb_pkts)
+			goto exit_out;
+
+		nb_segs -= NUM_VQ_SEGS(bufs[nb_pkts]->pkt_len);
+
+		if (nb_segs <= 0)
+			goto exit_out;
+	}
+
+	/*
+	 * nb_pkts & nb_segs, got it all, ready to copy
+	 */
+	int seg_idx = 0;
+	int last_seg_idx = seg_idx;
+
+	for (pkt = 0; pkt < nb_pkts; ++pkt) {
+		uint16_t vq_descr_idx = VIRTQ_DESCR_IDX(seg_idx);
+
+		vq_set_flags(&cvq_desc, vq_descr_idx, 0);
+		vq_set_next(&cvq_desc, vq_descr_idx, 0);
+
+		if (bufs[pkt]->nb_segs == 1 && nb_segs_arr[pkt] == 1) {
+			rte_memcpy((void *)((char *)vq_bufs[vq_descr_idx].virt_addr + SG_HDR_SIZE),
+				rte_pktmbuf_mtod(bufs[pkt], void *), bufs[pkt]->pkt_len);
+
+			cvq_desc.b[vq_descr_idx].len = bufs[pkt]->pkt_len + SG_HDR_SIZE;
+
+			seg_idx++;
+
+		} else {
+			int cpy_segs = copy_mbuf_to_virtqueue(&cvq_desc, vq_descr_idx, vq_bufs,
+					nb_segs - last_seg_idx, bufs[pkt]);
+
+			if (cpy_segs < 0)
+				break;
+
+			seg_idx += cpy_segs;
+		}
+
+		last_seg_idx = seg_idx;
+		rte_pktmbuf_free(bufs[pkt]);
+		pkts_sent++;
+	}
+
+exit_out:
+
+	if (sg_ops != NULL) {
+		if (pkts_sent)
+			sg_ops->nthw_release_tx_packets(tx_q->vq, pkts_sent, nb_segs_arr);
+	}
+
+	return pkts_sent;
+}
+
 static int allocate_hw_virtio_queues(struct rte_eth_dev *eth_dev, int vf_num, struct hwq_s *hwq,
 	int num_descr, int buf_size)
 {
@@ -1252,6 +1689,10 @@ nthw_pci_dev_init(struct rte_pci_device *pci_dev)
 	rte_memcpy(&eth_dev->data->mac_addrs[0], &internals->eth_addrs[0], RTE_ETHER_ADDR_LEN);
 
+	NT_LOG_DBGX(DBG, NTNIC, "Setting up RX and TX functions for SCG\n");
+	eth_dev->rx_pkt_burst = eth_dev_rx_scg;
+	eth_dev->tx_pkt_burst = eth_dev_tx_scg;
+	eth_dev->tx_pkt_prepare = NULL;
 
 	struct rte_eth_link pmd_link;
 	pmd_link.link_speed = RTE_ETH_SPEED_NUM_NONE;
-- 
2.45.0