From mboxrd@z Thu Jan  1 00:00:00 1970
Return-Path: <dev-bounces@dpdk.org>
Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124])
	by inbox.dpdk.org (Postfix) with ESMTP id 55E3445AAF;
	Fri,  4 Oct 2024 17:15:12 +0200 (CEST)
Received: from mails.dpdk.org (localhost [127.0.0.1])
	by mails.dpdk.org (Postfix) with ESMTP id 5951342EAF;
	Fri,  4 Oct 2024 17:09:13 +0200 (CEST)
Received: from egress-ip42a.ess.de.barracuda.com
 (egress-ip42a.ess.de.barracuda.com [18.185.115.201])
 by mails.dpdk.org (Postfix) with ESMTP id 7F9F042DD3
 for <dev@dpdk.org>; Fri,  4 Oct 2024 17:08:41 +0200 (CEST)
Received: from EUR05-AM6-obe.outbound.protection.outlook.com
 (mail-am6eur05lp2111.outbound.protection.outlook.com [104.47.18.111]) by
 mx-outbound8-201.eu-central-1a.ess.aws.cudaops.com (version=TLSv1.2
 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NO);
 Fri, 04 Oct 2024 15:08:39 +0000
Received: from AM0PR02CA0020.eurprd02.prod.outlook.com (2603:10a6:208:3e::33)
 by AM9P190MB1107.EURP190.PROD.OUTLOOK.COM (2603:10a6:20b:271::6) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.8005.22; Fri, 4 Oct
 2024 15:08:35 +0000
Received: from AMS0EPF000001AC.eurprd05.prod.outlook.com
 (2603:10a6:208:3e:cafe::ad) by AM0PR02CA0020.outlook.office365.com
 (2603:10a6:208:3e::33) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.8026.17 via Frontend
 Transport; Fri, 4 Oct 2024 15:08:35 +0000
X-MS-Exchange-Authentication-Results: spf=fail (sender IP is 178.72.21.4)
 smtp.mailfrom=napatech.com; dkim=none (message not signed)
 header.d=none;dmarc=fail action=oreject header.from=napatech.com;
Received-SPF: Fail (protection.outlook.com: domain of napatech.com does not
 designate 178.72.21.4 as permitted sender) receiver=protection.outlook.com;
 client-ip=178.72.21.4; helo=localhost.localdomain;
Received: from localhost.localdomain (178.72.21.4) by
 AMS0EPF000001AC.mail.protection.outlook.com (10.167.16.152) with Microsoft
 SMTP Server id 15.20.7918.13 via Frontend Transport; Fri, 4 Oct 2024 15:08:35
 +0000
From: Serhii Iliushyk <sil-plv@napatech.com>
To: dev@dpdk.org
Cc: mko-plv@napatech.com, sil-plv@napatech.com, ckm@napatech.com,
 andrew.rybchenko@oktetlabs.ru, ferruh.yigit@amd.com,
 Danylo Vodopianov <dvo-plv@napatech.com>
Subject: [PATCH v1 05/14] net/ntnic: add packet handler for virtio queues
Date: Fri,  4 Oct 2024 17:07:30 +0200
Message-ID: <20241004150749.261020-44-sil-plv@napatech.com>
X-Mailer: git-send-email 2.45.0
In-Reply-To: <20241004150749.261020-1-sil-plv@napatech.com>
References: <20241004150749.261020-1-sil-plv@napatech.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain
X-OriginatorOrg: napatech.com
X-BeenThere: dev@dpdk.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: DPDK patches and discussions <dev.dpdk.org>
List-Unsubscribe: <https://mails.dpdk.org/options/dev>,
 <mailto:dev-request@dpdk.org?subject=unsubscribe>
List-Archive: <http://mails.dpdk.org/archives/dev/>
List-Post: <mailto:dev@dpdk.org>
List-Help: <mailto:dev-request@dpdk.org?subject=help>
List-Subscribe: <https://mails.dpdk.org/listinfo/dev>,
 <mailto:dev-request@dpdk.org?subject=subscribe>
Errors-To: dev-bounces@dpdk.org

From: Danylo Vodopianov <dvo-plv@napatech.com>

Added functionality to handle the copying of segmented virtqueue data
into an rte_mbuf and vice versa.

Added functionality to manage packet reception and transmission for the
specified RX and TX queues.

Signed-off-by: Danylo Vodopianov <dvo-plv@napatech.com>
---
 drivers/net/ntnic/include/ntnic_virt_queue.h |  87 +++-
 drivers/net/ntnic/include/ntos_drv.h         |  14 +
 drivers/net/ntnic/ntnic_ethdev.c             | 441 +++++++++++++++++++
 3 files changed, 540 insertions(+), 2 deletions(-)

diff --git a/drivers/net/ntnic/include/ntnic_virt_queue.h b/drivers/net/ntnic/include/ntnic_virt_queue.h
index 821b23af6c..f8842819e4 100644
--- a/drivers/net/ntnic/include/ntnic_virt_queue.h
+++ b/drivers/net/ntnic/include/ntnic_virt_queue.h
@@ -14,10 +14,93 @@
 struct nthw_virt_queue;
 
 #define SPLIT_RING        0
+#define PACKED_RING       1
 #define IN_ORDER          1
 
-struct nthw_cvirtq_desc;
+/*
+ * SPLIT : This marks a buffer as continuing via the next field.
+ * PACKED: This marks a buffer as continuing (packed rings have no next
+ * field, so the buffers must be contiguous). In used descriptors it must be ignored.
+ */
+#define VIRTQ_DESC_F_NEXT 1
+
+/*
+ * Split Ring virtq Descriptor
+ */
+struct __rte_aligned(8) virtq_desc {
+	/* Address (guest-physical). */
+	uint64_t addr;
+	/* Length. */
+	uint32_t len;
+	/* The flags as indicated above. */
+	uint16_t flags;
+	/* Next field if flags & NEXT */
+	uint16_t next;
+};
+
+/* descr phys address must be 16 byte aligned */
+struct __rte_aligned(16) pvirtq_desc {
+	/* Buffer Address. */
+	uint64_t addr;
+	/* Buffer Length. */
+	uint32_t len;
+	/* Buffer ID. */
+	uint16_t id;
+	/* The flags depending on descriptor type. */
+	uint16_t flags;
+};
+
+/*
+ * Common virtq descr
+ */
+#define vq_set_next(vq, index, nxt) \
+do { \
+	struct nthw_cvirtq_desc *temp_vq = (vq); \
+	if (temp_vq->vq_type == SPLIT_RING) \
+		temp_vq->s[index].next = nxt; \
+} while (0)
+
+#define vq_add_flags(vq, index, flgs) \
+do { \
+	struct nthw_cvirtq_desc *temp_vq = (vq); \
+	uint16_t tmp_index = (index); \
+	typeof(flgs) tmp_flgs = (flgs); \
+	if (temp_vq->vq_type == SPLIT_RING) \
+		temp_vq->s[tmp_index].flags |= tmp_flgs; \
+	else if (temp_vq->vq_type == PACKED_RING) \
+		temp_vq->p[tmp_index].flags |= tmp_flgs; \
+} while (0)
+
+#define vq_set_flags(vq, index, flgs) \
+do { \
+	struct nthw_cvirtq_desc *temp_vq = (vq); \
+	uint32_t temp_flags = (flgs); \
+	uint32_t temp_index = (index); \
+	if ((temp_vq)->vq_type == SPLIT_RING) \
+		(temp_vq)->s[temp_index].flags = temp_flags; \
+	else if ((temp_vq)->vq_type == PACKED_RING) \
+		(temp_vq)->p[temp_index].flags = temp_flags; \
+} while (0)
+
+struct nthw_virtq_desc_buf {
+	/* Address (guest-physical). */
+	alignas(16) uint64_t addr;
+	/* Length. */
+	uint32_t len;
+};
+
+struct nthw_cvirtq_desc {
+	union {
+		struct nthw_virtq_desc_buf *b;  /* buffer part as is common */
+		struct virtq_desc     *s;  /* SPLIT */
+		struct pvirtq_desc    *p;  /* PACKED */
+	};
+	uint16_t vq_type;
+};
 
-struct nthw_received_packets;
+struct nthw_received_packets {
+	void *addr;
+	uint32_t len;
+};
 
 #endif /* __NTOSS_VIRT_QUEUE_H__ */
diff --git a/drivers/net/ntnic/include/ntos_drv.h b/drivers/net/ntnic/include/ntos_drv.h
index 933b012e07..d51d1e3677 100644
--- a/drivers/net/ntnic/include/ntos_drv.h
+++ b/drivers/net/ntnic/include/ntos_drv.h
@@ -27,6 +27,20 @@
 #define MAX_QUEUES       125
 
 /* Structs: */
+#define SG_HDR_SIZE    12
+
+struct _pkt_hdr_rx {
+	uint32_t cap_len:14;
+	uint32_t fid:10;
+	uint32_t ofs1:8;
+	uint32_t ip_prot:8;
+	uint32_t port:13;
+	uint32_t descr:8;
+	uint32_t descr_12b:1;
+	uint32_t color_type:2;
+	uint32_t color:32;
+};
+
 struct nthw_memory_descriptor {
 	void *phys_addr;
 	void *virt_addr;
diff --git a/drivers/net/ntnic/ntnic_ethdev.c b/drivers/net/ntnic/ntnic_ethdev.c
index 57827d73d5..b0ec41e2bb 100644
--- a/drivers/net/ntnic/ntnic_ethdev.c
+++ b/drivers/net/ntnic/ntnic_ethdev.c
@@ -39,9 +39,29 @@
 /* Max RSS queues */
 #define MAX_QUEUES 125
 
+#define NUM_VQ_SEGS(_data_size_)                                                                  \
+	({                                                                                        \
+		size_t _size = (_data_size_);                                                     \
+		size_t _segment_count = ((_size + SG_HDR_SIZE) > SG_HW_TX_PKT_BUFFER_SIZE)        \
+			? (((_size + SG_HDR_SIZE) + SG_HW_TX_PKT_BUFFER_SIZE - 1) /               \
+			   SG_HW_TX_PKT_BUFFER_SIZE)                                              \
+			: 1;                                                                      \
+		_segment_count;                                                                   \
+	})
+
+#define VIRTQ_DESCR_IDX(_tx_pkt_idx_)                                                             \
+	(((_tx_pkt_idx_) + first_vq_descr_idx) % SG_NB_HW_TX_DESCRIPTORS)
+
+#define VIRTQ_DESCR_IDX_NEXT(_vq_descr_idx_) (((_vq_descr_idx_) + 1) % SG_NB_HW_TX_DESCRIPTORS)
+
 #define ONE_G_SIZE  0x40000000
 #define ONE_G_MASK  (ONE_G_SIZE - 1)
 
+#define MAX_RX_PACKETS   128
+#define MAX_TX_PACKETS   128
+
+int kill_pmd;
+
 #define ETH_DEV_NTNIC_HELP_ARG "help"
 #define ETH_DEV_NTHW_RXQUEUES_ARG "rxqs"
 #define ETH_DEV_NTHW_TXQUEUES_ARG "txqs"
@@ -193,6 +213,423 @@ eth_dev_infos_get(struct rte_eth_dev *eth_dev, struct rte_eth_dev_info *dev_info
 	return 0;
 }
 
+static __rte_always_inline int copy_virtqueue_to_mbuf(struct rte_mbuf *mbuf,
+	struct rte_mempool *mb_pool,
+	struct nthw_received_packets *hw_recv,
+	int max_segs,
+	uint16_t data_len)
+{
+	int src_pkt = 0;
+	/*
+	 * 1. virtqueue packets may be segmented
+	 * 2. the mbuf size may be too small and may need to be segmented
+	 */
+	char *data = (char *)hw_recv->addr + SG_HDR_SIZE;
+	char *dst = (char *)mbuf->buf_addr + RTE_PKTMBUF_HEADROOM;
+
+	/* set packet length */
+	mbuf->pkt_len = data_len - SG_HDR_SIZE;
+
+	int remain = mbuf->pkt_len;
+	/* First cpy_size is without header */
+	int cpy_size = (data_len > SG_HW_RX_PKT_BUFFER_SIZE)
+		? SG_HW_RX_PKT_BUFFER_SIZE - SG_HDR_SIZE
+		: remain;
+
+	struct rte_mbuf *m = mbuf;	/* if mbuf segmentation is needed */
+
+	while (++src_pkt <= max_segs) {
+		/* keep track of space in dst */
+		int cpto_size = rte_pktmbuf_tailroom(m);
+
+		if (cpy_size > cpto_size) {
+			int new_cpy_size = cpto_size;
+
+			rte_memcpy((void *)dst, (void *)data, new_cpy_size);
+			m->data_len += new_cpy_size;
+			remain -= new_cpy_size;
+			cpy_size -= new_cpy_size;
+
+			data += new_cpy_size;
+
+			/*
+			 * loop if remaining data from this virtqueue seg
+			 * cannot fit in one extra mbuf
+			 */
+			do {
+				m->next = rte_pktmbuf_alloc(mb_pool);
+
+				if (unlikely(!m->next))
+					return -1;
+
+				m = m->next;
+
+				/* Headroom is not needed in chained mbufs */
+				rte_pktmbuf_prepend(m, rte_pktmbuf_headroom(m));
+				dst = (char *)m->buf_addr;
+				m->data_len = 0;
+				m->pkt_len = 0;
+
+				cpto_size = rte_pktmbuf_tailroom(m);
+
+				int actual_cpy_size =
+					(cpy_size > cpto_size) ? cpto_size : cpy_size;
+
+				rte_memcpy((void *)dst, (void *)data, actual_cpy_size);
+				m->pkt_len += actual_cpy_size;
+				m->data_len += actual_cpy_size;
+
+				remain -= actual_cpy_size;
+				cpy_size -= actual_cpy_size;
+
+				data += actual_cpy_size;
+
+				mbuf->nb_segs++;
+
+			} while (cpy_size && remain);
+
+		} else {
+			/* all data from this virtqueue segment can fit in current mbuf */
+			rte_memcpy((void *)dst, (void *)data, cpy_size);
+			m->data_len += cpy_size;
+
+			if (mbuf->nb_segs > 1)
+				m->pkt_len += cpy_size;
+
+			remain -= cpy_size;
+		}
+
+		/* packet complete - all data from current virtqueue packet has been copied */
+		if (remain == 0)
+			break;
+
+		/* increment dst to data end */
+		dst = rte_pktmbuf_mtod_offset(m, char *, m->data_len);
+		/* prepare for next virtqueue segment */
+		data = (char *)hw_recv[src_pkt].addr;	/* following packets are full data */
+
+		cpy_size = (remain > SG_HW_RX_PKT_BUFFER_SIZE) ? SG_HW_RX_PKT_BUFFER_SIZE : remain;
+	}
+
+	if (src_pkt > max_segs) {
+		NT_LOG(ERR, NTNIC,
+			"Did not receive correct number of segments for a whole packet");
+		return -1;
+	}
+
+	return src_pkt;
+}
+
+static uint16_t eth_dev_rx_scg(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
+{
+	unsigned int i;
+	struct rte_mbuf *mbuf;
+	struct ntnic_rx_queue *rx_q = queue;
+	uint16_t num_rx = 0;
+
+	struct nthw_received_packets hw_recv[MAX_RX_PACKETS];
+
+	if (kill_pmd)
+		return 0;
+
+	if (unlikely(nb_pkts == 0))
+		return 0;
+
+	if (nb_pkts > MAX_RX_PACKETS)
+		nb_pkts = MAX_RX_PACKETS;
+
+	uint16_t whole_pkts = 0;
+	uint16_t hw_recv_pkt_segs = 0;
+
+	if (sg_ops != NULL) {
+		hw_recv_pkt_segs =
+			sg_ops->nthw_get_rx_packets(rx_q->vq, nb_pkts, hw_recv, &whole_pkts);
+
+		if (!hw_recv_pkt_segs)
+			return 0;
+	}
+
+	nb_pkts = whole_pkts;
+
+	int src_pkt = 0;/* from 0 to hw_recv_pkt_segs */
+
+	for (i = 0; i < nb_pkts; i++) {
+		bufs[i] = rte_pktmbuf_alloc(rx_q->mb_pool);
+
+		if (!bufs[i]) {
+			NT_LOG(ERR, NTNIC, "ERROR - no more mbufs in mempool\n");
+			goto err_exit;
+		}
+
+		mbuf = bufs[i];
+
+		struct _pkt_hdr_rx *phdr = (struct _pkt_hdr_rx *)hw_recv[src_pkt].addr;
+
+		if (phdr->cap_len < SG_HDR_SIZE) {
+			NT_LOG(ERR, NTNIC,
+				"Packet smaller than the SG header received - dropping packets\n");
+			rte_pktmbuf_free(mbuf);
+			goto err_exit;
+		}
+
+		{
+			if (phdr->cap_len <= SG_HW_RX_PKT_BUFFER_SIZE &&
+				(phdr->cap_len - SG_HDR_SIZE) <= rte_pktmbuf_tailroom(mbuf)) {
+				mbuf->data_len = phdr->cap_len - SG_HDR_SIZE;
+				rte_memcpy(rte_pktmbuf_mtod(mbuf, char *),
+					(char *)hw_recv[src_pkt].addr + SG_HDR_SIZE,
+					mbuf->data_len);
+
+				mbuf->pkt_len = mbuf->data_len;
+				src_pkt++;
+
+			} else {
+				int cpy_segs = copy_virtqueue_to_mbuf(mbuf, rx_q->mb_pool,
+						&hw_recv[src_pkt],
+						hw_recv_pkt_segs - src_pkt,
+						phdr->cap_len);
+
+				if (cpy_segs < 0) {
+					/* Error */
+					rte_pktmbuf_free(mbuf);
+					goto err_exit;
+				}
+
+				src_pkt += cpy_segs;
+			}
+
+			num_rx++;
+
+			mbuf->ol_flags &= ~(RTE_MBUF_F_RX_FDIR_ID | RTE_MBUF_F_RX_FDIR);
+			mbuf->port = (uint16_t)-1;
+		}
+	}
+
+err_exit:
+
+	if (sg_ops != NULL)
+		sg_ops->nthw_release_rx_packets(rx_q->vq, hw_recv_pkt_segs);
+
+	return num_rx;
+}
+
+static int copy_mbuf_to_virtqueue(struct nthw_cvirtq_desc *cvq_desc,
+	uint16_t vq_descr_idx,
+	struct nthw_memory_descriptor *vq_bufs,
+	int max_segs,
+	struct rte_mbuf *mbuf)
+{
+	/*
+	 * 1. mbuf packet may be segmented
+	 * 2. the virtqueue buffer size may be too small and may need to be segmented
+	 */
+
+	char *data = rte_pktmbuf_mtod(mbuf, char *);
+	char *dst = (char *)vq_bufs[vq_descr_idx].virt_addr + SG_HDR_SIZE;
+
+	int remain = mbuf->pkt_len;
+	int cpy_size = mbuf->data_len;
+
+	struct rte_mbuf *m = mbuf;
+	int cpto_size = SG_HW_TX_PKT_BUFFER_SIZE - SG_HDR_SIZE;
+
+	cvq_desc->b[vq_descr_idx].len = SG_HDR_SIZE;
+
+	int cur_seg_num = 0;	/* start from 0 */
+
+	while (m) {
+		/* Can all data in current src segment be in current dest segment */
+		if (cpy_size > cpto_size) {
+			int new_cpy_size = cpto_size;
+
+			rte_memcpy((void *)dst, (void *)data, new_cpy_size);
+
+			cvq_desc->b[vq_descr_idx].len += new_cpy_size;
+
+			remain -= new_cpy_size;
+			cpy_size -= new_cpy_size;
+
+			data += new_cpy_size;
+
+			/*
+			 * Loop if remaining data from this virtqueue seg cannot fit in one extra
+			 * mbuf
+			 */
+			do {
+				vq_add_flags(cvq_desc, vq_descr_idx, VIRTQ_DESC_F_NEXT);
+
+				int next_vq_descr_idx = VIRTQ_DESCR_IDX_NEXT(vq_descr_idx);
+
+				vq_set_next(cvq_desc, vq_descr_idx, next_vq_descr_idx);
+
+				vq_descr_idx = next_vq_descr_idx;
+
+				vq_set_flags(cvq_desc, vq_descr_idx, 0);
+				vq_set_next(cvq_desc, vq_descr_idx, 0);
+
+				if (++cur_seg_num > max_segs)
+					break;
+
+				dst = (char *)vq_bufs[vq_descr_idx].virt_addr;
+				cpto_size = SG_HW_TX_PKT_BUFFER_SIZE;
+
+				int actual_cpy_size =
+					(cpy_size > cpto_size) ? cpto_size : cpy_size;
+				rte_memcpy((void *)dst, (void *)data, actual_cpy_size);
+
+				cvq_desc->b[vq_descr_idx].len = actual_cpy_size;
+
+				remain -= actual_cpy_size;
+				cpy_size -= actual_cpy_size;
+				cpto_size -= actual_cpy_size;
+
+				data += actual_cpy_size;
+
+			} while (cpy_size && remain);
+
+		} else {
+			/* All data from this segment can fit in current virtqueue buffer */
+			rte_memcpy((void *)dst, (void *)data, cpy_size);
+
+			cvq_desc->b[vq_descr_idx].len += cpy_size;
+
+			remain -= cpy_size;
+			cpto_size -= cpy_size;
+		}
+
+		/* Packet complete - all segments from current mbuf has been copied */
+		if (remain == 0)
+			break;
+
+		/* increment dst to data end */
+		dst = (char *)vq_bufs[vq_descr_idx].virt_addr + cvq_desc->b[vq_descr_idx].len;
+
+		m = m->next;
+
+		if (!m) {
+			NT_LOG(ERR, NTNIC, "ERROR: invalid packet size\n");
+			break;
+		}
+
+		/* Prepare for next mbuf segment */
+		data = rte_pktmbuf_mtod(m, char *);
+		cpy_size = m->data_len;
+	}
+
+	cur_seg_num++;
+
+	if (cur_seg_num > max_segs) {
+		NT_LOG(ERR, NTNIC,
+			"Did not receive correct number of segments for a whole packet");
+		return -1;
+	}
+
+	return cur_seg_num;
+}
+
+static uint16_t eth_dev_tx_scg(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
+{
+	uint16_t pkt;
+	uint16_t first_vq_descr_idx = 0;
+
+	struct nthw_cvirtq_desc cvq_desc;
+
+	struct nthw_memory_descriptor *vq_bufs;
+
+	struct ntnic_tx_queue *tx_q = queue;
+
+	int nb_segs = 0, i;
+	int pkts_sent = 0;
+	uint16_t nb_segs_arr[MAX_TX_PACKETS];
+
+	if (kill_pmd)
+		return 0;
+
+	if (nb_pkts > MAX_TX_PACKETS)
+		nb_pkts = MAX_TX_PACKETS;
+
+	/*
+	 * count all segments needed to contain all packets in vq buffers
+	 */
+	for (i = 0; i < nb_pkts; i++) {
+		/* build the num segments array for segmentation control and release function */
+		int vq_segs = NUM_VQ_SEGS(bufs[i]->pkt_len);
+		nb_segs_arr[i] = vq_segs;
+		nb_segs += vq_segs;
+	}
+
+	if (!nb_segs)
+		goto exit_out;
+
+	if (sg_ops == NULL)
+		goto exit_out;
+
+	int got_nb_segs = sg_ops->nthw_get_tx_packets(tx_q->vq, nb_segs, &first_vq_descr_idx,
+			&cvq_desc /*&vq_descr,*/, &vq_bufs);
+
+	if (!got_nb_segs)
+		goto exit_out;
+
+	/*
+	 * we may get less vq buffers than we have asked for
+	 * calculate last whole packet that can fit into what
+	 * we have got
+	 */
+	while (got_nb_segs < nb_segs) {
+		if (!--nb_pkts)
+			goto exit_out;
+
+		nb_segs -= NUM_VQ_SEGS(bufs[nb_pkts]->pkt_len);
+
+		if (nb_segs <= 0)
+			goto exit_out;
+	}
+
+	/*
+	 * nb_pkts & nb_segs, got it all, ready to copy
+	 */
+	int seg_idx = 0;
+	int last_seg_idx = seg_idx;
+
+	for (pkt = 0; pkt < nb_pkts; ++pkt) {
+		uint16_t vq_descr_idx = VIRTQ_DESCR_IDX(seg_idx);
+
+		vq_set_flags(&cvq_desc, vq_descr_idx, 0);
+		vq_set_next(&cvq_desc, vq_descr_idx, 0);
+
+		if (bufs[pkt]->nb_segs == 1 && nb_segs_arr[pkt] == 1) {
+			rte_memcpy((void *)((char *)vq_bufs[vq_descr_idx].virt_addr + SG_HDR_SIZE),
+				rte_pktmbuf_mtod(bufs[pkt], void *), bufs[pkt]->pkt_len);
+
+			cvq_desc.b[vq_descr_idx].len = bufs[pkt]->pkt_len + SG_HDR_SIZE;
+
+			seg_idx++;
+
+		} else {
+			int cpy_segs = copy_mbuf_to_virtqueue(&cvq_desc, vq_descr_idx, vq_bufs,
+					nb_segs - last_seg_idx, bufs[pkt]);
+
+			if (cpy_segs < 0)
+				break;
+
+			seg_idx += cpy_segs;
+		}
+
+		last_seg_idx = seg_idx;
+		rte_pktmbuf_free(bufs[pkt]);
+		pkts_sent++;
+	}
+
+exit_out:
+
+	if (sg_ops != NULL) {
+		if (pkts_sent)
+			sg_ops->nthw_release_tx_packets(tx_q->vq, pkts_sent, nb_segs_arr);
+	}
+
+	return pkts_sent;
+}
+
 static int allocate_hw_virtio_queues(struct rte_eth_dev *eth_dev, int vf_num, struct hwq_s *hwq,
 	int num_descr, int buf_size)
 {
@@ -1252,6 +1689,10 @@ nthw_pci_dev_init(struct rte_pci_device *pci_dev)
 		rte_memcpy(&eth_dev->data->mac_addrs[0],
 					&internals->eth_addrs[0], RTE_ETHER_ADDR_LEN);
 
+		NT_LOG_DBGX(DBG, NTNIC, "Setting up RX and TX functions for SCG\n");
+		eth_dev->rx_pkt_burst = eth_dev_rx_scg;
+		eth_dev->tx_pkt_burst = eth_dev_tx_scg;
+		eth_dev->tx_pkt_prepare = NULL;
 
 		struct rte_eth_link pmd_link;
 		pmd_link.link_speed = RTE_ETH_SPEED_NUM_NONE;
-- 
2.45.0