From mboxrd@z Thu Jan 1 00:00:00 1970
From: Serhii Iliushyk
To: dev@dpdk.org
Cc: mko-plv@napatech.com, sil-plv@napatech.com, ckm@napatech.com, andrew.rybchenko@oktetlabs.ru, ferruh.yigit@amd.com, Danylo Vodopianov
Subject: [PATCH v3 41/50] net/ntnic: add packet handler for virtio queues
Date: Thu, 10 Oct 2024 16:13:56 +0200
Message-ID: <20241010141416.4063591-42-sil-plv@napatech.com>
X-Mailer: git-send-email 2.45.0
In-Reply-To: <20241010141416.4063591-1-sil-plv@napatech.com>
References: <20241006203728.330792-2-sil-plv@napatech.com> <20241010141416.4063591-1-sil-plv@napatech.com>
MIME-Version: 1.0
Content-Type: text/plain
Content-Transfer-Encoding: 8bit
X-BeenThere: dev@dpdk.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: DPDK patches and discussions
Errors-To: dev-bounces@dpdk.org

From: Danylo Vodopianov

Added functionality to handle the copying of segmented queue data into an
rte_mbuf and vice versa.

Added functionality to manage packet transmission for the specified TX and
RX queues.

Signed-off-by: Danylo Vodopianov
---
v3:
* Remove newline characters from logs.
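(Editorial note, not part of the patch: the core of copy_virtqueue_to_mbuf() in the diff below is a loop that spills one source buffer across consecutive fixed-size destination segments. A minimal plain-C sketch of that pattern, using a stand-in SEG_SIZE constant rather than the driver's SG_HW_RX_PKT_BUFFER_SIZE:)

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Stand-in segment size for illustration; the driver uses
 * SG_HW_RX_PKT_BUFFER_SIZE, whose value comes from its headers. */
#define SEG_SIZE 64

/* Copy len bytes from src into consecutive SEG_SIZE-byte segments of dst,
 * returning the number of segments written. The driver's loop does the
 * same, except each "segment" is a freshly allocated chained mbuf. */
static int copy_to_segments(char *dst, const char *src, size_t len)
{
	int segs = 0;

	while (len > 0) {
		size_t chunk = len > SEG_SIZE ? SEG_SIZE : len;

		memcpy(dst + (size_t)segs * SEG_SIZE, src, chunk);
		src += chunk;
		len -= chunk;
		segs++;
	}
	return segs;
}
```

(The real function additionally skips the SG_HDR_SIZE header in the first segment and tracks pkt_len/data_len per mbuf.)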
--- drivers/net/ntnic/include/ntnic_virt_queue.h | 87 +++- drivers/net/ntnic/include/ntos_drv.h | 14 + drivers/net/ntnic/ntnic_ethdev.c | 441 +++++++++++++++++++ 3 files changed, 540 insertions(+), 2 deletions(-) diff --git a/drivers/net/ntnic/include/ntnic_virt_queue.h b/drivers/net/ntnic/include/ntnic_virt_queue.h index 821b23af6c..f8842819e4 100644 --- a/drivers/net/ntnic/include/ntnic_virt_queue.h +++ b/drivers/net/ntnic/include/ntnic_virt_queue.h @@ -14,10 +14,93 @@ struct nthw_virt_queue; #define SPLIT_RING 0 +#define PACKED_RING 1 #define IN_ORDER 1 -struct nthw_cvirtq_desc; +/* + * SPLIT : This marks a buffer as continuing via the next field. + * PACKED: This marks a buffer as continuing. (packed does not have a next field, so must be + * contiguous) In Used descriptors it must be ignored + */ +#define VIRTQ_DESC_F_NEXT 1 + +/* + * Split Ring virtq Descriptor + */ +struct __rte_aligned(8) virtq_desc { + /* Address (guest-physical). */ + uint64_t addr; + /* Length. */ + uint32_t len; + /* The flags as indicated above. */ + uint16_t flags; + /* Next field if flags & NEXT */ + uint16_t next; +}; + +/* descr phys address must be 16 byte aligned */ +struct __rte_aligned(16) pvirtq_desc { + /* Buffer Address. */ + uint64_t addr; + /* Buffer Length. */ + uint32_t len; + /* Buffer ID. */ + uint16_t id; + /* The flags depending on descriptor type. 
*/ + uint16_t flags; +}; + +/* + * Common virtq descr + */ +#define vq_set_next(vq, index, nxt) \ +do { \ + struct nthw_cvirtq_desc *temp_vq = (vq); \ + if (temp_vq->vq_type == SPLIT_RING) \ + temp_vq->s[index].next = nxt; \ +} while (0) + +#define vq_add_flags(vq, index, flgs) \ +do { \ + struct nthw_cvirtq_desc *temp_vq = (vq); \ + uint16_t tmp_index = (index); \ + typeof(flgs) tmp_flgs = (flgs); \ + if (temp_vq->vq_type == SPLIT_RING) \ + temp_vq->s[tmp_index].flags |= tmp_flgs; \ + else if (temp_vq->vq_type == PACKED_RING) \ + temp_vq->p[tmp_index].flags |= tmp_flgs; \ +} while (0) + +#define vq_set_flags(vq, index, flgs) \ +do { \ + struct nthw_cvirtq_desc *temp_vq = (vq); \ + uint32_t temp_flags = (flgs); \ + uint32_t temp_index = (index); \ + if ((temp_vq)->vq_type == SPLIT_RING) \ + (temp_vq)->s[temp_index].flags = temp_flags; \ + else if ((temp_vq)->vq_type == PACKED_RING) \ + (temp_vq)->p[temp_index].flags = temp_flags; \ +} while (0) + +struct nthw_virtq_desc_buf { + /* Address (guest-physical). */ + alignas(16) uint64_t addr; + /* Length. 
*/ + uint32_t len; +}; + +struct nthw_cvirtq_desc { + union { + struct nthw_virtq_desc_buf *b; /* buffer part as is common */ + struct virtq_desc *s; /* SPLIT */ + struct pvirtq_desc *p; /* PACKED */ + }; + uint16_t vq_type; +}; -struct nthw_received_packets; +struct nthw_received_packets { + void *addr; + uint32_t len; +}; #endif /* __NTOSS_VIRT_QUEUE_H__ */ diff --git a/drivers/net/ntnic/include/ntos_drv.h b/drivers/net/ntnic/include/ntos_drv.h index 933b012e07..d51d1e3677 100644 --- a/drivers/net/ntnic/include/ntos_drv.h +++ b/drivers/net/ntnic/include/ntos_drv.h @@ -27,6 +27,20 @@ #define MAX_QUEUES 125 /* Structs: */ +#define SG_HDR_SIZE 12 + +struct _pkt_hdr_rx { + uint32_t cap_len:14; + uint32_t fid:10; + uint32_t ofs1:8; + uint32_t ip_prot:8; + uint32_t port:13; + uint32_t descr:8; + uint32_t descr_12b:1; + uint32_t color_type:2; + uint32_t color:32; +}; + struct nthw_memory_descriptor { void *phys_addr; void *virt_addr; diff --git a/drivers/net/ntnic/ntnic_ethdev.c b/drivers/net/ntnic/ntnic_ethdev.c index ffbd07f5d8..bff893ec7a 100644 --- a/drivers/net/ntnic/ntnic_ethdev.c +++ b/drivers/net/ntnic/ntnic_ethdev.c @@ -39,9 +39,29 @@ /* Max RSS queues */ #define MAX_QUEUES 125 +#define NUM_VQ_SEGS(_data_size_) \ + ({ \ + size_t _size = (_data_size_); \ + size_t _segment_count = ((_size + SG_HDR_SIZE) > SG_HW_TX_PKT_BUFFER_SIZE) \ + ? 
(((_size + SG_HDR_SIZE) + SG_HW_TX_PKT_BUFFER_SIZE - 1) / \ + SG_HW_TX_PKT_BUFFER_SIZE) \ + : 1; \ + _segment_count; \ + }) + +#define VIRTQ_DESCR_IDX(_tx_pkt_idx_) \ + (((_tx_pkt_idx_) + first_vq_descr_idx) % SG_NB_HW_TX_DESCRIPTORS) + +#define VIRTQ_DESCR_IDX_NEXT(_vq_descr_idx_) (((_vq_descr_idx_) + 1) % SG_NB_HW_TX_DESCRIPTORS) + #define ONE_G_SIZE 0x40000000 #define ONE_G_MASK (ONE_G_SIZE - 1) +#define MAX_RX_PACKETS 128 +#define MAX_TX_PACKETS 128 + +int kill_pmd; + #define ETH_DEV_NTNIC_HELP_ARG "help" #define ETH_DEV_NTHW_RXQUEUES_ARG "rxqs" #define ETH_DEV_NTHW_TXQUEUES_ARG "txqs" @@ -193,6 +213,423 @@ eth_dev_infos_get(struct rte_eth_dev *eth_dev, struct rte_eth_dev_info *dev_info return 0; } +static __rte_always_inline int copy_virtqueue_to_mbuf(struct rte_mbuf *mbuf, + struct rte_mempool *mb_pool, + struct nthw_received_packets *hw_recv, + int max_segs, + uint16_t data_len) +{ + int src_pkt = 0; + /* + * 1. virtqueue packets may be segmented + * 2. the mbuf size may be too small and may need to be segmented + */ + char *data = (char *)hw_recv->addr + SG_HDR_SIZE; + char *dst = (char *)mbuf->buf_addr + RTE_PKTMBUF_HEADROOM; + + /* set packet length */ + mbuf->pkt_len = data_len - SG_HDR_SIZE; + + int remain = mbuf->pkt_len; + /* First cpy_size is without header */ + int cpy_size = (data_len > SG_HW_RX_PKT_BUFFER_SIZE) + ? 
SG_HW_RX_PKT_BUFFER_SIZE - SG_HDR_SIZE + : remain; + + struct rte_mbuf *m = mbuf; /* if mbuf segmentation is needed */ + + while (++src_pkt <= max_segs) { + /* keep track of space in dst */ + int cpto_size = rte_pktmbuf_tailroom(m); + + if (cpy_size > cpto_size) { + int new_cpy_size = cpto_size; + + rte_memcpy((void *)dst, (void *)data, new_cpy_size); + m->data_len += new_cpy_size; + remain -= new_cpy_size; + cpy_size -= new_cpy_size; + + data += new_cpy_size; + + /* + * loop if remaining data from this virtqueue seg + * cannot fit in one extra mbuf + */ + do { + m->next = rte_pktmbuf_alloc(mb_pool); + + if (unlikely(!m->next)) + return -1; + + m = m->next; + + /* Headroom is not needed in chained mbufs */ + rte_pktmbuf_prepend(m, rte_pktmbuf_headroom(m)); + dst = (char *)m->buf_addr; + m->data_len = 0; + m->pkt_len = 0; + + cpto_size = rte_pktmbuf_tailroom(m); + + int actual_cpy_size = + (cpy_size > cpto_size) ? cpto_size : cpy_size; + + rte_memcpy((void *)dst, (void *)data, actual_cpy_size); + m->pkt_len += actual_cpy_size; + m->data_len += actual_cpy_size; + + remain -= actual_cpy_size; + cpy_size -= actual_cpy_size; + + data += actual_cpy_size; + + mbuf->nb_segs++; + + } while (cpy_size && remain); + + } else { + /* all data from this virtqueue segment can fit in current mbuf */ + rte_memcpy((void *)dst, (void *)data, cpy_size); + m->data_len += cpy_size; + + if (mbuf->nb_segs > 1) + m->pkt_len += cpy_size; + + remain -= cpy_size; + } + + /* packet complete - all data from current virtqueue packet has been copied */ + if (remain == 0) + break; + + /* increment dst to data end */ + dst = rte_pktmbuf_mtod_offset(m, char *, m->data_len); + /* prepare for next virtqueue segment */ + data = (char *)hw_recv[src_pkt].addr; /* following packets are full data */ + + cpy_size = (remain > SG_HW_RX_PKT_BUFFER_SIZE) ? 
SG_HW_RX_PKT_BUFFER_SIZE : remain; + }; + + if (src_pkt > max_segs) { + NT_LOG(ERR, NTNIC, + "Did not receive correct number of segment for a whole packet"); + return -1; + } + + return src_pkt; +} + +static uint16_t eth_dev_rx_scg(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts) +{ + unsigned int i; + struct rte_mbuf *mbuf; + struct ntnic_rx_queue *rx_q = queue; + uint16_t num_rx = 0; + + struct nthw_received_packets hw_recv[MAX_RX_PACKETS]; + + if (kill_pmd) + return 0; + + if (unlikely(nb_pkts == 0)) + return 0; + + if (nb_pkts > MAX_RX_PACKETS) + nb_pkts = MAX_RX_PACKETS; + + uint16_t whole_pkts = 0; + uint16_t hw_recv_pkt_segs = 0; + + if (sg_ops != NULL) { + hw_recv_pkt_segs = + sg_ops->nthw_get_rx_packets(rx_q->vq, nb_pkts, hw_recv, &whole_pkts); + + if (!hw_recv_pkt_segs) + return 0; + } + + nb_pkts = whole_pkts; + + int src_pkt = 0;/* from 0 to hw_recv_pkt_segs */ + + for (i = 0; i < nb_pkts; i++) { + bufs[i] = rte_pktmbuf_alloc(rx_q->mb_pool); + + if (!bufs[i]) { + NT_LOG(ERR, NTNIC, "ERROR - no more buffers mbuf in mempool"); + goto err_exit; + } + + mbuf = bufs[i]; + + struct _pkt_hdr_rx *phdr = (struct _pkt_hdr_rx *)hw_recv[src_pkt].addr; + + if (phdr->cap_len < SG_HDR_SIZE) { + NT_LOG(ERR, NTNIC, + "Pkt len of zero received. No header!! 
- dropping packets"); + rte_pktmbuf_free(mbuf); + goto err_exit; + } + + { + if (phdr->cap_len <= SG_HW_RX_PKT_BUFFER_SIZE && + (phdr->cap_len - SG_HDR_SIZE) <= rte_pktmbuf_tailroom(mbuf)) { + mbuf->data_len = phdr->cap_len - SG_HDR_SIZE; + rte_memcpy(rte_pktmbuf_mtod(mbuf, char *), + (char *)hw_recv[src_pkt].addr + SG_HDR_SIZE, + mbuf->data_len); + + mbuf->pkt_len = mbuf->data_len; + src_pkt++; + + } else { + int cpy_segs = copy_virtqueue_to_mbuf(mbuf, rx_q->mb_pool, + &hw_recv[src_pkt], + hw_recv_pkt_segs - src_pkt, + phdr->cap_len); + + if (cpy_segs < 0) { + /* Error */ + rte_pktmbuf_free(mbuf); + goto err_exit; + } + + src_pkt += cpy_segs; + } + + num_rx++; + + mbuf->ol_flags &= ~(RTE_MBUF_F_RX_FDIR_ID | RTE_MBUF_F_RX_FDIR); + mbuf->port = (uint16_t)-1; + } + } + +err_exit: + + if (sg_ops != NULL) + sg_ops->nthw_release_rx_packets(rx_q->vq, hw_recv_pkt_segs); + + return num_rx; +} + +static int copy_mbuf_to_virtqueue(struct nthw_cvirtq_desc *cvq_desc, + uint16_t vq_descr_idx, + struct nthw_memory_descriptor *vq_bufs, + int max_segs, + struct rte_mbuf *mbuf) +{ + /* + * 1. mbuf packet may be segmented + * 2. 
the virtqueue buffer size may be too small and may need to be segmented + */ + + char *data = rte_pktmbuf_mtod(mbuf, char *); + char *dst = (char *)vq_bufs[vq_descr_idx].virt_addr + SG_HDR_SIZE; + + int remain = mbuf->pkt_len; + int cpy_size = mbuf->data_len; + + struct rte_mbuf *m = mbuf; + int cpto_size = SG_HW_TX_PKT_BUFFER_SIZE - SG_HDR_SIZE; + + cvq_desc->b[vq_descr_idx].len = SG_HDR_SIZE; + + int cur_seg_num = 0; /* start from 0 */ + + while (m) { + /* Can all data in current src segment be in current dest segment */ + if (cpy_size > cpto_size) { + int new_cpy_size = cpto_size; + + rte_memcpy((void *)dst, (void *)data, new_cpy_size); + + cvq_desc->b[vq_descr_idx].len += new_cpy_size; + + remain -= new_cpy_size; + cpy_size -= new_cpy_size; + + data += new_cpy_size; + + /* + * Loop if remaining data from this virtqueue seg cannot fit in one extra + * mbuf + */ + do { + vq_add_flags(cvq_desc, vq_descr_idx, VIRTQ_DESC_F_NEXT); + + int next_vq_descr_idx = VIRTQ_DESCR_IDX_NEXT(vq_descr_idx); + + vq_set_next(cvq_desc, vq_descr_idx, next_vq_descr_idx); + + vq_descr_idx = next_vq_descr_idx; + + vq_set_flags(cvq_desc, vq_descr_idx, 0); + vq_set_next(cvq_desc, vq_descr_idx, 0); + + if (++cur_seg_num > max_segs) + break; + + dst = (char *)vq_bufs[vq_descr_idx].virt_addr; + cpto_size = SG_HW_TX_PKT_BUFFER_SIZE; + + int actual_cpy_size = + (cpy_size > cpto_size) ? 
cpto_size : cpy_size; + rte_memcpy((void *)dst, (void *)data, actual_cpy_size); + + cvq_desc->b[vq_descr_idx].len = actual_cpy_size; + + remain -= actual_cpy_size; + cpy_size -= actual_cpy_size; + cpto_size -= actual_cpy_size; + + data += actual_cpy_size; + + } while (cpy_size && remain); + + } else { + /* All data from this segment can fit in current virtqueue buffer */ + rte_memcpy((void *)dst, (void *)data, cpy_size); + + cvq_desc->b[vq_descr_idx].len += cpy_size; + + remain -= cpy_size; + cpto_size -= cpy_size; + } + + /* Packet complete - all segments from current mbuf has been copied */ + if (remain == 0) + break; + + /* increment dst to data end */ + dst = (char *)vq_bufs[vq_descr_idx].virt_addr + cvq_desc->b[vq_descr_idx].len; + + m = m->next; + + if (!m) { + NT_LOG(ERR, NTNIC, "ERROR: invalid packet size"); + break; + } + + /* Prepare for next mbuf segment */ + data = rte_pktmbuf_mtod(m, char *); + cpy_size = m->data_len; + }; + + cur_seg_num++; + + if (cur_seg_num > max_segs) { + NT_LOG(ERR, NTNIC, + "Did not receive correct number of segment for a whole packet"); + return -1; + } + + return cur_seg_num; +} + +static uint16_t eth_dev_tx_scg(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts) +{ + uint16_t pkt; + uint16_t first_vq_descr_idx = 0; + + struct nthw_cvirtq_desc cvq_desc; + + struct nthw_memory_descriptor *vq_bufs; + + struct ntnic_tx_queue *tx_q = queue; + + int nb_segs = 0, i; + int pkts_sent = 0; + uint16_t nb_segs_arr[MAX_TX_PACKETS]; + + if (kill_pmd) + return 0; + + if (nb_pkts > MAX_TX_PACKETS) + nb_pkts = MAX_TX_PACKETS; + + /* + * count all segments needed to contain all packets in vq buffers + */ + for (i = 0; i < nb_pkts; i++) { + /* build the num segments array for segmentation control and release function */ + int vq_segs = NUM_VQ_SEGS(bufs[i]->pkt_len); + nb_segs_arr[i] = vq_segs; + nb_segs += vq_segs; + } + + if (!nb_segs) + goto exit_out; + + if (sg_ops == NULL) + goto exit_out; + + int got_nb_segs = 
sg_ops->nthw_get_tx_packets(tx_q->vq, nb_segs, &first_vq_descr_idx, + &cvq_desc /*&vq_descr,*/, &vq_bufs); + + if (!got_nb_segs) + goto exit_out; + + /* + * we may get less vq buffers than we have asked for + * calculate last whole packet that can fit into what + * we have got + */ + while (got_nb_segs < nb_segs) { + if (!--nb_pkts) + goto exit_out; + + nb_segs -= NUM_VQ_SEGS(bufs[nb_pkts]->pkt_len); + + if (nb_segs <= 0) + goto exit_out; + } + + /* + * nb_pkts & nb_segs, got it all, ready to copy + */ + int seg_idx = 0; + int last_seg_idx = seg_idx; + + for (pkt = 0; pkt < nb_pkts; ++pkt) { + uint16_t vq_descr_idx = VIRTQ_DESCR_IDX(seg_idx); + + vq_set_flags(&cvq_desc, vq_descr_idx, 0); + vq_set_next(&cvq_desc, vq_descr_idx, 0); + + if (bufs[pkt]->nb_segs == 1 && nb_segs_arr[pkt] == 1) { + rte_memcpy((void *)((char *)vq_bufs[vq_descr_idx].virt_addr + SG_HDR_SIZE), + rte_pktmbuf_mtod(bufs[pkt], void *), bufs[pkt]->pkt_len); + + cvq_desc.b[vq_descr_idx].len = bufs[pkt]->pkt_len + SG_HDR_SIZE; + + seg_idx++; + + } else { + int cpy_segs = copy_mbuf_to_virtqueue(&cvq_desc, vq_descr_idx, vq_bufs, + nb_segs - last_seg_idx, bufs[pkt]); + + if (cpy_segs < 0) + break; + + seg_idx += cpy_segs; + } + + last_seg_idx = seg_idx; + rte_pktmbuf_free(bufs[pkt]); + pkts_sent++; + } + +exit_out: + + if (sg_ops != NULL) { + if (pkts_sent) + sg_ops->nthw_release_tx_packets(tx_q->vq, pkts_sent, nb_segs_arr); + } + + return pkts_sent; +} + static int allocate_hw_virtio_queues(struct rte_eth_dev *eth_dev, int vf_num, struct hwq_s *hwq, int num_descr, int buf_size) { @@ -1252,6 +1689,10 @@ nthw_pci_dev_init(struct rte_pci_device *pci_dev) rte_memcpy(ð_dev->data->mac_addrs[0], &internals->eth_addrs[0], RTE_ETHER_ADDR_LEN); + NT_LOG_DBGX(DBG, NTNIC, "Setting up RX functions for SCG"); + eth_dev->rx_pkt_burst = eth_dev_rx_scg; + eth_dev->tx_pkt_burst = eth_dev_tx_scg; + eth_dev->tx_pkt_prepare = NULL; struct rte_eth_link pmd_link; pmd_link.link_speed = RTE_ETH_SPEED_NUM_NONE; -- 2.45.0
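(Editorial note, not part of the patch: the NUM_VQ_SEGS macro added in ntnic_ethdev.c computes how many TX virtqueue segments a packet needs: one if the header plus data fit in a single hardware buffer, otherwise a ceiling division. A function-form sketch with stand-in constants for SG_HDR_SIZE and SG_HW_TX_PKT_BUFFER_SIZE:)

```c
#include <assert.h>
#include <stddef.h>

/* Stand-ins for the driver's SG_HDR_SIZE (12 in the patch) and
 * SG_HW_TX_PKT_BUFFER_SIZE (value assumed here, from driver headers). */
#define HDR_SIZE	12
#define TX_BUF_SIZE	2048

/* Number of virtqueue segments needed for data_size payload bytes,
 * mirroring the NUM_VQ_SEGS macro: 1 if header + data fit in one
 * buffer, else ceil((data_size + HDR_SIZE) / TX_BUF_SIZE). */
static size_t num_vq_segs(size_t data_size)
{
	size_t total = data_size + HDR_SIZE;

	return total > TX_BUF_SIZE ? (total + TX_BUF_SIZE - 1) / TX_BUF_SIZE : 1;
}
```

(eth_dev_tx_scg() sums this over the burst to know how many descriptors to request from nthw_get_tx_packets(), and trims trailing packets when fewer descriptors are returned.)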