From mboxrd@z Thu Jan  1 00:00:00 1970
From: Chaoyong He <chaoyong.he@corigine.com>
To: dev@dpdk.org
Cc: oss-drivers@corigine.com, niklas.soderlund@corigine.com, Chaoyong He <chaoyong.he@corigine.com>
Subject: [PATCH 08/13] net/nfp: move NFD3 logic to own source file
Date: Mon, 10 Apr 2023 19:00:10 +0800
Message-Id: <20230410110015.2973660-9-chaoyong.he@corigine.com>
X-Mailer: git-send-email 2.39.1
In-Reply-To: <20230410110015.2973660-1-chaoyong.he@corigine.com>
References: <20230410110015.2973660-1-chaoyong.he@corigine.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
Split out the data structs and logic of NFD3 into a new file. The code
is moved verbatim; there is no functional change.

Signed-off-by: Chaoyong He <chaoyong.he@corigine.com>
Reviewed-by: Niklas Söderlund <niklas.soderlund@corigine.com>
---
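Note below the cut line, not part of the commit: since the NFD3 Tx helpers
now live in their own header, the minimal sketch below shows how they are
meant to be combined. example_poll_and_xmit() is a hypothetical caller
added only for illustration; everything it calls is defined in
nfd3/nfp_nfd3.h or nfp_rxtx.h, and the call sequence mirrors the top of
nfp_net_nfd3_xmit_pkts() in this patch.

	#include "nfd3/nfp_nfd3.h"

	static uint16_t
	example_poll_and_xmit(struct nfp_net_txq *txq,
			struct rte_mbuf **pkts, uint16_t n)
	{
		/*
		 * nfp_net_nfd3_free_tx_desc() keeps 8 descriptors in
		 * reserve so the write pointer can never fully wrap onto
		 * the read pointer: with tx_count = 512, wr_p = 500 and
		 * rd_p = 10 it reports 512 - (500 - 10) - 8 = 14 free
		 * descriptors.
		 */
		if (nfp_net_nfd3_free_tx_desc(txq) <
				NFD3_TX_DESC_PER_SIMPLE_PKT * n ||
				nfp_net_nfd3_txq_full(txq))
			nfp_net_tx_free_bufs(txq); /* reclaim done descs */

		return nfp_net_nfd3_xmit_pkts(txq, pkts, n);
	}

Keeping these helpers static inline in the new header preserves the
zero-call-overhead fast path while letting nfp_rxtx.h shed all NFD3
knowledge.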
 drivers/net/nfp/flower/nfp_flower.c           |   1 +
 drivers/net/nfp/flower/nfp_flower_ctrl.c      |   1 +
 .../net/nfp/flower/nfp_flower_representor.c   |   1 +
 drivers/net/nfp/meson.build                   |   1 +
 drivers/net/nfp/nfd3/nfp_nfd3.h               | 166 +++++++++
 drivers/net/nfp/nfd3/nfp_nfd3_dp.c            | 346 ++++++++++++++++++
 drivers/net/nfp/nfp_common.c                  |   2 +
 drivers/net/nfp/nfp_ethdev.c                  |   1 +
 drivers/net/nfp/nfp_ethdev_vf.c               |   1 +
 drivers/net/nfp/nfp_rxtx.c                    | 336 +----------------
 drivers/net/nfp/nfp_rxtx.h                    | 153 +-------
 11 files changed, 526 insertions(+), 483 deletions(-)
 create mode 100644 drivers/net/nfp/nfd3/nfp_nfd3.h
 create mode 100644 drivers/net/nfp/nfd3/nfp_nfd3_dp.c

diff --git a/drivers/net/nfp/flower/nfp_flower.c b/drivers/net/nfp/flower/nfp_flower.c
index 4af1900bde..9212e6606b 100644
--- a/drivers/net/nfp/flower/nfp_flower.c
+++ b/drivers/net/nfp/flower/nfp_flower.c
@@ -15,6 +15,7 @@
 #include "../nfp_ctrl.h"
 #include "../nfp_cpp_bridge.h"
 #include "../nfp_rxtx.h"
+#include "../nfd3/nfp_nfd3.h"
 #include "../nfpcore/nfp_mip.h"
 #include "../nfpcore/nfp_rtsym.h"
 #include "../nfpcore/nfp_nsp.h"
diff --git a/drivers/net/nfp/flower/nfp_flower_ctrl.c b/drivers/net/nfp/flower/nfp_flower_ctrl.c
index 3e083d948e..7f9dc5683b 100644
--- a/drivers/net/nfp/flower/nfp_flower_ctrl.c
+++ b/drivers/net/nfp/flower/nfp_flower_ctrl.c
@@ -11,6 +11,7 @@
 #include "../nfp_logs.h"
 #include "../nfp_ctrl.h"
 #include "../nfp_rxtx.h"
+#include "../nfd3/nfp_nfd3.h"
 #include "nfp_flower.h"
 #include "nfp_flower_ctrl.h"
 #include "nfp_flower_cmsg.h"
diff --git a/drivers/net/nfp/flower/nfp_flower_representor.c b/drivers/net/nfp/flower/nfp_flower_representor.c
index 362c67f7b5..3eb76cb489 100644
--- a/drivers/net/nfp/flower/nfp_flower_representor.c
+++ b/drivers/net/nfp/flower/nfp_flower_representor.c
@@ -10,6 +10,7 @@
 #include "../nfp_logs.h"
 #include "../nfp_ctrl.h"
 #include "../nfp_rxtx.h"
+#include "../nfd3/nfp_nfd3.h"
 #include "../nfpcore/nfp_mip.h"
 #include "../nfpcore/nfp_rtsym.h"
 #include "../nfpcore/nfp_nsp.h"
diff --git a/drivers/net/nfp/meson.build b/drivers/net/nfp/meson.build
index 6d122f5ce9..697a1479c8 100644
--- a/drivers/net/nfp/meson.build
+++ b/drivers/net/nfp/meson.build
@@ -10,6 +10,7 @@ sources = files(
         'flower/nfp_flower_cmsg.c',
         'flower/nfp_flower_ctrl.c',
         'flower/nfp_flower_representor.c',
+        'nfd3/nfp_nfd3_dp.c',
         'nfpcore/nfp_cpp_pcie_ops.c',
         'nfpcore/nfp_nsp.c',
         'nfpcore/nfp_cppcore.c',
diff --git a/drivers/net/nfp/nfd3/nfp_nfd3.h b/drivers/net/nfp/nfd3/nfp_nfd3.h
new file mode 100644
index 0000000000..5c6162aada
--- /dev/null
+++ b/drivers/net/nfp/nfd3/nfp_nfd3.h
@@ -0,0 +1,166 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright (c) 2023 Corigine, Inc.
+ * All rights reserved.
+ */
+
+#ifndef _NFP_NFD3_H_
+#define _NFP_NFD3_H_
+
+/* TX descriptor format */
+#define PCIE_DESC_TX_EOP (1 << 7)
+#define PCIE_DESC_TX_OFFSET_MASK (0x7f)
+
+/* Flags in the host TX descriptor */
+#define PCIE_DESC_TX_CSUM (1 << 7)
+#define PCIE_DESC_TX_IP4_CSUM (1 << 6)
+#define PCIE_DESC_TX_TCP_CSUM (1 << 5)
+#define PCIE_DESC_TX_UDP_CSUM (1 << 4)
+#define PCIE_DESC_TX_VLAN (1 << 3)
+#define PCIE_DESC_TX_LSO (1 << 2)
+#define PCIE_DESC_TX_ENCAP_NONE (0)
+#define PCIE_DESC_TX_ENCAP (1 << 1)
+#define PCIE_DESC_TX_O_IP4_CSUM (1 << 0)
+
+#define NFD3_TX_DESC_PER_SIMPLE_PKT 1
+
+struct nfp_net_nfd3_tx_desc {
+	union {
+		struct {
+			uint8_t dma_addr_hi; /* High bits of host buf address */
+			__le16 dma_len;      /* Length to DMA for this desc */
+			uint8_t offset_eop;  /* Offset in buf where pkt starts +
+					      * highest bit is eop flag, low 7bit is meta_len.
+					      */
+			__le32 dma_addr_lo;  /* Low 32bit of host buf addr */
+
+			__le16 mss;          /* MSS to be used for LSO */
+			uint8_t lso_hdrlen;  /* LSO, where the data starts */
+			uint8_t flags;       /* TX Flags, see @PCIE_DESC_TX_* */
+
+			union {
+				struct {
+					/*
+					 * L3 and L4 header offsets required
+					 * for TSOv2
+					 */
+					uint8_t l3_offset;
+					uint8_t l4_offset;
+				};
+				__le16 vlan; /* VLAN tag to add if indicated */
+			};
+			__le16 data_len;     /* Length of frame + meta data */
+		} __rte_packed;
+		__le32 vals[4];
+	};
+};
+
+/* Leaving always free descriptors for avoiding wrapping confusion */
+static inline uint32_t
+nfp_net_nfd3_free_tx_desc(struct nfp_net_txq *txq)
+{
+	if (txq->wr_p >= txq->rd_p)
+		return txq->tx_count - (txq->wr_p - txq->rd_p) - 8;
+	else
+		return txq->rd_p - txq->wr_p - 8;
+}
+
+/*
+ * nfp_net_nfd3_txq_full() - Check if the TX queue free descriptors
+ * is below tx_free_threshold for firmware of nfd3
+ *
+ * @txq: TX queue to check
+ *
+ * This function uses the host copy* of read/write pointers.
+ */
+static inline uint32_t
+nfp_net_nfd3_txq_full(struct nfp_net_txq *txq)
+{
+	return (nfp_net_nfd3_free_tx_desc(txq) < txq->tx_free_thresh);
+}
+
+/* nfp_net_nfd3_tx_tso() - Set NFD3 TX descriptor for TSO */
+static inline void
+nfp_net_nfd3_tx_tso(struct nfp_net_txq *txq,
+		struct nfp_net_nfd3_tx_desc *txd,
+		struct rte_mbuf *mb)
+{
+	uint64_t ol_flags;
+	struct nfp_net_hw *hw = txq->hw;
+
+	if (!(hw->cap & NFP_NET_CFG_CTRL_LSO_ANY))
+		goto clean_txd;
+
+	ol_flags = mb->ol_flags;
+
+	if (!(ol_flags & RTE_MBUF_F_TX_TCP_SEG))
+		goto clean_txd;
+
+	txd->l3_offset = mb->l2_len;
+	txd->l4_offset = mb->l2_len + mb->l3_len;
+	txd->lso_hdrlen = mb->l2_len + mb->l3_len + mb->l4_len;
+
+	if (ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK) {
+		txd->l3_offset += mb->outer_l2_len + mb->outer_l3_len;
+		txd->l4_offset += mb->outer_l2_len + mb->outer_l3_len;
+		txd->lso_hdrlen += mb->outer_l2_len + mb->outer_l3_len;
+	}
+
+	txd->mss = rte_cpu_to_le_16(mb->tso_segsz);
+	txd->flags = PCIE_DESC_TX_LSO;
+	return;
+
+clean_txd:
+	txd->flags = 0;
+	txd->l3_offset = 0;
+	txd->l4_offset = 0;
+	txd->lso_hdrlen = 0;
+	txd->mss = 0;
+}
+
+/* nfp_net_nfd3_tx_cksum() - Set TX CSUM offload flags in NFD3 TX descriptor */
+static inline void
+nfp_net_nfd3_tx_cksum(struct nfp_net_txq *txq, struct nfp_net_nfd3_tx_desc *txd,
+		struct rte_mbuf *mb)
+{
+	uint64_t ol_flags;
+	struct nfp_net_hw *hw = txq->hw;
+
+	if (!(hw->cap & NFP_NET_CFG_CTRL_TXCSUM))
+		return;
+
+	ol_flags = mb->ol_flags;
+
+	/* Set TCP csum offload if TSO enabled. */
+	if (ol_flags & RTE_MBUF_F_TX_TCP_SEG)
+		txd->flags |= PCIE_DESC_TX_TCP_CSUM;
+
+	/* IPv6 does not need checksum */
+	if (ol_flags & RTE_MBUF_F_TX_IP_CKSUM)
+		txd->flags |= PCIE_DESC_TX_IP4_CSUM;
+
+	if (ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK)
+		txd->flags |= PCIE_DESC_TX_ENCAP;
+
+	switch (ol_flags & RTE_MBUF_F_TX_L4_MASK) {
+	case RTE_MBUF_F_TX_UDP_CKSUM:
+		txd->flags |= PCIE_DESC_TX_UDP_CSUM;
+		break;
+	case RTE_MBUF_F_TX_TCP_CKSUM:
+		txd->flags |= PCIE_DESC_TX_TCP_CSUM;
+		break;
+	}
+
+	if (ol_flags & (RTE_MBUF_F_TX_IP_CKSUM | RTE_MBUF_F_TX_L4_MASK))
+		txd->flags |= PCIE_DESC_TX_CSUM;
+}
+
+uint16_t nfp_net_nfd3_xmit_pkts(void *tx_queue,
+		struct rte_mbuf **tx_pkts,
+		uint16_t nb_pkts);
+int nfp_net_nfd3_tx_queue_setup(struct rte_eth_dev *dev,
+		uint16_t queue_idx,
+		uint16_t nb_desc,
+		unsigned int socket_id,
+		const struct rte_eth_txconf *tx_conf);
+
+#endif /* _NFP_NFD3_H_ */
diff --git a/drivers/net/nfp/nfd3/nfp_nfd3_dp.c b/drivers/net/nfp/nfd3/nfp_nfd3_dp.c
new file mode 100644
index 0000000000..88bcd26ad8
--- /dev/null
+++ b/drivers/net/nfp/nfd3/nfp_nfd3_dp.c
@@ -0,0 +1,346 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright (c) 2023 Corigine, Inc.
+ * All rights reserved.
+ */
+
+#include
+#include
+#include
+
+#include "../nfp_logs.h"
+#include "../nfp_common.h"
+#include "../nfp_rxtx.h"
+#include "nfp_nfd3.h"
+
+/*
+ * nfp_net_nfd3_tx_vlan() - Set vlan info in the nfd3 tx desc
+ *
+ * If enable NFP_NET_CFG_CTRL_TXVLAN_V2
+ *	Vlan_info is stored in the meta and
+ *	is handled in the nfp_net_nfd3_set_meta_vlan
+ * else if enable NFP_NET_CFG_CTRL_TXVLAN
+ *	Vlan_info is stored in the tx_desc and
+ *	is handled in the nfp_net_nfd3_tx_vlan
+ */
+static void
+nfp_net_nfd3_tx_vlan(struct nfp_net_txq *txq,
+		struct nfp_net_nfd3_tx_desc *txd,
+		struct rte_mbuf *mb)
+{
+	struct nfp_net_hw *hw = txq->hw;
+
+	if ((hw->cap & NFP_NET_CFG_CTRL_TXVLAN_V2) != 0 ||
+			(hw->cap & NFP_NET_CFG_CTRL_TXVLAN) == 0)
+		return;
+
+	if ((mb->ol_flags & RTE_MBUF_F_TX_VLAN) != 0) {
+		txd->flags |= PCIE_DESC_TX_VLAN;
+		txd->vlan = mb->vlan_tci;
+	}
+}
+
+static void
+nfp_net_nfd3_set_meta_data(struct nfp_net_meta_raw *meta_data,
+		struct nfp_net_txq *txq,
+		struct rte_mbuf *pkt)
+{
+	uint8_t vlan_layer = 0;
+	struct nfp_net_hw *hw;
+	uint32_t meta_info;
+	uint8_t layer = 0;
+	char *meta;
+
+	hw = txq->hw;
+
+	if ((pkt->ol_flags & RTE_MBUF_F_TX_VLAN) != 0 &&
+			(hw->ctrl & NFP_NET_CFG_CTRL_TXVLAN_V2) != 0) {
+		if (meta_data->length == 0)
+			meta_data->length = NFP_NET_META_HEADER_SIZE;
+		meta_data->length += NFP_NET_META_FIELD_SIZE;
+		meta_data->header |= NFP_NET_META_VLAN;
+	}
+
+	if (meta_data->length == 0)
+		return;
+
+	meta_info = meta_data->header;
+	meta_data->header = rte_cpu_to_be_32(meta_data->header);
+	meta = rte_pktmbuf_prepend(pkt, meta_data->length);
+	memcpy(meta, &meta_data->header, sizeof(meta_data->header));
+	meta += NFP_NET_META_HEADER_SIZE;
+
+	for (; meta_info != 0; meta_info >>= NFP_NET_META_FIELD_SIZE, layer++,
+			meta += NFP_NET_META_FIELD_SIZE) {
+		switch (meta_info & NFP_NET_META_FIELD_MASK) {
+		case NFP_NET_META_VLAN:
+			if (vlan_layer > 0) {
+				PMD_DRV_LOG(ERR, "At most 1 layers of vlan is supported");
+				return;
+			}
+			nfp_net_set_meta_vlan(meta_data, pkt, layer);
+			vlan_layer++;
+			break;
+		default:
+			PMD_DRV_LOG(ERR, "The metadata type not supported");
+			return;
+		}
+
+		memcpy(meta, &meta_data->data[layer], sizeof(meta_data->data[layer]));
+	}
+}
+
+uint16_t
+nfp_net_nfd3_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
+{
+	struct nfp_net_txq *txq;
+	struct nfp_net_hw *hw;
+	struct nfp_net_nfd3_tx_desc *txds, txd;
+	struct nfp_net_meta_raw meta_data;
+	struct rte_mbuf *pkt;
+	uint64_t dma_addr;
+	int pkt_size, dma_size;
+	uint16_t free_descs, issued_descs;
+	struct rte_mbuf **lmbuf;
+	int i;
+
+	txq = tx_queue;
+	hw = txq->hw;
+	txds = &txq->txds[txq->wr_p];
+
+	PMD_TX_LOG(DEBUG, "working for queue %u at pos %d and %u packets",
+			txq->qidx, txq->wr_p, nb_pkts);
+
+	if (nfp_net_nfd3_free_tx_desc(txq) < NFD3_TX_DESC_PER_SIMPLE_PKT * nb_pkts ||
+			nfp_net_nfd3_txq_full(txq))
+		nfp_net_tx_free_bufs(txq);
+
+	free_descs = (uint16_t)nfp_net_nfd3_free_tx_desc(txq);
+	if (unlikely(free_descs == 0))
+		return 0;
+
+	pkt = *tx_pkts;
+
+	issued_descs = 0;
+	PMD_TX_LOG(DEBUG, "queue: %u. Sending %u packets",
+			txq->qidx, nb_pkts);
+	/* Sending packets */
+	for (i = 0; i < nb_pkts && free_descs > 0; i++) {
+		memset(&meta_data, 0, sizeof(meta_data));
+		/* Grabbing the mbuf linked to the current descriptor */
+		lmbuf = &txq->txbufs[txq->wr_p].mbuf;
+		/* Warming the cache for releasing the mbuf later on */
+		RTE_MBUF_PREFETCH_TO_FREE(*lmbuf);
+
+		pkt = *(tx_pkts + i);
+
+		nfp_net_nfd3_set_meta_data(&meta_data, txq, pkt);
+
+		if (unlikely(pkt->nb_segs > 1 &&
+				!(hw->cap & NFP_NET_CFG_CTRL_GATHER))) {
+			PMD_INIT_LOG(ERR, "Multisegment packet not supported");
+			goto xmit_end;
+		}
+
+		/* Checking if we have enough descriptors */
+		if (unlikely(pkt->nb_segs > free_descs))
+			goto xmit_end;
+
+		/*
+		 * Checksum and VLAN flags just in the first descriptor for a
+		 * multisegment packet, but TSO info needs to be in all of them.
+		 */
+		txd.data_len = pkt->pkt_len;
+		nfp_net_nfd3_tx_tso(txq, &txd, pkt);
+		nfp_net_nfd3_tx_cksum(txq, &txd, pkt);
+		nfp_net_nfd3_tx_vlan(txq, &txd, pkt);
+
+		/*
+		 * mbuf data_len is the data in one segment and pkt_len data
+		 * in the whole packet. When the packet is just one segment,
+		 * then data_len = pkt_len
+		 */
+		pkt_size = pkt->pkt_len;
+
+		while (pkt != NULL && free_descs > 0) {
+			/* Copying TSO, VLAN and cksum info */
+			*txds = txd;
+
+			/* Releasing mbuf used by this descriptor previously*/
+			if (*lmbuf)
+				rte_pktmbuf_free_seg(*lmbuf);
+
+			/*
+			 * Linking mbuf with descriptor for being released
+			 * next time descriptor is used
+			 */
+			*lmbuf = pkt;
+
+			dma_size = pkt->data_len;
+			dma_addr = rte_mbuf_data_iova(pkt);
+			PMD_TX_LOG(DEBUG, "Working with mbuf at dma address:"
+					"%" PRIx64 "", dma_addr);
+
+			/* Filling descriptors fields */
+			txds->dma_len = dma_size;
+			txds->data_len = txd.data_len;
+			txds->dma_addr_hi = (dma_addr >> 32) & 0xff;
+			txds->dma_addr_lo = (dma_addr & 0xffffffff);
+			free_descs--;
+
+			txq->wr_p++;
+			if (unlikely(txq->wr_p == txq->tx_count)) /* wrapping?*/
+				txq->wr_p = 0;
+
+			pkt_size -= dma_size;
+
+			/*
+			 * Making the EOP, packets with just one segment
+			 * the priority
+			 */
+			if (likely(pkt_size == 0))
+				txds->offset_eop = PCIE_DESC_TX_EOP;
+			else
+				txds->offset_eop = 0;
+
+			/* Set the meta_len */
+			txds->offset_eop |= meta_data.length;
+
+			pkt = pkt->next;
+			/* Referencing next free TX descriptor */
+			txds = &txq->txds[txq->wr_p];
+			lmbuf = &txq->txbufs[txq->wr_p].mbuf;
+			issued_descs++;
+		}
+	}
+
+xmit_end:
+	/* Increment write pointers. Force memory write before we let HW know */
+	rte_wmb();
+	nfp_qcp_ptr_add(txq->qcp_q, NFP_QCP_WRITE_PTR, issued_descs);
+
+	return i;
+}
+
+int
+nfp_net_nfd3_tx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
+		uint16_t nb_desc, unsigned int socket_id,
+		const struct rte_eth_txconf *tx_conf)
+{
+	int ret;
+	uint16_t min_tx_desc;
+	uint16_t max_tx_desc;
+	const struct rte_memzone *tz;
+	struct nfp_net_txq *txq;
+	uint16_t tx_free_thresh;
+	struct nfp_net_hw *hw;
+	uint32_t tx_desc_sz;
+
+	hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+
+	PMD_INIT_FUNC_TRACE();
+
+	ret = nfp_net_tx_desc_limits(hw, &min_tx_desc, &max_tx_desc);
+	if (ret != 0)
+		return ret;
+
+	/* Validating number of descriptors */
+	tx_desc_sz = nb_desc * sizeof(struct nfp_net_nfd3_tx_desc);
+	if ((NFD3_TX_DESC_PER_SIMPLE_PKT * tx_desc_sz) % NFP_ALIGN_RING_DESC != 0 ||
+			nb_desc > max_tx_desc || nb_desc < min_tx_desc) {
+		PMD_DRV_LOG(ERR, "Wrong nb_desc value");
+		return -EINVAL;
+	}
+
+	tx_free_thresh = (uint16_t)((tx_conf->tx_free_thresh) ?
+			tx_conf->tx_free_thresh :
+			DEFAULT_TX_FREE_THRESH);
+
+	if (tx_free_thresh > (nb_desc)) {
+		PMD_DRV_LOG(ERR,
+				"tx_free_thresh must be less than the number of TX "
+				"descriptors. (tx_free_thresh=%u port=%d "
+				"queue=%d)", (unsigned int)tx_free_thresh,
+				dev->data->port_id, (int)queue_idx);
+		return -(EINVAL);
+	}
+
+	/*
+	 * Free memory prior to re-allocation if needed. This is the case after
+	 * calling nfp_net_stop
+	 */
+	if (dev->data->tx_queues[queue_idx]) {
+		PMD_TX_LOG(DEBUG, "Freeing memory prior to re-allocation %d",
+				queue_idx);
+		nfp_net_tx_queue_release(dev, queue_idx);
+		dev->data->tx_queues[queue_idx] = NULL;
+	}
+
+	/* Allocating tx queue data structure */
+	txq = rte_zmalloc_socket("ethdev TX queue", sizeof(struct nfp_net_txq),
+			RTE_CACHE_LINE_SIZE, socket_id);
+	if (txq == NULL) {
+		PMD_DRV_LOG(ERR, "Error allocating tx dma");
+		return -ENOMEM;
+	}
+
+	dev->data->tx_queues[queue_idx] = txq;
+
+	/*
+	 * Allocate TX ring hardware descriptors. A memzone large enough to
+	 * handle the maximum ring size is allocated in order to allow for
+	 * resizing in later calls to the queue setup function.
+	 */
+	tz = rte_eth_dma_zone_reserve(dev, "tx_ring", queue_idx,
+			sizeof(struct nfp_net_nfd3_tx_desc) *
+			NFD3_TX_DESC_PER_SIMPLE_PKT *
+			max_tx_desc, NFP_MEMZONE_ALIGN,
+			socket_id);
+	if (tz == NULL) {
+		PMD_DRV_LOG(ERR, "Error allocating tx dma");
+		nfp_net_tx_queue_release(dev, queue_idx);
+		dev->data->tx_queues[queue_idx] = NULL;
+		return -ENOMEM;
+	}
+
+	txq->tx_count = nb_desc * NFD3_TX_DESC_PER_SIMPLE_PKT;
+	txq->tx_free_thresh = tx_free_thresh;
+	txq->tx_pthresh = tx_conf->tx_thresh.pthresh;
+	txq->tx_hthresh = tx_conf->tx_thresh.hthresh;
+	txq->tx_wthresh = tx_conf->tx_thresh.wthresh;
+
+	/* queue mapping based on firmware configuration */
+	txq->qidx = queue_idx;
+	txq->tx_qcidx = queue_idx * hw->stride_tx;
+	txq->qcp_q = hw->tx_bar + NFP_QCP_QUEUE_OFF(txq->tx_qcidx);
+
+	txq->port_id = dev->data->port_id;
+
+	/* Saving physical and virtual addresses for the TX ring */
+	txq->dma = (uint64_t)tz->iova;
+	txq->txds = (struct nfp_net_nfd3_tx_desc *)tz->addr;
+
+	/* mbuf pointers array for referencing mbufs linked to TX descriptors */
+	txq->txbufs = rte_zmalloc_socket("txq->txbufs",
+			sizeof(*txq->txbufs) * txq->tx_count,
+			RTE_CACHE_LINE_SIZE, socket_id);
+	if (txq->txbufs == NULL) {
+		nfp_net_tx_queue_release(dev, queue_idx);
+		dev->data->tx_queues[queue_idx] = NULL;
+		return -ENOMEM;
+	}
+	PMD_TX_LOG(DEBUG, "txbufs=%p hw_ring=%p dma_addr=0x%" PRIx64,
+			txq->txbufs, txq->txds, (unsigned long)txq->dma);
+
+	nfp_net_reset_tx_queue(txq);
+
+	txq->hw = hw;
+
+	/*
+	 * Telling the HW about the physical address of the TX ring and number
+	 * of descriptors in log2 format
+	 */
+	nn_cfg_writeq(hw, NFP_NET_CFG_TXR_ADDR(queue_idx), txq->dma);
+	nn_cfg_writeb(hw, NFP_NET_CFG_TXR_SZ(queue_idx), rte_log2_u32(txq->tx_count));
+
+	return 0;
+}
diff --git a/drivers/net/nfp/nfp_common.c b/drivers/net/nfp/nfp_common.c
index f300d6d892..d1b6ef3bc9 100644
--- a/drivers/net/nfp/nfp_common.c
+++ b/drivers/net/nfp/nfp_common.c
@@ -44,6 +44,8 @@
 #include "nfp_logs.h"
 #include "nfp_cpp_bridge.h"
 
+#include "nfd3/nfp_nfd3.h"
+
 #include
 #include
 #include
diff --git a/drivers/net/nfp/nfp_ethdev.c b/drivers/net/nfp/nfp_ethdev.c
index 26cf9cd01c..f212a4a10e 100644
--- a/drivers/net/nfp/nfp_ethdev.c
+++ b/drivers/net/nfp/nfp_ethdev.c
@@ -38,6 +38,7 @@
 #include "nfp_logs.h"
 #include "nfp_cpp_bridge.h"
 
+#include "nfd3/nfp_nfd3.h"
 #include "flower/nfp_flower.h"
 
 static int
diff --git a/drivers/net/nfp/nfp_ethdev_vf.c b/drivers/net/nfp/nfp_ethdev_vf.c
index d69ac8cd37..80a8983deb 100644
--- a/drivers/net/nfp/nfp_ethdev_vf.c
+++ b/drivers/net/nfp/nfp_ethdev_vf.c
@@ -22,6 +22,7 @@
 #include "nfp_ctrl.h"
 #include "nfp_rxtx.h"
 #include "nfp_logs.h"
+#include "nfd3/nfp_nfd3.h"
 
 static void
 nfp_netvf_read_mac(struct nfp_net_hw *hw)
diff --git a/drivers/net/nfp/nfp_rxtx.c b/drivers/net/nfp/nfp_rxtx.c
index 16a124fd7d..76021b64ee 100644
--- a/drivers/net/nfp/nfp_rxtx.c
+++ b/drivers/net/nfp/nfp_rxtx.c
@@ -20,6 +20,7 @@
 #include "nfp_ctrl.h"
 #include "nfp_rxtx.h"
 #include "nfp_logs.h"
+#include "nfd3/nfp_nfd3.h"
 #include "nfpcore/nfp_mip.h"
 #include "nfpcore/nfp_rtsym.h"
 
@@ -749,158 +750,7 @@ nfp_net_reset_tx_queue(struct nfp_net_txq *txq)
 	txq->rd_p = 0;
 }
 
-static int
-nfp_net_nfd3_tx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
-		uint16_t nb_desc, unsigned int socket_id,
-		const struct rte_eth_txconf *tx_conf)
-{
-	int ret;
-	uint16_t min_tx_desc;
-	uint16_t max_tx_desc;
-	const struct rte_memzone *tz;
-	struct nfp_net_txq *txq;
-	uint16_t tx_free_thresh;
-	struct nfp_net_hw *hw;
-	uint32_t tx_desc_sz;
-
-	hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private);
-
-	PMD_INIT_FUNC_TRACE();
-
-	ret = nfp_net_tx_desc_limits(hw, &min_tx_desc, &max_tx_desc);
-	if (ret != 0)
-		return ret;
-
-	/* Validating number of descriptors */
-	tx_desc_sz = nb_desc * sizeof(struct nfp_net_nfd3_tx_desc);
-	if ((NFD3_TX_DESC_PER_SIMPLE_PKT * tx_desc_sz) % NFP_ALIGN_RING_DESC != 0 ||
-			nb_desc > max_tx_desc || nb_desc < min_tx_desc) {
-		PMD_DRV_LOG(ERR, "Wrong nb_desc value");
-		return -EINVAL;
-	}
-
-	tx_free_thresh = (uint16_t)((tx_conf->tx_free_thresh) ?
-			tx_conf->tx_free_thresh :
-			DEFAULT_TX_FREE_THRESH);
-
-	if (tx_free_thresh > (nb_desc)) {
-		PMD_DRV_LOG(ERR,
-				"tx_free_thresh must be less than the number of TX "
-				"descriptors. (tx_free_thresh=%u port=%d "
-				"queue=%d)", (unsigned int)tx_free_thresh,
-				dev->data->port_id, (int)queue_idx);
-		return -(EINVAL);
-	}
-
-	/*
-	 * Free memory prior to re-allocation if needed. This is the case after
-	 * calling nfp_net_stop
-	 */
-	if (dev->data->tx_queues[queue_idx]) {
-		PMD_TX_LOG(DEBUG, "Freeing memory prior to re-allocation %d",
-				queue_idx);
-		nfp_net_tx_queue_release(dev, queue_idx);
-		dev->data->tx_queues[queue_idx] = NULL;
-	}
-
-	/* Allocating tx queue data structure */
-	txq = rte_zmalloc_socket("ethdev TX queue", sizeof(struct nfp_net_txq),
-			RTE_CACHE_LINE_SIZE, socket_id);
-	if (txq == NULL) {
-		PMD_DRV_LOG(ERR, "Error allocating tx dma");
-		return -ENOMEM;
-	}
-
-	dev->data->tx_queues[queue_idx] = txq;
-
-	/*
-	 * Allocate TX ring hardware descriptors. A memzone large enough to
-	 * handle the maximum ring size is allocated in order to allow for
-	 * resizing in later calls to the queue setup function.
-	 */
-	tz = rte_eth_dma_zone_reserve(dev, "tx_ring", queue_idx,
-			sizeof(struct nfp_net_nfd3_tx_desc) *
-			NFD3_TX_DESC_PER_SIMPLE_PKT *
-			max_tx_desc, NFP_MEMZONE_ALIGN,
-			socket_id);
-	if (tz == NULL) {
-		PMD_DRV_LOG(ERR, "Error allocating tx dma");
-		nfp_net_tx_queue_release(dev, queue_idx);
-		dev->data->tx_queues[queue_idx] = NULL;
-		return -ENOMEM;
-	}
-
-	txq->tx_count = nb_desc * NFD3_TX_DESC_PER_SIMPLE_PKT;
-	txq->tx_free_thresh = tx_free_thresh;
-	txq->tx_pthresh = tx_conf->tx_thresh.pthresh;
-	txq->tx_hthresh = tx_conf->tx_thresh.hthresh;
-	txq->tx_wthresh = tx_conf->tx_thresh.wthresh;
-
-	/* queue mapping based on firmware configuration */
-	txq->qidx = queue_idx;
-	txq->tx_qcidx = queue_idx * hw->stride_tx;
-	txq->qcp_q = hw->tx_bar + NFP_QCP_QUEUE_OFF(txq->tx_qcidx);
-
-	txq->port_id = dev->data->port_id;
-
-	/* Saving physical and virtual addresses for the TX ring */
-	txq->dma = (uint64_t)tz->iova;
-	txq->txds = (struct nfp_net_nfd3_tx_desc *)tz->addr;
-
-	/* mbuf pointers array for referencing mbufs linked to TX descriptors */
-	txq->txbufs = rte_zmalloc_socket("txq->txbufs",
-			sizeof(*txq->txbufs) * txq->tx_count,
-			RTE_CACHE_LINE_SIZE, socket_id);
-	if (txq->txbufs == NULL) {
-		nfp_net_tx_queue_release(dev, queue_idx);
-		dev->data->tx_queues[queue_idx] = NULL;
-		return -ENOMEM;
-	}
-	PMD_TX_LOG(DEBUG, "txbufs=%p hw_ring=%p dma_addr=0x%" PRIx64,
-			txq->txbufs, txq->txds, (unsigned long)txq->dma);
-
-	nfp_net_reset_tx_queue(txq);
-
-	txq->hw = hw;
-
-	/*
-	 * Telling the HW about the physical address of the TX ring and number
-	 * of descriptors in log2 format
-	 */
-	nn_cfg_writeq(hw, NFP_NET_CFG_TXR_ADDR(queue_idx), txq->dma);
-	nn_cfg_writeb(hw, NFP_NET_CFG_TXR_SZ(queue_idx), rte_log2_u32(txq->tx_count));
-
-	return 0;
-}
-
-/*
- * nfp_net_nfd3_tx_vlan() - Set vlan info in the nfd3 tx desc
- *
- * If enable NFP_NET_CFG_CTRL_TXVLAN_V2
- *	Vlan_info is stored in the meta and
- *	is handled in the nfp_net_nfd3_set_meta_vlan
- * else if enable NFP_NET_CFG_CTRL_TXVLAN
- *	Vlan_info is stored in the tx_desc and
- *	is handled in the nfp_net_nfd3_tx_vlan
- */
-static void
-nfp_net_nfd3_tx_vlan(struct nfp_net_txq *txq,
-		struct nfp_net_nfd3_tx_desc *txd,
-		struct rte_mbuf *mb)
-{
-	struct nfp_net_hw *hw = txq->hw;
-
-	if ((hw->cap & NFP_NET_CFG_CTRL_TXVLAN_V2) != 0 ||
-			(hw->cap & NFP_NET_CFG_CTRL_TXVLAN) == 0)
-		return;
-
-	if ((mb->ol_flags & RTE_MBUF_F_TX_VLAN) != 0) {
-		txd->flags |= PCIE_DESC_TX_VLAN;
-		txd->vlan = mb->vlan_tci;
-	}
-}
-
-static void
+void
 nfp_net_set_meta_vlan(struct nfp_net_meta_raw *meta_data,
 		struct rte_mbuf *pkt,
 		uint8_t layer)
@@ -914,188 +764,6 @@ nfp_net_set_meta_vlan(struct nfp_net_meta_raw *meta_data,
 	meta_data->data[layer] = rte_cpu_to_be_32(tpid << 16 | vlan_tci);
 }
 
-static void
-nfp_net_nfd3_set_meta_data(struct nfp_net_meta_raw *meta_data,
-		struct nfp_net_txq *txq,
-		struct rte_mbuf *pkt)
-{
-	uint8_t vlan_layer = 0;
-	struct nfp_net_hw *hw;
-	uint32_t meta_info;
-	uint8_t layer = 0;
-	char *meta;
-
-	hw = txq->hw;
-
-	if ((pkt->ol_flags & RTE_MBUF_F_TX_VLAN) != 0 &&
-			(hw->ctrl & NFP_NET_CFG_CTRL_TXVLAN_V2) != 0) {
-		if (meta_data->length == 0)
-			meta_data->length = NFP_NET_META_HEADER_SIZE;
-		meta_data->length += NFP_NET_META_FIELD_SIZE;
-		meta_data->header |= NFP_NET_META_VLAN;
-	}
-
-	if (meta_data->length == 0)
-		return;
-
-	meta_info = meta_data->header;
-	meta_data->header = rte_cpu_to_be_32(meta_data->header);
-	meta = rte_pktmbuf_prepend(pkt, meta_data->length);
-	memcpy(meta, &meta_data->header, sizeof(meta_data->header));
-	meta += NFP_NET_META_HEADER_SIZE;
-
-	for (; meta_info != 0; meta_info >>= NFP_NET_META_FIELD_SIZE, layer++,
-			meta += NFP_NET_META_FIELD_SIZE) {
-		switch (meta_info & NFP_NET_META_FIELD_MASK) {
-		case NFP_NET_META_VLAN:
-			if (vlan_layer > 0) {
-				PMD_DRV_LOG(ERR, "At most 1 layers of vlan is supported");
-				return;
-			}
-			nfp_net_set_meta_vlan(meta_data, pkt, layer);
-			vlan_layer++;
-			break;
-		default:
-			PMD_DRV_LOG(ERR, "The metadata type not supported");
-			return;
-		}
-
-		memcpy(meta, &meta_data->data[layer], sizeof(meta_data->data[layer]));
-	}
-}
-
-uint16_t
-nfp_net_nfd3_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
-{
-	struct nfp_net_txq *txq;
-	struct nfp_net_hw *hw;
-	struct nfp_net_nfd3_tx_desc *txds, txd;
-	struct nfp_net_meta_raw meta_data;
-	struct rte_mbuf *pkt;
-	uint64_t dma_addr;
-	int pkt_size, dma_size;
-	uint16_t free_descs, issued_descs;
-	struct rte_mbuf **lmbuf;
-	int i;
-
-	txq = tx_queue;
-	hw = txq->hw;
-	txds = &txq->txds[txq->wr_p];
-
-	PMD_TX_LOG(DEBUG, "working for queue %u at pos %d and %u packets",
-			txq->qidx, txq->wr_p, nb_pkts);
-
-	if (nfp_net_nfd3_free_tx_desc(txq) < NFD3_TX_DESC_PER_SIMPLE_PKT * nb_pkts ||
-			nfp_net_nfd3_txq_full(txq))
-		nfp_net_tx_free_bufs(txq);
-
-	free_descs = (uint16_t)nfp_net_nfd3_free_tx_desc(txq);
-	if (unlikely(free_descs == 0))
-		return 0;
-
-	pkt = *tx_pkts;
-
-	issued_descs = 0;
-	PMD_TX_LOG(DEBUG, "queue: %u. Sending %u packets",
-			txq->qidx, nb_pkts);
-	/* Sending packets */
-	for (i = 0; i < nb_pkts && free_descs > 0; i++) {
-		memset(&meta_data, 0, sizeof(meta_data));
-		/* Grabbing the mbuf linked to the current descriptor */
-		lmbuf = &txq->txbufs[txq->wr_p].mbuf;
-		/* Warming the cache for releasing the mbuf later on */
-		RTE_MBUF_PREFETCH_TO_FREE(*lmbuf);
-
-		pkt = *(tx_pkts + i);
-
-		nfp_net_nfd3_set_meta_data(&meta_data, txq, pkt);
-
-		if (unlikely(pkt->nb_segs > 1 &&
-				!(hw->cap & NFP_NET_CFG_CTRL_GATHER))) {
-			PMD_INIT_LOG(ERR, "Multisegment packet not supported");
-			goto xmit_end;
-		}
-
-		/* Checking if we have enough descriptors */
-		if (unlikely(pkt->nb_segs > free_descs))
-			goto xmit_end;
-
-		/*
-		 * Checksum and VLAN flags just in the first descriptor for a
-		 * multisegment packet, but TSO info needs to be in all of them.
-		 */
-		txd.data_len = pkt->pkt_len;
-		nfp_net_nfd3_tx_tso(txq, &txd, pkt);
-		nfp_net_nfd3_tx_cksum(txq, &txd, pkt);
-		nfp_net_nfd3_tx_vlan(txq, &txd, pkt);
-
-		/*
-		 * mbuf data_len is the data in one segment and pkt_len data
-		 * in the whole packet. When the packet is just one segment,
-		 * then data_len = pkt_len
-		 */
-		pkt_size = pkt->pkt_len;
-
-		while (pkt != NULL && free_descs > 0) {
-			/* Copying TSO, VLAN and cksum info */
-			*txds = txd;
-
-			/* Releasing mbuf used by this descriptor previously*/
-			if (*lmbuf)
-				rte_pktmbuf_free_seg(*lmbuf);
-
-			/*
-			 * Linking mbuf with descriptor for being released
-			 * next time descriptor is used
-			 */
-			*lmbuf = pkt;
-
-			dma_size = pkt->data_len;
-			dma_addr = rte_mbuf_data_iova(pkt);
-			PMD_TX_LOG(DEBUG, "Working with mbuf at dma address:"
-					"%" PRIx64 "", dma_addr);
-
-			/* Filling descriptors fields */
-			txds->dma_len = dma_size;
-			txds->data_len = txd.data_len;
-			txds->dma_addr_hi = (dma_addr >> 32) & 0xff;
-			txds->dma_addr_lo = (dma_addr & 0xffffffff);
-			free_descs--;
-
-			txq->wr_p++;
-			if (unlikely(txq->wr_p == txq->tx_count)) /* wrapping?*/
-				txq->wr_p = 0;
-
-			pkt_size -= dma_size;
-
-			/*
-			 * Making the EOP, packets with just one segment
-			 * the priority
-			 */
-			if (likely(pkt_size == 0))
-				txds->offset_eop = PCIE_DESC_TX_EOP;
-			else
-				txds->offset_eop = 0;
-
-			/* Set the meta_len */
-			txds->offset_eop |= meta_data.length;
-
-			pkt = pkt->next;
-			/* Referencing next free TX descriptor */
-			txds = &txq->txds[txq->wr_p];
-			lmbuf = &txq->txbufs[txq->wr_p].mbuf;
-			issued_descs++;
-		}
-	}
-
-xmit_end:
-	/* Increment write pointers. Force memory write before we let HW know */
-	rte_wmb();
-	nfp_qcp_ptr_add(txq->qcp_q, NFP_QCP_WRITE_PTR, issued_descs);
-
-	return i;
-}
-
 static void
 nfp_net_nfdk_set_meta_data(struct rte_mbuf *pkt,
 		struct nfp_net_txq *txq,
diff --git a/drivers/net/nfp/nfp_rxtx.h b/drivers/net/nfp/nfp_rxtx.h
index f016bf732c..6c81a98ae0 100644
--- a/drivers/net/nfp/nfp_rxtx.h
+++ b/drivers/net/nfp/nfp_rxtx.h
@@ -96,26 +96,10 @@ struct nfp_meta_parsed {
 /* Descriptor alignment */
 #define NFP_ALIGN_RING_DESC 128
 
-/* TX descriptor format */
-#define PCIE_DESC_TX_EOP (1 << 7)
-#define PCIE_DESC_TX_OFFSET_MASK (0x7f)
-
-/* Flags in the host TX descriptor */
-#define PCIE_DESC_TX_CSUM (1 << 7)
-#define PCIE_DESC_TX_IP4_CSUM (1 << 6)
-#define PCIE_DESC_TX_TCP_CSUM (1 << 5)
-#define PCIE_DESC_TX_UDP_CSUM (1 << 4)
-#define PCIE_DESC_TX_VLAN (1 << 3)
-#define PCIE_DESC_TX_LSO (1 << 2)
-#define PCIE_DESC_TX_ENCAP_NONE (0)
-#define PCIE_DESC_TX_ENCAP (1 << 1)
-#define PCIE_DESC_TX_O_IP4_CSUM (1 << 0)
-
 #define NFDK_TX_MAX_DATA_PER_HEAD 0x00001000
 #define NFDK_DESC_TX_DMA_LEN_HEAD 0x0fff
 #define NFDK_DESC_TX_TYPE_HEAD 0xf000
 #define NFDK_DESC_TX_DMA_LEN 0x3fff
-#define NFD3_TX_DESC_PER_SIMPLE_PKT 1
 #define NFDK_TX_DESC_PER_SIMPLE_PKT 2
 #define NFDK_DESC_TX_TYPE_TSO 2
 #define NFDK_DESC_TX_TYPE_SIMPLE 8
@@ -139,37 +123,6 @@ struct nfp_meta_parsed {
 	(idx) % NFDK_TX_DESC_BLOCK_CNT)
 #define D_IDX(ring, idx) ((idx) & ((ring)->tx_count - 1))
 
-struct nfp_net_nfd3_tx_desc {
-	union {
-		struct {
-			uint8_t dma_addr_hi; /* High bits of host buf address */
-			__le16 dma_len;      /* Length to DMA for this desc */
-			uint8_t offset_eop;  /* Offset in buf where pkt starts +
-					      * highest bit is eop flag, low 7bit is meta_len.
-					      */
-			__le32 dma_addr_lo;  /* Low 32bit of host buf addr */
-
-			__le16 mss;          /* MSS to be used for LSO */
-			uint8_t lso_hdrlen;  /* LSO, where the data starts */
-			uint8_t flags;       /* TX Flags, see @PCIE_DESC_TX_* */
-
-			union {
-				struct {
-					/*
-					 * L3 and L4 header offsets required
-					 * for TSOv2
-					 */
-					uint8_t l3_offset;
-					uint8_t l4_offset;
-				};
-				__le16 vlan; /* VLAN tag to add if indicated */
-			};
-			__le16 data_len;     /* Length of frame + meta data */
-		} __rte_packed;
-		__le32 vals[4];
-	};
-};
-
 struct nfp_net_nfdk_tx_desc {
 	union {
 		struct {
@@ -397,30 +350,6 @@ nfp_net_mbuf_alloc_failed(struct nfp_net_rxq *rxq)
 	rte_eth_devices[rxq->port_id].data->rx_mbuf_alloc_failed++;
 }
 
-/* Leaving always free descriptors for avoiding wrapping confusion */
-static inline uint32_t
-nfp_net_nfd3_free_tx_desc(struct nfp_net_txq *txq)
-{
-	if (txq->wr_p >= txq->rd_p)
-		return txq->tx_count - (txq->wr_p - txq->rd_p) - 8;
-	else
-		return txq->rd_p - txq->wr_p - 8;
-}
-
-/*
- * nfp_net_nfd3_txq_full() - Check if the TX queue free descriptors
- * is below tx_free_threshold for firmware of nfd3
- *
- * @txq: TX queue to check
- *
- * This function uses the host copy* of read/write pointers.
- */
-static inline uint32_t
-nfp_net_nfd3_txq_full(struct nfp_net_txq *txq)
-{
-	return (nfp_net_nfd3_free_tx_desc(txq) < txq->tx_free_thresh);
-}
-
 /* set mbuf checksum flags based on RX descriptor flags */
 static inline void
 nfp_net_rx_cksum(struct nfp_net_rxq *rxq, struct nfp_net_rx_desc *rxd,
@@ -449,82 +378,6 @@ nfp_net_rx_cksum(struct nfp_net_rxq *rxq, struct nfp_net_rx_desc *rxd,
 	mb->ol_flags |= RTE_MBUF_F_RX_L4_CKSUM_BAD;
 }
 
-/* nfp_net_nfd3_tx_tso() - Set NFD3 TX descriptor for TSO */
-static inline void
-nfp_net_nfd3_tx_tso(struct nfp_net_txq *txq,
-		struct nfp_net_nfd3_tx_desc *txd,
-		struct rte_mbuf *mb)
-{
-	uint64_t ol_flags;
-	struct nfp_net_hw *hw = txq->hw;
-
-	if (!(hw->cap & NFP_NET_CFG_CTRL_LSO_ANY))
-		goto clean_txd;
-
-	ol_flags = mb->ol_flags;
-
-	if (!(ol_flags & RTE_MBUF_F_TX_TCP_SEG))
-		goto clean_txd;
-
-	txd->l3_offset = mb->l2_len;
-	txd->l4_offset = mb->l2_len + mb->l3_len;
-	txd->lso_hdrlen = mb->l2_len + mb->l3_len + mb->l4_len;
-
-	if (ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK) {
-		txd->l3_offset += mb->outer_l2_len + mb->outer_l3_len;
-		txd->l4_offset += mb->outer_l2_len + mb->outer_l3_len;
-		txd->lso_hdrlen += mb->outer_l2_len + mb->outer_l3_len;
-	}
-
-	txd->mss = rte_cpu_to_le_16(mb->tso_segsz);
-	txd->flags = PCIE_DESC_TX_LSO;
-	return;
-
-clean_txd:
-	txd->flags = 0;
-	txd->l3_offset = 0;
-	txd->l4_offset = 0;
-	txd->lso_hdrlen = 0;
-	txd->mss = 0;
-}
-
-/* nfp_net_nfd3_tx_cksum() - Set TX CSUM offload flags in NFD3 TX descriptor */
-static inline void
-nfp_net_nfd3_tx_cksum(struct nfp_net_txq *txq, struct nfp_net_nfd3_tx_desc *txd,
-		struct rte_mbuf *mb)
-{
-	uint64_t ol_flags;
-	struct nfp_net_hw *hw = txq->hw;
-
-	if (!(hw->cap & NFP_NET_CFG_CTRL_TXCSUM))
-		return;
-
-	ol_flags = mb->ol_flags;
-
-	/* Set TCP csum offload if TSO enabled. */
-	if (ol_flags & RTE_MBUF_F_TX_TCP_SEG)
-		txd->flags |= PCIE_DESC_TX_TCP_CSUM;
-
-	/* IPv6 does not need checksum */
-	if (ol_flags & RTE_MBUF_F_TX_IP_CKSUM)
-		txd->flags |= PCIE_DESC_TX_IP4_CSUM;
-
-	if (ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK)
-		txd->flags |= PCIE_DESC_TX_ENCAP;
-
-	switch (ol_flags & RTE_MBUF_F_TX_L4_MASK) {
-	case RTE_MBUF_F_TX_UDP_CKSUM:
-		txd->flags |= PCIE_DESC_TX_UDP_CSUM;
-		break;
-	case RTE_MBUF_F_TX_TCP_CKSUM:
-		txd->flags |= PCIE_DESC_TX_TCP_CSUM;
-		break;
-	}
-
-	if (ol_flags & (RTE_MBUF_F_TX_IP_CKSUM | RTE_MBUF_F_TX_L4_MASK))
-		txd->flags |= PCIE_DESC_TX_CSUM;
-}
-
 int nfp_net_rx_freelist_setup(struct rte_eth_dev *dev);
 uint32_t nfp_net_rx_queue_count(void *rx_queue);
 uint16_t nfp_net_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
@@ -537,8 +390,7 @@ int nfp_net_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
 		struct rte_mempool *mp);
 void nfp_net_tx_queue_release(struct rte_eth_dev *dev, uint16_t queue_idx);
 void nfp_net_reset_tx_queue(struct nfp_net_txq *txq);
-uint16_t nfp_net_nfd3_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
-		uint16_t nb_pkts);
+
 int nfp_net_tx_queue_setup(struct rte_eth_dev *dev,
 		uint16_t queue_idx,
 		uint16_t nb_desc,
@@ -548,6 +400,9 @@ uint16_t nfp_net_nfdk_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
 		uint16_t nb_pkts);
 int nfp_net_tx_free_bufs(struct nfp_net_txq *txq);
+void nfp_net_set_meta_vlan(struct nfp_net_meta_raw *meta_data,
+		struct rte_mbuf *pkt,
+		uint8_t layer);
 
 #endif /* _NFP_RXTX_H_ */
-- 
2.39.1