From: Chaoyong He <chaoyong.he@corigine.com>
To: dev@dpdk.org
Cc: oss-drivers@corigine.com, niklas.soderlund@corigine.com,
	Chaoyong He <chaoyong.he@corigine.com>
Subject: [PATCH 11/13] net/nfp: move NFDk logic to own source file
Date: Mon, 10 Apr 2023 19:00:13 +0800
Message-Id: <20230410110015.2973660-12-chaoyong.he@corigine.com>
X-Mailer: git-send-email 2.39.1
In-Reply-To: <20230410110015.2973660-1-chaoyong.he@corigine.com>
References: <20230410110015.2973660-1-chaoyong.he@corigine.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

Split out the data structures and logic of NFDk into their own files.
The code is moved verbatim, with no functional change.

Signed-off-by: Chaoyong He <chaoyong.he@corigine.com>
Reviewed-by: Niklas Söderlund <niklas.soderlund@corigine.com>
---
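Reviewer note (this sits below the '---', so it is not part of the commit):
NFDk groups Tx descriptors into 256-byte blocks of 32 8-byte descriptors,
and a packet may never straddle a block boundary. The standalone program
below is only an illustration of the D_BLOCK_CPL() arithmetic used by
nfp_net_nfdk_tx_maybe_close_block(); it is not part of the patch, and the
local struct merely mirrors the 8-byte descriptor defined in nfp_nfdk.h.

  #include <stdio.h>
  #include <stdint.h>

  /* Mirrors the 8-byte descriptor from nfp_nfdk.h */
  struct nfp_net_nfdk_tx_desc { uint64_t raw; };

  #define NFDK_TX_DESC_BLOCK_SZ  256
  #define NFDK_TX_DESC_BLOCK_CNT (NFDK_TX_DESC_BLOCK_SZ / \
          sizeof(struct nfp_net_nfdk_tx_desc))
  /* Descriptors left in the current block at index 'idx' */
  #define D_BLOCK_CPL(idx)       (NFDK_TX_DESC_BLOCK_CNT - \
          (idx) % NFDK_TX_DESC_BLOCK_CNT)

  int main(void)
  {
          unsigned int wr_p = 50; /* example write pointer */

          /* 256 / 8 = 32 descriptors per block */
          printf("descs per block: %zu\n", (size_t)NFDK_TX_DESC_BLOCK_CNT);
          /* 50 % 32 = 18 used, so 14 NOP slots close the block */
          printf("nop slots at wr_p=%u: %zu\n", wr_p, (size_t)D_BLOCK_CPL(wr_p));
          return 0;
  }
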
 drivers/net/nfp/meson.build        |   1 +
 drivers/net/nfp/nfdk/nfp_nfdk.h    | 179 ++++++++++
 drivers/net/nfp/nfdk/nfp_nfdk_dp.c | 421 ++++++++++++++++++++++++
 drivers/net/nfp/nfp_common.c       |   1 +
 drivers/net/nfp/nfp_ethdev.c       |   1 +
 drivers/net/nfp/nfp_ethdev_vf.c    |   1 +
 drivers/net/nfp/nfp_rxtx.c         | 507 +----------------------------
 drivers/net/nfp/nfp_rxtx.h         |  55 ----
 8 files changed, 605 insertions(+), 561 deletions(-)
 create mode 100644 drivers/net/nfp/nfdk/nfp_nfdk.h
 create mode 100644 drivers/net/nfp/nfdk/nfp_nfdk_dp.c

diff --git a/drivers/net/nfp/meson.build b/drivers/net/nfp/meson.build
index 697a1479c8..93c708959c 100644
--- a/drivers/net/nfp/meson.build
+++ b/drivers/net/nfp/meson.build
@@ -11,6 +11,7 @@ sources = files(
         'flower/nfp_flower_ctrl.c',
         'flower/nfp_flower_representor.c',
         'nfd3/nfp_nfd3_dp.c',
+        'nfdk/nfp_nfdk_dp.c',
         'nfpcore/nfp_cpp_pcie_ops.c',
         'nfpcore/nfp_nsp.c',
         'nfpcore/nfp_cppcore.c',
diff --git a/drivers/net/nfp/nfdk/nfp_nfdk.h b/drivers/net/nfp/nfdk/nfp_nfdk.h
new file mode 100644
index 0000000000..43e4d75432
--- /dev/null
+++ b/drivers/net/nfp/nfdk/nfp_nfdk.h
@@ -0,0 +1,179 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright (c) 2023 Corigine, Inc.
+ * All rights reserved.
+ */
+
+#ifndef _NFP_NFDK_H_
+#define _NFP_NFDK_H_
+
+#define NFDK_TX_DESC_PER_SIMPLE_PKT	2
+#define NFDK_TX_DESC_GATHER_MAX		17
+
+#define NFDK_TX_MAX_DATA_PER_HEAD	0x00001000
+#define NFDK_TX_MAX_DATA_PER_DESC	0x00004000
+#define NFDK_TX_MAX_DATA_PER_BLOCK	0x00010000
+
+#define NFDK_DESC_TX_DMA_LEN_HEAD	0x0FFF	/* [0,11] */
+#define NFDK_DESC_TX_DMA_LEN		0x3FFF	/* [0,13] */
+#define NFDK_DESC_TX_TYPE_HEAD		0xF000	/* [12,15] */
+
+#define NFDK_DESC_TX_TYPE_GATHER	1
+#define NFDK_DESC_TX_TYPE_TSO		2
+#define NFDK_DESC_TX_TYPE_SIMPLE	8
+
+/* TX descriptor format */
+#define NFDK_DESC_TX_EOP		RTE_BIT32(14)
+
+/* Flags in the host TX descriptor */
+#define NFDK_DESC_TX_CHAIN_META		RTE_BIT32(3)
+#define NFDK_DESC_TX_ENCAP		RTE_BIT32(2)
+#define NFDK_DESC_TX_L4_CSUM		RTE_BIT32(1)
+#define NFDK_DESC_TX_L3_CSUM		RTE_BIT32(0)
+
+#define NFDK_TX_DESC_BLOCK_SZ		256
+#define NFDK_TX_DESC_BLOCK_CNT		(NFDK_TX_DESC_BLOCK_SZ / \
+					sizeof(struct nfp_net_nfdk_tx_desc))
+#define NFDK_TX_DESC_STOP_CNT		(NFDK_TX_DESC_BLOCK_CNT * \
+					NFDK_TX_DESC_PER_SIMPLE_PKT)
+#define D_BLOCK_CPL(idx)		(NFDK_TX_DESC_BLOCK_CNT - \
+					(idx) % NFDK_TX_DESC_BLOCK_CNT)
+/* Convenience macro for wrapping descriptor index on ring size */
+#define D_IDX(ring, idx)		((idx) & ((ring)->tx_count - 1))
+
+struct nfp_net_nfdk_tx_desc {
+	union {
+		struct {
+			__le16 dma_addr_hi;  /* High bits of host buf address */
+			__le16 dma_len_type; /* Length to DMA for this desc */
+			__le32 dma_addr_lo;  /* Low 32bit of host buf addr */
+		};
+
+		struct {
+			__le16 mss;          /* MSS to be used for LSO */
+			uint8_t lso_hdrlen;  /* LSO, TCP payload offset */
+			uint8_t lso_totsegs; /* LSO, total segments */
+			uint8_t l3_offset;   /* L3 header offset */
+			uint8_t l4_offset;   /* L4 header offset */
+			__le16 lso_meta_res; /* Rsvd bits in TSO metadata */
+		};
+
+		struct {
+			uint8_t flags;       /* TX Flags, see @NFDK_DESC_TX_* */
+			uint8_t reserved[7]; /* meta byte placeholder */
+		};
+
+		__le32 vals[2];
+		__le64 raw;
+	};
+};
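+
+/*
+ * The three anonymous structs in the union above are alternative layouts
+ * of the same 8 bytes: an address/length data descriptor, a TSO metadata
+ * descriptor, and the flags byte of the checksum/metadata descriptor
+ * written by nfp_net_nfdk_tx_cksum().  'raw' exposes the whole descriptor
+ * as a single 64-bit value.
+ */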
+
+static inline uint32_t
+nfp_net_nfdk_free_tx_desc(struct nfp_net_txq *txq)
+{
+	uint32_t free_desc;
+
+	if (txq->wr_p >= txq->rd_p)
+		free_desc = txq->tx_count - (txq->wr_p - txq->rd_p);
+	else
+		free_desc = txq->rd_p - txq->wr_p;
+
+	return (free_desc > NFDK_TX_DESC_STOP_CNT) ?
+			(free_desc - NFDK_TX_DESC_STOP_CNT) : 0;
+}
+
+/*
+ * nfp_net_nfdk_txq_full() - Check if the TX queue free descriptors
+ * is below tx_free_threshold for firmware of nfdk
+ *
+ * @txq: TX queue to check
+ *
+ * This function uses the host copy* of read/write pointers.
+ */
+static inline uint32_t
+nfp_net_nfdk_txq_full(struct nfp_net_txq *txq)
+{
+	return (nfp_net_nfdk_free_tx_desc(txq) < txq->tx_free_thresh);
+}
+
+/* nfp_net_nfdk_tx_cksum() - Set TX CSUM offload flags in TX descriptor of nfdk */
+static inline uint64_t
+nfp_net_nfdk_tx_cksum(struct nfp_net_txq *txq, struct rte_mbuf *mb,
+		uint64_t flags)
+{
+	uint64_t ol_flags;
+	struct nfp_net_hw *hw = txq->hw;
+
+	if ((hw->cap & NFP_NET_CFG_CTRL_TXCSUM) == 0)
+		return flags;
+
+	ol_flags = mb->ol_flags;
+
+	/* Set TCP csum offload if TSO enabled. */
+	if (ol_flags & RTE_MBUF_F_TX_TCP_SEG)
+		flags |= NFDK_DESC_TX_L4_CSUM;
+
+	if (ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK)
+		flags |= NFDK_DESC_TX_ENCAP;
+
+	/* IPv6 does not need checksum */
+	if (ol_flags & RTE_MBUF_F_TX_IP_CKSUM)
+		flags |= NFDK_DESC_TX_L3_CSUM;
+
+	if (ol_flags & RTE_MBUF_F_TX_L4_MASK)
+		flags |= NFDK_DESC_TX_L4_CSUM;
+
+	return flags;
+}
+
+/* nfp_net_nfdk_tx_tso() - Set TX descriptor for TSO of nfdk */
+static inline uint64_t
+nfp_net_nfdk_tx_tso(struct nfp_net_txq *txq, struct rte_mbuf *mb)
+{
+	uint64_t ol_flags;
+	struct nfp_net_nfdk_tx_desc txd;
+	struct nfp_net_hw *hw = txq->hw;
+
+	if ((hw->cap & NFP_NET_CFG_CTRL_LSO_ANY) == 0)
+		goto clean_txd;
+
+	ol_flags = mb->ol_flags;
+
+	if ((ol_flags & RTE_MBUF_F_TX_TCP_SEG) == 0)
+		goto clean_txd;
+
+	txd.l3_offset = mb->l2_len;
+	txd.l4_offset = mb->l2_len + mb->l3_len;
+	txd.lso_meta_res = 0;
+	txd.mss = rte_cpu_to_le_16(mb->tso_segsz);
+	txd.lso_hdrlen = mb->l2_len + mb->l3_len + mb->l4_len;
+	txd.lso_totsegs = (mb->pkt_len + mb->tso_segsz) / mb->tso_segsz;
+
+	if (ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK) {
+		txd.l3_offset += mb->outer_l2_len + mb->outer_l3_len;
+		txd.l4_offset += mb->outer_l2_len + mb->outer_l3_len;
+		txd.lso_hdrlen += mb->outer_l2_len + mb->outer_l3_len;
+	}
+
+	return txd.raw;
+
+clean_txd:
+	txd.l3_offset = 0;
+	txd.l4_offset = 0;
+	txd.lso_hdrlen = 0;
+	txd.mss = 0;
+	txd.lso_totsegs = 0;
+	txd.lso_meta_res = 0;
+
+	return txd.raw;
+}
+
+uint16_t nfp_net_nfdk_xmit_pkts(void *tx_queue,
+		struct rte_mbuf **tx_pkts,
+		uint16_t nb_pkts);
+int nfp_net_nfdk_tx_queue_setup(struct rte_eth_dev *dev,
+		uint16_t queue_idx,
+		uint16_t nb_desc,
+		unsigned int socket_id,
+		const struct rte_eth_txconf *tx_conf);
+
+#endif /* _NFP_NFDK_H_ */
diff --git a/drivers/net/nfp/nfdk/nfp_nfdk_dp.c b/drivers/net/nfp/nfdk/nfp_nfdk_dp.c
new file mode 100644
index 0000000000..ec937c1f50
--- /dev/null
+++ b/drivers/net/nfp/nfdk/nfp_nfdk_dp.c
@@ -0,0 +1,421 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright (c) 2023 Corigine, Inc.
+ * All rights reserved.
+ */
+
+ */ + +#include +#include +#include + +#include "../nfp_logs.h" +#include "../nfp_common.h" +#include "../nfp_rxtx.h" +#include "../nfpcore/nfp_mip.h" +#include "../nfpcore/nfp_rtsym.h" +#include "nfp_nfdk.h" + +static inline int +nfp_net_nfdk_headlen_to_segs(unsigned int headlen) +{ + return DIV_ROUND_UP(headlen + + NFDK_TX_MAX_DATA_PER_DESC - + NFDK_TX_MAX_DATA_PER_HEAD, + NFDK_TX_MAX_DATA_PER_DESC); +} + +static int +nfp_net_nfdk_tx_maybe_close_block(struct nfp_net_txq *txq, struct rte_mbuf *pkt) +{ + unsigned int n_descs, wr_p, i, nop_slots; + struct rte_mbuf *pkt_temp; + + pkt_temp = pkt; + n_descs = nfp_net_nfdk_headlen_to_segs(pkt_temp->data_len); + while (pkt_temp->next) { + pkt_temp = pkt_temp->next; + n_descs += DIV_ROUND_UP(pkt_temp->data_len, NFDK_TX_MAX_DATA_PER_DESC); + } + + if (unlikely(n_descs > NFDK_TX_DESC_GATHER_MAX)) + return -EINVAL; + + /* Under count by 1 (don't count meta) for the round down to work out */ + n_descs += !!(pkt->ol_flags & RTE_MBUF_F_TX_TCP_SEG); + + if (round_down(txq->wr_p, NFDK_TX_DESC_BLOCK_CNT) != + round_down(txq->wr_p + n_descs, NFDK_TX_DESC_BLOCK_CNT)) + goto close_block; + + if ((uint32_t)txq->data_pending + pkt->pkt_len > NFDK_TX_MAX_DATA_PER_BLOCK) + goto close_block; + + return 0; + +close_block: + wr_p = txq->wr_p; + nop_slots = D_BLOCK_CPL(wr_p); + + memset(&txq->ktxds[wr_p], 0, nop_slots * sizeof(struct nfp_net_nfdk_tx_desc)); + for (i = wr_p; i < nop_slots + wr_p; i++) { + if (txq->txbufs[i].mbuf) { + rte_pktmbuf_free_seg(txq->txbufs[i].mbuf); + txq->txbufs[i].mbuf = NULL; + } + } + txq->data_pending = 0; + txq->wr_p = D_IDX(txq, txq->wr_p + nop_slots); + + return nop_slots; +} + +static void +nfp_net_nfdk_set_meta_data(struct rte_mbuf *pkt, + struct nfp_net_txq *txq, + uint64_t *metadata) +{ + char *meta; + uint8_t layer = 0; + uint32_t meta_type; + struct nfp_net_hw *hw; + uint32_t header_offset; + uint8_t vlan_layer = 0; + struct nfp_net_meta_raw meta_data; + + memset(&meta_data, 0, sizeof(meta_data)); + hw = txq->hw; + + if ((pkt->ol_flags & RTE_MBUF_F_TX_VLAN) != 0 && + (hw->ctrl & NFP_NET_CFG_CTRL_TXVLAN_V2) != 0) { + if (meta_data.length == 0) + meta_data.length = NFP_NET_META_HEADER_SIZE; + meta_data.length += NFP_NET_META_FIELD_SIZE; + meta_data.header |= NFP_NET_META_VLAN; + } + + if (meta_data.length == 0) + return; + + meta_type = meta_data.header; + header_offset = meta_type << NFP_NET_META_NFDK_LENGTH; + meta_data.header = header_offset | meta_data.length; + meta_data.header = rte_cpu_to_be_32(meta_data.header); + meta = rte_pktmbuf_prepend(pkt, meta_data.length); + memcpy(meta, &meta_data.header, sizeof(meta_data.header)); + meta += NFP_NET_META_HEADER_SIZE; + + for (; meta_type != 0; meta_type >>= NFP_NET_META_FIELD_SIZE, layer++, + meta += NFP_NET_META_FIELD_SIZE) { + switch (meta_type & NFP_NET_META_FIELD_MASK) { + case NFP_NET_META_VLAN: + if (vlan_layer > 0) { + PMD_DRV_LOG(ERR, "At most 1 layers of vlan is supported"); + return; + } + nfp_net_set_meta_vlan(&meta_data, pkt, layer); + vlan_layer++; + break; + default: + PMD_DRV_LOG(ERR, "The metadata type not supported"); + return; + } + + memcpy(meta, &meta_data.data[layer], sizeof(meta_data.data[layer])); + } + + *metadata = NFDK_DESC_TX_CHAIN_META; +} + +uint16_t +nfp_net_nfdk_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts) +{ + uint32_t buf_idx; + uint64_t dma_addr; + uint16_t free_descs; + uint32_t npkts = 0; + uint64_t metadata = 0; + uint16_t issued_descs = 0; + struct nfp_net_txq *txq; + struct nfp_net_hw *hw; + struct 
+static int
+nfp_net_nfdk_tx_maybe_close_block(struct nfp_net_txq *txq, struct rte_mbuf *pkt)
+{
+	unsigned int n_descs, wr_p, i, nop_slots;
+	struct rte_mbuf *pkt_temp;
+
+	pkt_temp = pkt;
+	n_descs = nfp_net_nfdk_headlen_to_segs(pkt_temp->data_len);
+	while (pkt_temp->next) {
+		pkt_temp = pkt_temp->next;
+		n_descs += DIV_ROUND_UP(pkt_temp->data_len, NFDK_TX_MAX_DATA_PER_DESC);
+	}
+
+	if (unlikely(n_descs > NFDK_TX_DESC_GATHER_MAX))
+		return -EINVAL;
+
+	/* Under count by 1 (don't count meta) for the round down to work out */
+	n_descs += !!(pkt->ol_flags & RTE_MBUF_F_TX_TCP_SEG);
+
+	if (round_down(txq->wr_p, NFDK_TX_DESC_BLOCK_CNT) !=
+			round_down(txq->wr_p + n_descs, NFDK_TX_DESC_BLOCK_CNT))
+		goto close_block;
+
+	if ((uint32_t)txq->data_pending + pkt->pkt_len > NFDK_TX_MAX_DATA_PER_BLOCK)
+		goto close_block;
+
+	return 0;
+
+close_block:
+	wr_p = txq->wr_p;
+	nop_slots = D_BLOCK_CPL(wr_p);
+
+	memset(&txq->ktxds[wr_p], 0, nop_slots * sizeof(struct nfp_net_nfdk_tx_desc));
+	for (i = wr_p; i < nop_slots + wr_p; i++) {
+		if (txq->txbufs[i].mbuf) {
+			rte_pktmbuf_free_seg(txq->txbufs[i].mbuf);
+			txq->txbufs[i].mbuf = NULL;
+		}
+	}
+	txq->data_pending = 0;
+	txq->wr_p = D_IDX(txq, txq->wr_p + nop_slots);
+
+	return nop_slots;
+}
+
+static void
+nfp_net_nfdk_set_meta_data(struct rte_mbuf *pkt,
+		struct nfp_net_txq *txq,
+		uint64_t *metadata)
+{
+	char *meta;
+	uint8_t layer = 0;
+	uint32_t meta_type;
+	struct nfp_net_hw *hw;
+	uint32_t header_offset;
+	uint8_t vlan_layer = 0;
+	struct nfp_net_meta_raw meta_data;
+
+	memset(&meta_data, 0, sizeof(meta_data));
+	hw = txq->hw;
+
+	if ((pkt->ol_flags & RTE_MBUF_F_TX_VLAN) != 0 &&
+			(hw->ctrl & NFP_NET_CFG_CTRL_TXVLAN_V2) != 0) {
+		if (meta_data.length == 0)
+			meta_data.length = NFP_NET_META_HEADER_SIZE;
+		meta_data.length += NFP_NET_META_FIELD_SIZE;
+		meta_data.header |= NFP_NET_META_VLAN;
+	}
+
+	if (meta_data.length == 0)
+		return;
+
+	meta_type = meta_data.header;
+	header_offset = meta_type << NFP_NET_META_NFDK_LENGTH;
+	meta_data.header = header_offset | meta_data.length;
+	meta_data.header = rte_cpu_to_be_32(meta_data.header);
+	meta = rte_pktmbuf_prepend(pkt, meta_data.length);
+	memcpy(meta, &meta_data.header, sizeof(meta_data.header));
+	meta += NFP_NET_META_HEADER_SIZE;
+
+	for (; meta_type != 0; meta_type >>= NFP_NET_META_FIELD_SIZE, layer++,
+			meta += NFP_NET_META_FIELD_SIZE) {
+		switch (meta_type & NFP_NET_META_FIELD_MASK) {
+		case NFP_NET_META_VLAN:
+			if (vlan_layer > 0) {
+				PMD_DRV_LOG(ERR, "At most 1 layers of vlan is supported");
+				return;
+			}
+			nfp_net_set_meta_vlan(&meta_data, pkt, layer);
+			vlan_layer++;
+			break;
+		default:
+			PMD_DRV_LOG(ERR, "The metadata type not supported");
+			return;
+		}
+
+		memcpy(meta, &meta_data.data[layer], sizeof(meta_data.data[layer]));
+	}
+
+	*metadata = NFDK_DESC_TX_CHAIN_META;
+}
+
+uint16_t
+nfp_net_nfdk_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
+{
+	uint32_t buf_idx;
+	uint64_t dma_addr;
+	uint16_t free_descs;
+	uint32_t npkts = 0;
+	uint64_t metadata = 0;
+	uint16_t issued_descs = 0;
+	struct nfp_net_txq *txq;
+	struct nfp_net_hw *hw;
+	struct nfp_net_nfdk_tx_desc *ktxds;
+	struct rte_mbuf *pkt, *temp_pkt;
+	struct rte_mbuf **lmbuf;
+
+	txq = tx_queue;
+	hw = txq->hw;
+
+	PMD_TX_LOG(DEBUG, "working for queue %u at pos %d and %u packets",
+			txq->qidx, txq->wr_p, nb_pkts);
+
+	if ((nfp_net_nfdk_free_tx_desc(txq) < NFDK_TX_DESC_PER_SIMPLE_PKT *
+			nb_pkts) || (nfp_net_nfdk_txq_full(txq)))
+		nfp_net_tx_free_bufs(txq);
+
+	free_descs = (uint16_t)nfp_net_nfdk_free_tx_desc(txq);
+	if (unlikely(free_descs == 0))
+		return 0;
+
+	PMD_TX_LOG(DEBUG, "queue: %u. Sending %u packets", txq->qidx, nb_pkts);
+	/* Sending packets */
+	while ((npkts < nb_pkts) && free_descs) {
+		uint32_t type, dma_len, dlen_type, tmp_dlen;
+		int nop_descs, used_descs;
+
+		pkt = *(tx_pkts + npkts);
+		nop_descs = nfp_net_nfdk_tx_maybe_close_block(txq, pkt);
+		if (nop_descs < 0)
+			goto xmit_end;
+
+		issued_descs += nop_descs;
+		ktxds = &txq->ktxds[txq->wr_p];
+		/* Grabbing the mbuf linked to the current descriptor */
+		buf_idx = txq->wr_p;
+		lmbuf = &txq->txbufs[buf_idx++].mbuf;
+		/* Warming the cache for releasing the mbuf later on */
+		RTE_MBUF_PREFETCH_TO_FREE(*lmbuf);
+
+		temp_pkt = pkt;
+		nfp_net_nfdk_set_meta_data(pkt, txq, &metadata);
+
+		if (unlikely(pkt->nb_segs > 1 &&
+				!(hw->cap & NFP_NET_CFG_CTRL_GATHER))) {
+			PMD_INIT_LOG(ERR, "Multisegment packet not supported");
+			goto xmit_end;
+		}
+
+		/*
+		 * Checksum and VLAN flags just in the first descriptor for a
+		 * multisegment packet, but TSO info needs to be in all of them.
+		 */
+
+		dma_len = pkt->data_len;
+		if ((hw->cap & NFP_NET_CFG_CTRL_LSO_ANY) &&
+				(pkt->ol_flags & RTE_MBUF_F_TX_TCP_SEG)) {
+			type = NFDK_DESC_TX_TYPE_TSO;
+		} else if (pkt->next == NULL && dma_len <= NFDK_TX_MAX_DATA_PER_HEAD) {
+			type = NFDK_DESC_TX_TYPE_SIMPLE;
+		} else {
+			type = NFDK_DESC_TX_TYPE_GATHER;
+		}
+
+		/* Implicitly truncates to chunk in below logic */
+		dma_len -= 1;
+
+		/*
+		 * We will do our best to pass as much data as we can in descriptor
+		 * and we need to make sure the first descriptor includes whole
+		 * head since there is limitation in firmware side. Sometimes the
+		 * value of 'dma_len & NFDK_DESC_TX_DMA_LEN_HEAD' will be less
+		 * than packet head len.
+		 */
+		dlen_type = (dma_len > NFDK_DESC_TX_DMA_LEN_HEAD ?
+				NFDK_DESC_TX_DMA_LEN_HEAD : dma_len) |
+				(NFDK_DESC_TX_TYPE_HEAD & (type << 12));
+		ktxds->dma_len_type = rte_cpu_to_le_16(dlen_type);
+		dma_addr = rte_mbuf_data_iova(pkt);
+		PMD_TX_LOG(DEBUG, "Working with mbuf at dma address:"
+				"%" PRIx64 "", dma_addr);
+		ktxds->dma_addr_hi = rte_cpu_to_le_16(dma_addr >> 32);
+		ktxds->dma_addr_lo = rte_cpu_to_le_32(dma_addr & 0xffffffff);
+		ktxds++;
+
+		/*
+		 * Preserve the original dlen_type, this way below the EOP logic
+		 * can use dlen_type.
+		 */
+		tmp_dlen = dlen_type & NFDK_DESC_TX_DMA_LEN_HEAD;
+		dma_len -= tmp_dlen;
+		dma_addr += tmp_dlen + 1;
+
+		/*
+		 * The rest of the data (if any) will be in larger DMA descriptors
+		 * and is handled with the dma_len loop.
+		 */
+		while (pkt) {
+			if (*lmbuf)
+				rte_pktmbuf_free_seg(*lmbuf);
+			*lmbuf = pkt;
+			while (dma_len > 0) {
+				dma_len -= 1;
+				dlen_type = NFDK_DESC_TX_DMA_LEN & dma_len;
+
+				ktxds->dma_len_type = rte_cpu_to_le_16(dlen_type);
+				ktxds->dma_addr_hi = rte_cpu_to_le_16(dma_addr >> 32);
+				ktxds->dma_addr_lo = rte_cpu_to_le_32(dma_addr & 0xffffffff);
+				ktxds++;
+
+				dma_len -= dlen_type;
+				dma_addr += dlen_type + 1;
+			}
+
+			if (pkt->next == NULL)
+				break;
+
+			pkt = pkt->next;
+			dma_len = pkt->data_len;
+			dma_addr = rte_mbuf_data_iova(pkt);
+			PMD_TX_LOG(DEBUG, "Working with mbuf at dma address:"
+					"%" PRIx64 "", dma_addr);
+
+			lmbuf = &txq->txbufs[buf_idx++].mbuf;
+		}
+
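+		/*
+		 * dlen_type still holds the length bits of the last data
+		 * descriptor written, so OR-ing NFDK_DESC_TX_EOP into it
+		 * marks the end of the packet data.
+		 */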
+		(ktxds - 1)->dma_len_type = rte_cpu_to_le_16(dlen_type | NFDK_DESC_TX_EOP);
+
+		ktxds->raw = rte_cpu_to_le_64(nfp_net_nfdk_tx_cksum(txq, temp_pkt, metadata));
+		ktxds++;
+
+		if ((hw->cap & NFP_NET_CFG_CTRL_LSO_ANY) &&
+				(temp_pkt->ol_flags & RTE_MBUF_F_TX_TCP_SEG)) {
+			ktxds->raw = rte_cpu_to_le_64(nfp_net_nfdk_tx_tso(txq, temp_pkt));
+			ktxds++;
+		}
+
+		used_descs = ktxds - txq->ktxds - txq->wr_p;
+		if (round_down(txq->wr_p, NFDK_TX_DESC_BLOCK_CNT) !=
+				round_down(txq->wr_p + used_descs - 1, NFDK_TX_DESC_BLOCK_CNT)) {
+			PMD_INIT_LOG(INFO, "Used descs cross block boundary");
+			goto xmit_end;
+		}
+
+		txq->wr_p = D_IDX(txq, txq->wr_p + used_descs);
+		if (txq->wr_p % NFDK_TX_DESC_BLOCK_CNT)
+			txq->data_pending += temp_pkt->pkt_len;
+		else
+			txq->data_pending = 0;
+
+		issued_descs += used_descs;
+		npkts++;
+		free_descs = (uint16_t)nfp_net_nfdk_free_tx_desc(txq);
+	}
+
+xmit_end:
+	/* Increment write pointers. Force memory write before we let HW know */
+	rte_wmb();
+	nfp_qcp_ptr_add(txq->qcp_q, NFP_QCP_WRITE_PTR, issued_descs);
+
+	return npkts;
+}
+
+int
+nfp_net_nfdk_tx_queue_setup(struct rte_eth_dev *dev,
+		uint16_t queue_idx,
+		uint16_t nb_desc,
+		unsigned int socket_id,
+		const struct rte_eth_txconf *tx_conf)
+{
+	int ret;
+	uint16_t min_tx_desc;
+	uint16_t max_tx_desc;
+	const struct rte_memzone *tz;
+	struct nfp_net_txq *txq;
+	uint16_t tx_free_thresh;
+	struct nfp_net_hw *hw;
+	uint32_t tx_desc_sz;
+
+	hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+
+	PMD_INIT_FUNC_TRACE();
+
+	ret = nfp_net_tx_desc_limits(hw, &min_tx_desc, &max_tx_desc);
+	if (ret != 0)
+		return ret;
+
+	/* Validating number of descriptors */
+	tx_desc_sz = nb_desc * sizeof(struct nfp_net_nfdk_tx_desc);
+	if ((NFDK_TX_DESC_PER_SIMPLE_PKT * tx_desc_sz) % NFP_ALIGN_RING_DESC != 0 ||
+			(NFDK_TX_DESC_PER_SIMPLE_PKT * nb_desc) % NFDK_TX_DESC_BLOCK_CNT != 0 ||
+			nb_desc > max_tx_desc || nb_desc < min_tx_desc) {
+		PMD_DRV_LOG(ERR, "Wrong nb_desc value");
+		return -EINVAL;
+	}
+
+	tx_free_thresh = (uint16_t)((tx_conf->tx_free_thresh) ?
+			tx_conf->tx_free_thresh :
+			DEFAULT_TX_FREE_THRESH);
+
+	if (tx_free_thresh > (nb_desc)) {
+		PMD_DRV_LOG(ERR,
+				"tx_free_thresh must be less than the number of TX "
+				"descriptors. (tx_free_thresh=%u port=%d "
+				"queue=%d)", (unsigned int)tx_free_thresh,
+				dev->data->port_id, (int)queue_idx);
+		return -(EINVAL);
+	}
+
+	/*
+	 * Free memory prior to re-allocation if needed. This is the case after
+	 * calling nfp_net_stop
+	 */
+	if (dev->data->tx_queues[queue_idx]) {
+		PMD_TX_LOG(DEBUG, "Freeing memory prior to re-allocation %d",
+				queue_idx);
+		nfp_net_tx_queue_release(dev, queue_idx);
+		dev->data->tx_queues[queue_idx] = NULL;
+	}
+
+	/* Allocating tx queue data structure */
+	txq = rte_zmalloc_socket("ethdev TX queue", sizeof(struct nfp_net_txq),
+			RTE_CACHE_LINE_SIZE, socket_id);
+	if (txq == NULL) {
+		PMD_DRV_LOG(ERR, "Error allocating tx dma");
+		return -ENOMEM;
+	}
+
+	/*
+	 * Allocate TX ring hardware descriptors. A memzone large enough to
+	 * handle the maximum ring size is allocated in order to allow for
+	 * resizing in later calls to the queue setup function.
+	 */
+	tz = rte_eth_dma_zone_reserve(dev, "tx_ring", queue_idx,
+			sizeof(struct nfp_net_nfdk_tx_desc) *
+			NFDK_TX_DESC_PER_SIMPLE_PKT *
+			max_tx_desc, NFP_MEMZONE_ALIGN,
+			socket_id);
+	if (tz == NULL) {
+		PMD_DRV_LOG(ERR, "Error allocating tx dma");
+		nfp_net_tx_queue_release(dev, queue_idx);
+		return -ENOMEM;
+	}
+
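+	/*
+	 * Every simple packet consumes NFDK_TX_DESC_PER_SIMPLE_PKT (i.e. two)
+	 * descriptor slots, so the ring is sized at twice the requested
+	 * descriptor count.
+	 */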
+	txq->tx_count = nb_desc * NFDK_TX_DESC_PER_SIMPLE_PKT;
+	txq->tx_free_thresh = tx_free_thresh;
+	txq->tx_pthresh = tx_conf->tx_thresh.pthresh;
+	txq->tx_hthresh = tx_conf->tx_thresh.hthresh;
+	txq->tx_wthresh = tx_conf->tx_thresh.wthresh;
+
+	/* queue mapping based on firmware configuration */
+	txq->qidx = queue_idx;
+	txq->tx_qcidx = queue_idx * hw->stride_tx;
+	txq->qcp_q = hw->tx_bar + NFP_QCP_QUEUE_OFF(txq->tx_qcidx);
+
+	txq->port_id = dev->data->port_id;
+
+	/* Saving physical and virtual addresses for the TX ring */
+	txq->dma = (uint64_t)tz->iova;
+	txq->ktxds = (struct nfp_net_nfdk_tx_desc *)tz->addr;
+
+	/* mbuf pointers array for referencing mbufs linked to TX descriptors */
+	txq->txbufs = rte_zmalloc_socket("txq->txbufs",
+			sizeof(*txq->txbufs) * txq->tx_count,
+			RTE_CACHE_LINE_SIZE, socket_id);
+
+	if (txq->txbufs == NULL) {
+		nfp_net_tx_queue_release(dev, queue_idx);
+		return -ENOMEM;
+	}
+	PMD_TX_LOG(DEBUG, "txbufs=%p hw_ring=%p dma_addr=0x%" PRIx64,
+			txq->txbufs, txq->ktxds, (unsigned long)txq->dma);
+
+	nfp_net_reset_tx_queue(txq);
+
+	dev->data->tx_queues[queue_idx] = txq;
+	txq->hw = hw;
+	/*
+	 * Telling the HW about the physical address of the TX ring and number
+	 * of descriptors in log2 format
+	 */
+	nn_cfg_writeq(hw, NFP_NET_CFG_TXR_ADDR(queue_idx), txq->dma);
+	nn_cfg_writeb(hw, NFP_NET_CFG_TXR_SZ(queue_idx), rte_log2_u32(txq->tx_count));
+
+	return 0;
+}
diff --git a/drivers/net/nfp/nfp_common.c b/drivers/net/nfp/nfp_common.c
index ca334d56ab..f17632a364 100644
--- a/drivers/net/nfp/nfp_common.c
+++ b/drivers/net/nfp/nfp_common.c
@@ -45,6 +45,7 @@
 #include "nfp_cpp_bridge.h"
 
 #include "nfd3/nfp_nfd3.h"
+#include "nfdk/nfp_nfdk.h"
 
 #include 
 #include 
diff --git a/drivers/net/nfp/nfp_ethdev.c b/drivers/net/nfp/nfp_ethdev.c
index f212a4a10e..c2684ec268 100644
--- a/drivers/net/nfp/nfp_ethdev.c
+++ b/drivers/net/nfp/nfp_ethdev.c
@@ -39,6 +39,7 @@
 #include "nfp_cpp_bridge.h"
 
 #include "nfd3/nfp_nfd3.h"
+#include "nfdk/nfp_nfdk.h"
 #include "flower/nfp_flower.h"
 
 static int
diff --git a/drivers/net/nfp/nfp_ethdev_vf.c b/drivers/net/nfp/nfp_ethdev_vf.c
index 80a8983deb..5fd2dc11a3 100644
--- a/drivers/net/nfp/nfp_ethdev_vf.c
+++ b/drivers/net/nfp/nfp_ethdev_vf.c
@@ -23,6 +23,7 @@
 #include "nfp_rxtx.h"
 #include "nfp_logs.h"
 #include "nfd3/nfp_nfd3.h"
+#include "nfdk/nfp_nfdk.h"
 
 static void
 nfp_netvf_read_mac(struct nfp_net_hw *hw)
diff --git a/drivers/net/nfp/nfp_rxtx.c b/drivers/net/nfp/nfp_rxtx.c
index 76021b64ee..9eaa0b89c1 100644
--- a/drivers/net/nfp/nfp_rxtx.c
+++ b/drivers/net/nfp/nfp_rxtx.c
@@ -21,6 +21,7 @@
 #include "nfp_rxtx.h"
 #include "nfp_logs.h"
 #include "nfd3/nfp_nfd3.h"
+#include "nfdk/nfp_nfdk.h"
 #include "nfpcore/nfp_mip.h"
 #include "nfpcore/nfp_rtsym.h"
 
@@ -764,187 +765,6 @@ nfp_net_set_meta_vlan(struct nfp_net_meta_raw *meta_data,
 	meta_data->data[layer] = rte_cpu_to_be_32(tpid << 16 | vlan_tci);
 }
 
-static void
-nfp_net_nfdk_set_meta_data(struct rte_mbuf *pkt,
-		struct nfp_net_txq *txq,
-		uint64_t *metadata)
-{
-	char *meta;
-	uint8_t layer = 0;
-	uint32_t meta_type;
-	struct nfp_net_hw *hw;
-	uint32_t header_offset;
-	uint8_t vlan_layer = 0;
-	struct nfp_net_meta_raw meta_data;
-
-	memset(&meta_data, 0, sizeof(meta_data));
-	hw = txq->hw;
-
-	if ((pkt->ol_flags & RTE_MBUF_F_TX_VLAN) != 0 &&
-			(hw->ctrl & NFP_NET_CFG_CTRL_TXVLAN_V2) != 0) {
-		if (meta_data.length == 0)
-			meta_data.length = NFP_NET_META_HEADER_SIZE;
-		meta_data.length += NFP_NET_META_FIELD_SIZE;
-		meta_data.header |= NFP_NET_META_VLAN;
-	}
-
-	if (meta_data.length == 0)
-		return;
-
-	meta_type = meta_data.header;
-	header_offset = meta_type << NFP_NET_META_NFDK_LENGTH;
-	meta_data.header = header_offset | meta_data.length;
-	meta_data.header = rte_cpu_to_be_32(meta_data.header);
-	meta = rte_pktmbuf_prepend(pkt, meta_data.length);
-	memcpy(meta, &meta_data.header, sizeof(meta_data.header));
-	meta += NFP_NET_META_HEADER_SIZE;
-
-	for (; meta_type != 0; meta_type >>= NFP_NET_META_FIELD_SIZE, layer++,
-			meta += NFP_NET_META_FIELD_SIZE) {
-		switch (meta_type & NFP_NET_META_FIELD_MASK) {
-		case NFP_NET_META_VLAN:
-			if (vlan_layer > 0) {
-				PMD_DRV_LOG(ERR, "At most 1 layers of vlan is supported");
-				return;
-			}
-			nfp_net_set_meta_vlan(&meta_data, pkt, layer);
-			vlan_layer++;
-			break;
-		default:
-			PMD_DRV_LOG(ERR, "The metadata type not supported");
-			return;
-		}
-
-		memcpy(meta, &meta_data.data[layer], sizeof(meta_data.data[layer]));
-	}
-
-	*metadata = NFDK_DESC_TX_CHAIN_META;
-}
-
-static int
-nfp_net_nfdk_tx_queue_setup(struct rte_eth_dev *dev,
-		uint16_t queue_idx,
-		uint16_t nb_desc,
-		unsigned int socket_id,
-		const struct rte_eth_txconf *tx_conf)
-{
-	int ret;
-	uint16_t min_tx_desc;
-	uint16_t max_tx_desc;
-	const struct rte_memzone *tz;
-	struct nfp_net_txq *txq;
-	uint16_t tx_free_thresh;
-	struct nfp_net_hw *hw;
-	uint32_t tx_desc_sz;
-
-	hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private);
-
-	PMD_INIT_FUNC_TRACE();
-
-	ret = nfp_net_tx_desc_limits(hw, &min_tx_desc, &max_tx_desc);
-	if (ret != 0)
-		return ret;
-
-	/* Validating number of descriptors */
-	tx_desc_sz = nb_desc * sizeof(struct nfp_net_nfdk_tx_desc);
-	if ((NFDK_TX_DESC_PER_SIMPLE_PKT * tx_desc_sz) % NFP_ALIGN_RING_DESC != 0 ||
-			(NFDK_TX_DESC_PER_SIMPLE_PKT * nb_desc) % NFDK_TX_DESC_BLOCK_CNT != 0 ||
-			nb_desc > max_tx_desc || nb_desc < min_tx_desc) {
-		PMD_DRV_LOG(ERR, "Wrong nb_desc value");
-		return -EINVAL;
-	}
-
-	tx_free_thresh = (uint16_t)((tx_conf->tx_free_thresh) ?
-			tx_conf->tx_free_thresh :
-			DEFAULT_TX_FREE_THRESH);
-
-	if (tx_free_thresh > (nb_desc)) {
-		PMD_DRV_LOG(ERR,
-				"tx_free_thresh must be less than the number of TX "
-				"descriptors. (tx_free_thresh=%u port=%d "
-				"queue=%d)", (unsigned int)tx_free_thresh,
-				dev->data->port_id, (int)queue_idx);
-		return -(EINVAL);
-	}
-
-	/*
-	 * Free memory prior to re-allocation if needed. This is the case after
-	 * calling nfp_net_stop
-	 */
-	if (dev->data->tx_queues[queue_idx]) {
-		PMD_TX_LOG(DEBUG, "Freeing memory prior to re-allocation %d",
-				queue_idx);
-		nfp_net_tx_queue_release(dev, queue_idx);
-		dev->data->tx_queues[queue_idx] = NULL;
-	}
-
-	/* Allocating tx queue data structure */
-	txq = rte_zmalloc_socket("ethdev TX queue", sizeof(struct nfp_net_txq),
-			RTE_CACHE_LINE_SIZE, socket_id);
-	if (txq == NULL) {
-		PMD_DRV_LOG(ERR, "Error allocating tx dma");
-		return -ENOMEM;
-	}
-
-	/*
-	 * Allocate TX ring hardware descriptors. A memzone large enough to
-	 * handle the maximum ring size is allocated in order to allow for
-	 * resizing in later calls to the queue setup function.
-	 */
-	tz = rte_eth_dma_zone_reserve(dev, "tx_ring", queue_idx,
-			sizeof(struct nfp_net_nfdk_tx_desc) *
-			NFDK_TX_DESC_PER_SIMPLE_PKT *
-			max_tx_desc, NFP_MEMZONE_ALIGN,
-			socket_id);
-	if (tz == NULL) {
-		PMD_DRV_LOG(ERR, "Error allocating tx dma");
-		nfp_net_tx_queue_release(dev, queue_idx);
-		return -ENOMEM;
-	}
-
-	txq->tx_count = nb_desc * NFDK_TX_DESC_PER_SIMPLE_PKT;
-	txq->tx_free_thresh = tx_free_thresh;
-	txq->tx_pthresh = tx_conf->tx_thresh.pthresh;
-	txq->tx_hthresh = tx_conf->tx_thresh.hthresh;
-	txq->tx_wthresh = tx_conf->tx_thresh.wthresh;
-
-	/* queue mapping based on firmware configuration */
-	txq->qidx = queue_idx;
-	txq->tx_qcidx = queue_idx * hw->stride_tx;
-	txq->qcp_q = hw->tx_bar + NFP_QCP_QUEUE_OFF(txq->tx_qcidx);
-
-	txq->port_id = dev->data->port_id;
-
-	/* Saving physical and virtual addresses for the TX ring */
-	txq->dma = (uint64_t)tz->iova;
-	txq->ktxds = (struct nfp_net_nfdk_tx_desc *)tz->addr;
-
-	/* mbuf pointers array for referencing mbufs linked to TX descriptors */
-	txq->txbufs = rte_zmalloc_socket("txq->txbufs",
-			sizeof(*txq->txbufs) * txq->tx_count,
-			RTE_CACHE_LINE_SIZE, socket_id);
-
-	if (txq->txbufs == NULL) {
-		nfp_net_tx_queue_release(dev, queue_idx);
-		return -ENOMEM;
-	}
-	PMD_TX_LOG(DEBUG, "txbufs=%p hw_ring=%p dma_addr=0x%" PRIx64,
-			txq->txbufs, txq->ktxds, (unsigned long)txq->dma);
-
-	nfp_net_reset_tx_queue(txq);
-
-	dev->data->tx_queues[queue_idx] = txq;
-	txq->hw = hw;
-	/*
-	 * Telling the HW about the physical address of the TX ring and number
-	 * of descriptors in log2 format
-	 */
-	nn_cfg_writeq(hw, NFP_NET_CFG_TXR_ADDR(queue_idx), txq->dma);
-	nn_cfg_writeb(hw, NFP_NET_CFG_TXR_SZ(queue_idx), rte_log2_u32(txq->tx_count));
-
-	return 0;
-}
-
 int
 nfp_net_tx_queue_setup(struct rte_eth_dev *dev,
 		uint16_t queue_idx,
@@ -973,328 +793,3 @@ nfp_net_tx_queue_setup(struct rte_eth_dev *dev,
 		return -EINVAL;
 	}
 }
-
-static inline uint32_t
-nfp_net_nfdk_free_tx_desc(struct nfp_net_txq *txq)
-{
-	uint32_t free_desc;
-
-	if (txq->wr_p >= txq->rd_p)
-		free_desc = txq->tx_count - (txq->wr_p - txq->rd_p);
-	else
-		free_desc = txq->rd_p - txq->wr_p;
-
-	return (free_desc > NFDK_TX_DESC_STOP_CNT) ?
-			(free_desc - NFDK_TX_DESC_STOP_CNT) : 0;
-}
-
-/*
- * nfp_net_nfdk_txq_full() - Check if the TX queue free descriptors
- * is below tx_free_threshold for firmware of nfdk
- *
- * @txq: TX queue to check
- *
- * This function uses the host copy* of read/write pointers.
- */
-static inline uint32_t
-nfp_net_nfdk_txq_full(struct nfp_net_txq *txq)
-{
-	return (nfp_net_nfdk_free_tx_desc(txq) < txq->tx_free_thresh);
-}
-
-static inline int
-nfp_net_nfdk_headlen_to_segs(unsigned int headlen)
-{
-	return DIV_ROUND_UP(headlen +
-			NFDK_TX_MAX_DATA_PER_DESC -
-			NFDK_TX_MAX_DATA_PER_HEAD,
-			NFDK_TX_MAX_DATA_PER_DESC);
-}
-
-static int
-nfp_net_nfdk_tx_maybe_close_block(struct nfp_net_txq *txq, struct rte_mbuf *pkt)
-{
-	unsigned int n_descs, wr_p, i, nop_slots;
-	struct rte_mbuf *pkt_temp;
-
-	pkt_temp = pkt;
-	n_descs = nfp_net_nfdk_headlen_to_segs(pkt_temp->data_len);
-	while (pkt_temp->next) {
-		pkt_temp = pkt_temp->next;
-		n_descs += DIV_ROUND_UP(pkt_temp->data_len, NFDK_TX_MAX_DATA_PER_DESC);
-	}
-
-	if (unlikely(n_descs > NFDK_TX_DESC_GATHER_MAX))
-		return -EINVAL;
-
-	/* Under count by 1 (don't count meta) for the round down to work out */
-	n_descs += !!(pkt->ol_flags & RTE_MBUF_F_TX_TCP_SEG);
-
-	if (round_down(txq->wr_p, NFDK_TX_DESC_BLOCK_CNT) !=
-			round_down(txq->wr_p + n_descs, NFDK_TX_DESC_BLOCK_CNT))
-		goto close_block;
-
-	if ((uint32_t)txq->data_pending + pkt->pkt_len > NFDK_TX_MAX_DATA_PER_BLOCK)
-		goto close_block;
-
-	return 0;
-
-close_block:
-	wr_p = txq->wr_p;
-	nop_slots = D_BLOCK_CPL(wr_p);
-
-	memset(&txq->ktxds[wr_p], 0, nop_slots * sizeof(struct nfp_net_nfdk_tx_desc));
-	for (i = wr_p; i < nop_slots + wr_p; i++) {
-		if (txq->txbufs[i].mbuf) {
-			rte_pktmbuf_free_seg(txq->txbufs[i].mbuf);
-			txq->txbufs[i].mbuf = NULL;
-		}
-	}
-	txq->data_pending = 0;
-	txq->wr_p = D_IDX(txq, txq->wr_p + nop_slots);
-
-	return nop_slots;
-}
-
-/* nfp_net_nfdk_tx_cksum() - Set TX CSUM offload flags in TX descriptor of nfdk */
-static inline uint64_t
-nfp_net_nfdk_tx_cksum(struct nfp_net_txq *txq, struct rte_mbuf *mb,
-		uint64_t flags)
-{
-	uint64_t ol_flags;
-	struct nfp_net_hw *hw = txq->hw;
-
-	if ((hw->cap & NFP_NET_CFG_CTRL_TXCSUM) == 0)
-		return flags;
-
-	ol_flags = mb->ol_flags;
-
-	/* Set TCP csum offload if TSO enabled. */
-	if (ol_flags & RTE_MBUF_F_TX_TCP_SEG)
-		flags |= NFDK_DESC_TX_L4_CSUM;
-
-	if (ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK)
-		flags |= NFDK_DESC_TX_ENCAP;
-
-	/* IPv6 does not need checksum */
-	if (ol_flags & RTE_MBUF_F_TX_IP_CKSUM)
-		flags |= NFDK_DESC_TX_L3_CSUM;
-
-	if (ol_flags & RTE_MBUF_F_TX_L4_MASK)
-		flags |= NFDK_DESC_TX_L4_CSUM;
-
-	return flags;
-}
-
-/* nfp_net_nfdk_tx_tso() - Set TX descriptor for TSO of nfdk */
-static inline uint64_t
-nfp_net_nfdk_tx_tso(struct nfp_net_txq *txq, struct rte_mbuf *mb)
-{
-	uint64_t ol_flags;
-	struct nfp_net_nfdk_tx_desc txd;
-	struct nfp_net_hw *hw = txq->hw;
-
-	if ((hw->cap & NFP_NET_CFG_CTRL_LSO_ANY) == 0)
-		goto clean_txd;
-
-	ol_flags = mb->ol_flags;
-
-	if ((ol_flags & RTE_MBUF_F_TX_TCP_SEG) == 0)
-		goto clean_txd;
-
-	txd.l3_offset = mb->l2_len;
-	txd.l4_offset = mb->l2_len + mb->l3_len;
-	txd.lso_meta_res = 0;
-	txd.mss = rte_cpu_to_le_16(mb->tso_segsz);
-	txd.lso_hdrlen = mb->l2_len + mb->l3_len + mb->l4_len;
-	txd.lso_totsegs = (mb->pkt_len + mb->tso_segsz) / mb->tso_segsz;
-
-	if (ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK) {
-		txd.l3_offset += mb->outer_l2_len + mb->outer_l3_len;
-		txd.l4_offset += mb->outer_l2_len + mb->outer_l3_len;
-		txd.lso_hdrlen += mb->outer_l2_len + mb->outer_l3_len;
-	}
-
-	return txd.raw;
-
-clean_txd:
-	txd.l3_offset = 0;
-	txd.l4_offset = 0;
-	txd.lso_hdrlen = 0;
-	txd.mss = 0;
-	txd.lso_totsegs = 0;
-	txd.lso_meta_res = 0;
-
-	return txd.raw;
-}
-
-uint16_t
-nfp_net_nfdk_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
-{
-	uint32_t buf_idx;
-	uint64_t dma_addr;
-	uint16_t free_descs;
-	uint32_t npkts = 0;
-	uint64_t metadata = 0;
-	uint16_t issued_descs = 0;
-	struct nfp_net_txq *txq;
-	struct nfp_net_hw *hw;
-	struct nfp_net_nfdk_tx_desc *ktxds;
-	struct rte_mbuf *pkt, *temp_pkt;
-	struct rte_mbuf **lmbuf;
-
-	txq = tx_queue;
-	hw = txq->hw;
-
-	PMD_TX_LOG(DEBUG, "working for queue %u at pos %d and %u packets",
-			txq->qidx, txq->wr_p, nb_pkts);
-
-	if ((nfp_net_nfdk_free_tx_desc(txq) < NFDK_TX_DESC_PER_SIMPLE_PKT *
-			nb_pkts) || (nfp_net_nfdk_txq_full(txq)))
-		nfp_net_tx_free_bufs(txq);
-
-	free_descs = (uint16_t)nfp_net_nfdk_free_tx_desc(txq);
-	if (unlikely(free_descs == 0))
-		return 0;
-
-	PMD_TX_LOG(DEBUG, "queue: %u. Sending %u packets", txq->qidx, nb_pkts);
-	/* Sending packets */
-	while ((npkts < nb_pkts) && free_descs) {
-		uint32_t type, dma_len, dlen_type, tmp_dlen;
-		int nop_descs, used_descs;
-
-		pkt = *(tx_pkts + npkts);
-		nop_descs = nfp_net_nfdk_tx_maybe_close_block(txq, pkt);
-		if (nop_descs < 0)
-			goto xmit_end;
-
-		issued_descs += nop_descs;
-		ktxds = &txq->ktxds[txq->wr_p];
-		/* Grabbing the mbuf linked to the current descriptor */
-		buf_idx = txq->wr_p;
-		lmbuf = &txq->txbufs[buf_idx++].mbuf;
-		/* Warming the cache for releasing the mbuf later on */
-		RTE_MBUF_PREFETCH_TO_FREE(*lmbuf);
-
-		temp_pkt = pkt;
-		nfp_net_nfdk_set_meta_data(pkt, txq, &metadata);
-
-		if (unlikely(pkt->nb_segs > 1 &&
-				!(hw->cap & NFP_NET_CFG_CTRL_GATHER))) {
-			PMD_INIT_LOG(ERR, "Multisegment packet not supported");
-			goto xmit_end;
-		}
-
-		/*
-		 * Checksum and VLAN flags just in the first descriptor for a
-		 * multisegment packet, but TSO info needs to be in all of them.
-		 */
-
-		dma_len = pkt->data_len;
-		if ((hw->cap & NFP_NET_CFG_CTRL_LSO_ANY) &&
-				(pkt->ol_flags & RTE_MBUF_F_TX_TCP_SEG)) {
-			type = NFDK_DESC_TX_TYPE_TSO;
-		} else if (pkt->next == NULL && dma_len <= NFDK_TX_MAX_DATA_PER_HEAD) {
-			type = NFDK_DESC_TX_TYPE_SIMPLE;
-		} else {
-			type = NFDK_DESC_TX_TYPE_GATHER;
-		}
-
-		/* Implicitly truncates to chunk in below logic */
-		dma_len -= 1;
-
-		/*
-		 * We will do our best to pass as much data as we can in descriptor
-		 * and we need to make sure the first descriptor includes whole
-		 * head since there is limitation in firmware side. Sometimes the
-		 * value of 'dma_len & NFDK_DESC_TX_DMA_LEN_HEAD' will be less
-		 * than packet head len.
-		 */
-		dlen_type = (dma_len > NFDK_DESC_TX_DMA_LEN_HEAD ?
-				NFDK_DESC_TX_DMA_LEN_HEAD : dma_len) |
-				(NFDK_DESC_TX_TYPE_HEAD & (type << 12));
-		ktxds->dma_len_type = rte_cpu_to_le_16(dlen_type);
-		dma_addr = rte_mbuf_data_iova(pkt);
-		PMD_TX_LOG(DEBUG, "Working with mbuf at dma address:"
-				"%" PRIx64 "", dma_addr);
-		ktxds->dma_addr_hi = rte_cpu_to_le_16(dma_addr >> 32);
-		ktxds->dma_addr_lo = rte_cpu_to_le_32(dma_addr & 0xffffffff);
-		ktxds++;
-
-		/*
-		 * Preserve the original dlen_type, this way below the EOP logic
-		 * can use dlen_type.
-		 */
-		tmp_dlen = dlen_type & NFDK_DESC_TX_DMA_LEN_HEAD;
-		dma_len -= tmp_dlen;
-		dma_addr += tmp_dlen + 1;
-
-		/*
-		 * The rest of the data (if any) will be in larger DMA descriptors
-		 * and is handled with the dma_len loop.
-		 */
-		while (pkt) {
-			if (*lmbuf)
-				rte_pktmbuf_free_seg(*lmbuf);
-			*lmbuf = pkt;
-			while (dma_len > 0) {
-				dma_len -= 1;
-				dlen_type = NFDK_DESC_TX_DMA_LEN & dma_len;
-
-				ktxds->dma_len_type = rte_cpu_to_le_16(dlen_type);
-				ktxds->dma_addr_hi = rte_cpu_to_le_16(dma_addr >> 32);
-				ktxds->dma_addr_lo = rte_cpu_to_le_32(dma_addr & 0xffffffff);
-				ktxds++;
-
-				dma_len -= dlen_type;
-				dma_addr += dlen_type + 1;
-			}
-
-			if (pkt->next == NULL)
-				break;
-
-			pkt = pkt->next;
-			dma_len = pkt->data_len;
-			dma_addr = rte_mbuf_data_iova(pkt);
-			PMD_TX_LOG(DEBUG, "Working with mbuf at dma address:"
-					"%" PRIx64 "", dma_addr);
-
-			lmbuf = &txq->txbufs[buf_idx++].mbuf;
-		}
-
-		(ktxds - 1)->dma_len_type = rte_cpu_to_le_16(dlen_type | NFDK_DESC_TX_EOP);
-
-		ktxds->raw = rte_cpu_to_le_64(nfp_net_nfdk_tx_cksum(txq, temp_pkt, metadata));
-		ktxds++;
-
-		if ((hw->cap & NFP_NET_CFG_CTRL_LSO_ANY) &&
-				(temp_pkt->ol_flags & RTE_MBUF_F_TX_TCP_SEG)) {
-			ktxds->raw = rte_cpu_to_le_64(nfp_net_nfdk_tx_tso(txq, temp_pkt));
-			ktxds++;
-		}
-
-		used_descs = ktxds - txq->ktxds - txq->wr_p;
-		if (round_down(txq->wr_p, NFDK_TX_DESC_BLOCK_CNT) !=
-				round_down(txq->wr_p + used_descs - 1, NFDK_TX_DESC_BLOCK_CNT)) {
-			PMD_INIT_LOG(INFO, "Used descs cross block boundary");
-			goto xmit_end;
-		}
-
-		txq->wr_p = D_IDX(txq, txq->wr_p + used_descs);
-		if (txq->wr_p % NFDK_TX_DESC_BLOCK_CNT)
-			txq->data_pending += temp_pkt->pkt_len;
-		else
-			txq->data_pending = 0;
-
-		issued_descs += used_descs;
-		npkts++;
-		free_descs = (uint16_t)nfp_net_nfdk_free_tx_desc(txq);
-	}
-
-xmit_end:
-	/* Increment write pointers. Force memory write before we let HW know */
-	rte_wmb();
-	nfp_qcp_ptr_add(txq->qcp_q, NFP_QCP_WRITE_PTR, issued_descs);
-
-	return npkts;
-}
diff --git a/drivers/net/nfp/nfp_rxtx.h b/drivers/net/nfp/nfp_rxtx.h
index 6c81a98ae0..4d0c88529b 100644
--- a/drivers/net/nfp/nfp_rxtx.h
+++ b/drivers/net/nfp/nfp_rxtx.h
@@ -96,59 +96,7 @@ struct nfp_meta_parsed {
 /* Descriptor alignment */
 #define NFP_ALIGN_RING_DESC 128
 
-#define NFDK_TX_MAX_DATA_PER_HEAD 0x00001000
-#define NFDK_DESC_TX_DMA_LEN_HEAD 0x0fff
-#define NFDK_DESC_TX_TYPE_HEAD 0xf000
-#define NFDK_DESC_TX_DMA_LEN 0x3fff
-#define NFDK_TX_DESC_PER_SIMPLE_PKT 2
-#define NFDK_DESC_TX_TYPE_TSO 2
-#define NFDK_DESC_TX_TYPE_SIMPLE 8
-#define NFDK_DESC_TX_TYPE_GATHER 1
-#define NFDK_DESC_TX_EOP RTE_BIT32(14)
-#define NFDK_DESC_TX_CHAIN_META RTE_BIT32(3)
-#define NFDK_DESC_TX_ENCAP RTE_BIT32(2)
-#define NFDK_DESC_TX_L4_CSUM RTE_BIT32(1)
-#define NFDK_DESC_TX_L3_CSUM RTE_BIT32(0)
-
-#define NFDK_TX_MAX_DATA_PER_DESC 0x00004000
-#define NFDK_TX_DESC_GATHER_MAX 17
 #define DIV_ROUND_UP(n, d) (((n) + (d) - 1) / (d))
-#define NFDK_TX_DESC_BLOCK_SZ 256
-#define NFDK_TX_DESC_BLOCK_CNT (NFDK_TX_DESC_BLOCK_SZ / \
-				sizeof(struct nfp_net_nfdk_tx_desc))
-#define NFDK_TX_DESC_STOP_CNT (NFDK_TX_DESC_BLOCK_CNT * \
-				NFDK_TX_DESC_PER_SIMPLE_PKT)
-#define NFDK_TX_MAX_DATA_PER_BLOCK 0x00010000
-#define D_BLOCK_CPL(idx) (NFDK_TX_DESC_BLOCK_CNT - \
-			(idx) % NFDK_TX_DESC_BLOCK_CNT)
-#define D_IDX(ring, idx) ((idx) & ((ring)->tx_count - 1))
-
-struct nfp_net_nfdk_tx_desc {
-	union {
-		struct {
-			__le16 dma_addr_hi;  /* High bits of host buf address */
-			__le16 dma_len_type; /* Length to DMA for this desc */
-			__le32 dma_addr_lo;  /* Low 32bit of host buf addr */
-		};
-
-		struct {
-			__le16 mss;          /* MSS to be used for LSO */
-			uint8_t lso_hdrlen;  /* LSO, TCP payload offset */
-			uint8_t lso_totsegs; /* LSO, total segments */
-			uint8_t l3_offset;   /* L3 header offset */
-			uint8_t l4_offset;   /* L4 header offset */
-			__le16 lso_meta_res; /* Rsvd bits in TSO metadata */
-		};
-
-		struct {
-			uint8_t flags;       /* TX Flags, see @NFDK_DESC_TX_* */
-			uint8_t reserved[7]; /* meta byte placeholder */
-		};
-
-		__le32 vals[2];
-		__le64 raw;
-	};
-};
 
 struct nfp_net_txq {
 	struct nfp_net_hw *hw; /* Backpointer to nfp_net structure */
@@ -396,9 +344,6 @@ int nfp_net_tx_queue_setup(struct rte_eth_dev *dev,
 		uint16_t nb_desc,
 		unsigned int socket_id,
 		const struct rte_eth_txconf *tx_conf);
-uint16_t nfp_net_nfdk_xmit_pkts(void *tx_queue,
-		struct rte_mbuf **tx_pkts,
-		uint16_t nb_pkts);
 int nfp_net_tx_free_bufs(struct nfp_net_txq *txq);
 void nfp_net_set_meta_vlan(struct nfp_net_meta_raw *meta_data,
 		struct rte_mbuf *pkt,
-- 
2.39.1