From mboxrd@z Thu Jan 1 00:00:00 1970
From: Chaoyong He
To: dev@dpdk.org
Cc: oss-drivers@corigine.com, Chaoyong He, Long Wu, Peng Zhang
Subject: [PATCH v4 12/24] net/nfp: rename parameter in related logic
Date: Fri, 27 Oct 2023 10:59:49 +0800
Message-Id: <20231027030001.602639-13-chaoyong.he@corigine.com>
X-Mailer: git-send-email 2.39.1
In-Reply-To: <20231027030001.602639-1-chaoyong.he@corigine.com>
References: <20231026064324.177531-1-chaoyong.he@corigine.com> <20231027030001.602639-1-chaoyong.he@corigine.com>
Content-Transfer-Encoding: 8bit
Content-Type: text/plain
MIME-Version: 1.0

Rename the parameter 'hw' to 'net_hw' in the related logic, to make the
name more accurate.

Signed-off-by: Chaoyong He
Reviewed-by: Long Wu
Reviewed-by: Peng Zhang
---
 drivers/net/nfp/flower/nfp_flower.c |   8 +--
 drivers/net/nfp/nfp_ethdev.c        |  86 +++++++++++------------
 drivers/net/nfp/nfp_ethdev_vf.c     |  62 +++++++++--------
 drivers/net/nfp/nfp_ipsec.c         |  82 +++++++++++-----------
 drivers/net/nfp/nfp_net_common.c    | 102 ++++++++++++++++------------
 drivers/net/nfp/nfp_net_ctrl.c      |  14 ++--
 drivers/net/nfp/nfp_rxtx.c          |  16 ++---
 7 files changed, 193 insertions(+), 177 deletions(-)

diff --git a/drivers/net/nfp/flower/nfp_flower.c b/drivers/net/nfp/flower/nfp_flower.c index 831f4a7265..f3fedbf7e5 100644 --- a/drivers/net/nfp/flower/nfp_flower.c +++ b/drivers/net/nfp/flower/nfp_flower.c @@ -25,18 +25,18 @@ static void nfp_pf_repr_enable_queues(struct rte_eth_dev *dev) { uint16_t i; - struct nfp_net_hw *hw; + struct nfp_hw *hw; uint64_t enabled_queues = 0; struct nfp_flower_representor *repr; repr = dev->data->dev_private; - hw = repr->app_fw_flower->pf_hw; + hw = &repr->app_fw_flower->pf_hw->super; /* Enabling the required TX queues in the device */ for (i = 0; i < dev->data->nb_tx_queues; i++) enabled_queues |= (1 << i); - nn_cfg_writeq(&hw->super, NFP_NET_CFG_TXRS_ENABLE, enabled_queues); + nn_cfg_writeq(hw, NFP_NET_CFG_TXRS_ENABLE, enabled_queues); enabled_queues = 0; @@ -44,7 +44,7 @@ nfp_pf_repr_enable_queues(struct rte_eth_dev *dev) for (i = 0; i < dev->data->nb_rx_queues; i++) enabled_queues |= (1 << i); - nn_cfg_writeq(&hw->super, NFP_NET_CFG_RXRS_ENABLE, enabled_queues); + nn_cfg_writeq(hw, NFP_NET_CFG_RXRS_ENABLE, enabled_queues); } static void diff --git a/drivers/net/nfp/nfp_ethdev.c b/drivers/net/nfp/nfp_ethdev.c index a93742a205..3d4b78fbf1 100644 --- a/drivers/net/nfp/nfp_ethdev.c +++ b/drivers/net/nfp/nfp_ethdev.c @@ -479,11 +479,11 @@ nfp_net_init(struct rte_eth_dev *eth_dev) uint16_t port; uint64_t rx_base; uint64_t tx_base; - struct nfp_net_hw *hw; + struct nfp_hw *hw; + struct nfp_net_hw *net_hw; struct nfp_pf_dev *pf_dev; struct rte_pci_device *pci_dev; struct nfp_app_fw_nic *app_fw_nic; - struct rte_ether_addr *tmp_ether_addr; pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev); @@ -503,46 +503,47 @@ nfp_net_init(struct rte_eth_dev *eth_dev) * Use PF array of physical ports to get
pointer to * this specific port. */ - hw = app_fw_nic->ports[port]; + net_hw = app_fw_nic->ports[port]; + hw = &net_hw->super; PMD_INIT_LOG(DEBUG, "Working with physical port number: %hu, " - "NFP internal port number: %d", port, hw->nfp_idx); + "NFP internal port number: %d", port, net_hw->nfp_idx); rte_eth_copy_pci_info(eth_dev, pci_dev); - hw->super.ctrl_bar = pci_dev->mem_resource[0].addr; - if (hw->super.ctrl_bar == NULL) { - PMD_DRV_LOG(ERR, "hw->super.ctrl_bar is NULL. BAR0 not configured"); + hw->ctrl_bar = pci_dev->mem_resource[0].addr; + if (hw->ctrl_bar == NULL) { + PMD_DRV_LOG(ERR, "hw->ctrl_bar is NULL. BAR0 not configured"); return -ENODEV; } if (port == 0) { uint32_t min_size; - hw->super.ctrl_bar = pf_dev->ctrl_bar; - min_size = NFP_MAC_STATS_SIZE * hw->pf_dev->nfp_eth_table->max_index; - hw->mac_stats_bar = nfp_rtsym_map(hw->pf_dev->sym_tbl, "_mac_stats", - min_size, &hw->mac_stats_area); - if (hw->mac_stats_bar == NULL) { + hw->ctrl_bar = pf_dev->ctrl_bar; + min_size = NFP_MAC_STATS_SIZE * net_hw->pf_dev->nfp_eth_table->max_index; + net_hw->mac_stats_bar = nfp_rtsym_map(net_hw->pf_dev->sym_tbl, "_mac_stats", + min_size, &net_hw->mac_stats_area); + if (net_hw->mac_stats_bar == NULL) { PMD_INIT_LOG(ERR, "nfp_rtsym_map fails for _mac_stats_bar"); return -EIO; } - hw->mac_stats = hw->mac_stats_bar; + net_hw->mac_stats = net_hw->mac_stats_bar; } else { if (pf_dev->ctrl_bar == NULL) return -ENODEV; /* Use port offset in pf ctrl_bar for this ports control bar */ - hw->super.ctrl_bar = pf_dev->ctrl_bar + (port * NFP_NET_CFG_BAR_SZ); - hw->mac_stats = app_fw_nic->ports[0]->mac_stats_bar + - (hw->nfp_idx * NFP_MAC_STATS_SIZE); + hw->ctrl_bar = pf_dev->ctrl_bar + (port * NFP_NET_CFG_BAR_SZ); + net_hw->mac_stats = app_fw_nic->ports[0]->mac_stats_bar + + (net_hw->nfp_idx * NFP_MAC_STATS_SIZE); } - PMD_INIT_LOG(DEBUG, "ctrl bar: %p", hw->super.ctrl_bar); - PMD_INIT_LOG(DEBUG, "MAC stats: %p", hw->mac_stats); + PMD_INIT_LOG(DEBUG, "ctrl bar: %p", hw->ctrl_bar); + PMD_INIT_LOG(DEBUG, "MAC stats: %p", net_hw->mac_stats); - err = nfp_net_common_init(pci_dev, hw); + err = nfp_net_common_init(pci_dev, net_hw); if (err != 0) return err; @@ -558,38 +559,38 @@ nfp_net_init(struct rte_eth_dev *eth_dev) return err; } - nfp_net_ethdev_ops_mount(hw, eth_dev); + nfp_net_ethdev_ops_mount(net_hw, eth_dev); - hw->eth_xstats_base = rte_malloc("rte_eth_xstat", sizeof(struct rte_eth_xstat) * + net_hw->eth_xstats_base = rte_malloc("rte_eth_xstat", sizeof(struct rte_eth_xstat) * nfp_net_xstats_size(eth_dev), 0); - if (hw->eth_xstats_base == NULL) { + if (net_hw->eth_xstats_base == NULL) { PMD_INIT_LOG(ERR, "no memory for xstats base values on device %s!", pci_dev->device.name); return -ENOMEM; } /* Work out where in the BAR the queues start. 
*/ - tx_base = nn_cfg_readl(&hw->super, NFP_NET_CFG_START_TXQ); - rx_base = nn_cfg_readl(&hw->super, NFP_NET_CFG_START_RXQ); + tx_base = nn_cfg_readl(hw, NFP_NET_CFG_START_TXQ); + rx_base = nn_cfg_readl(hw, NFP_NET_CFG_START_RXQ); - hw->tx_bar = pf_dev->qc_bar + tx_base * NFP_QCP_QUEUE_ADDR_SZ; - hw->rx_bar = pf_dev->qc_bar + rx_base * NFP_QCP_QUEUE_ADDR_SZ; - eth_dev->data->dev_private = hw; + net_hw->tx_bar = pf_dev->qc_bar + tx_base * NFP_QCP_QUEUE_ADDR_SZ; + net_hw->rx_bar = pf_dev->qc_bar + rx_base * NFP_QCP_QUEUE_ADDR_SZ; + eth_dev->data->dev_private = net_hw; PMD_INIT_LOG(DEBUG, "ctrl_bar: %p, tx_bar: %p, rx_bar: %p", - hw->super.ctrl_bar, hw->tx_bar, hw->rx_bar); + hw->ctrl_bar, net_hw->tx_bar, net_hw->rx_bar); - nfp_net_cfg_queue_setup(hw); - hw->mtu = RTE_ETHER_MTU; + nfp_net_cfg_queue_setup(net_hw); + net_hw->mtu = RTE_ETHER_MTU; /* VLAN insertion is incompatible with LSOv2 */ - if ((hw->super.cap & NFP_NET_CFG_CTRL_LSO2) != 0) - hw->super.cap &= ~NFP_NET_CFG_CTRL_TXVLAN; + if ((hw->cap & NFP_NET_CFG_CTRL_LSO2) != 0) + hw->cap &= ~NFP_NET_CFG_CTRL_TXVLAN; - nfp_net_log_device_information(hw); + nfp_net_log_device_information(net_hw); /* Initializing spinlock for reconfigs */ - rte_spinlock_init(&hw->super.reconfig_lock); + rte_spinlock_init(&hw->reconfig_lock); /* Allocating memory for mac addr */ eth_dev->data->mac_addrs = rte_zmalloc("mac_addr", RTE_ETHER_ADDR_LEN, 0); @@ -599,20 +600,19 @@ nfp_net_init(struct rte_eth_dev *eth_dev) } nfp_net_pf_read_mac(app_fw_nic, port); - nfp_net_write_mac(&hw->super, &hw->super.mac_addr.addr_bytes[0]); + nfp_net_write_mac(hw, &hw->mac_addr.addr_bytes[0]); - tmp_ether_addr = &hw->super.mac_addr; - if (rte_is_valid_assigned_ether_addr(tmp_ether_addr) == 0) { + if (rte_is_valid_assigned_ether_addr(&hw->mac_addr) == 0) { PMD_INIT_LOG(INFO, "Using random mac address for port %d", port); /* Using random mac addresses for VFs */ - rte_eth_random_addr(&hw->super.mac_addr.addr_bytes[0]); - nfp_net_write_mac(&hw->super, &hw->super.mac_addr.addr_bytes[0]); + rte_eth_random_addr(&hw->mac_addr.addr_bytes[0]); + nfp_net_write_mac(hw, &hw->mac_addr.addr_bytes[0]); } /* Copying mac address to DPDK eth_dev struct */ - rte_ether_addr_copy(&hw->super.mac_addr, eth_dev->data->mac_addrs); + rte_ether_addr_copy(&hw->mac_addr, eth_dev->data->mac_addrs); - if ((hw->super.cap & NFP_NET_CFG_CTRL_LIVE_ADDR) == 0) + if ((hw->cap & NFP_NET_CFG_CTRL_LIVE_ADDR) == 0) eth_dev->data->dev_flags |= RTE_ETH_DEV_NOLIVE_MAC_ADDR; eth_dev->data->dev_flags |= RTE_ETH_DEV_AUTOFILL_QUEUE_XSTATS; @@ -621,13 +621,13 @@ nfp_net_init(struct rte_eth_dev *eth_dev) "mac=" RTE_ETHER_ADDR_PRT_FMT, eth_dev->data->port_id, pci_dev->id.vendor_id, pci_dev->id.device_id, - RTE_ETHER_ADDR_BYTES(&hw->super.mac_addr)); + RTE_ETHER_ADDR_BYTES(&hw->mac_addr)); /* Registering LSC interrupt handler */ rte_intr_callback_register(pci_dev->intr_handle, nfp_net_dev_interrupt_handler, (void *)eth_dev); /* Telling the firmware about the LSC interrupt entry */ - nn_cfg_writeb(&hw->super, NFP_NET_CFG_LSC, NFP_NET_IRQ_LSC_IDX); + nn_cfg_writeb(hw, NFP_NET_CFG_LSC, NFP_NET_IRQ_LSC_IDX); /* Recording current stats counters values */ nfp_net_stats_reset(eth_dev); diff --git a/drivers/net/nfp/nfp_ethdev_vf.c b/drivers/net/nfp/nfp_ethdev_vf.c index dead6ca5ab..049728d30c 100644 --- a/drivers/net/nfp/nfp_ethdev_vf.c +++ b/drivers/net/nfp/nfp_ethdev_vf.c @@ -254,7 +254,8 @@ nfp_netvf_init(struct rte_eth_dev *eth_dev) int err; uint16_t port; uint32_t start_q; - struct nfp_net_hw *hw; + struct nfp_hw *hw; + struct 
nfp_net_hw *net_hw; uint64_t tx_bar_off = 0; uint64_t rx_bar_off = 0; struct rte_pci_device *pci_dev; @@ -269,22 +270,23 @@ nfp_netvf_init(struct rte_eth_dev *eth_dev) return -ENODEV; } - hw = NFP_NET_DEV_PRIVATE_TO_HW(eth_dev->data->dev_private); - hw->dev_info = dev_info; + net_hw = NFP_NET_DEV_PRIVATE_TO_HW(eth_dev->data->dev_private); + net_hw->dev_info = dev_info; + hw = &net_hw->super; - hw->super.ctrl_bar = pci_dev->mem_resource[0].addr; - if (hw->super.ctrl_bar == NULL) { + hw->ctrl_bar = pci_dev->mem_resource[0].addr; + if (hw->ctrl_bar == NULL) { PMD_DRV_LOG(ERR, "hw->super.ctrl_bar is NULL. BAR0 not configured"); return -ENODEV; } - PMD_INIT_LOG(DEBUG, "ctrl bar: %p", hw->super.ctrl_bar); + PMD_INIT_LOG(DEBUG, "ctrl bar: %p", hw->ctrl_bar); - err = nfp_net_common_init(pci_dev, hw); + err = nfp_net_common_init(pci_dev, net_hw); if (err != 0) return err; - nfp_netvf_ethdev_ops_mount(hw, eth_dev); + nfp_netvf_ethdev_ops_mount(net_hw, eth_dev); /* For secondary processes, the primary has done all the work */ if (rte_eal_process_type() != RTE_PROC_PRIMARY) @@ -292,37 +294,37 @@ nfp_netvf_init(struct rte_eth_dev *eth_dev) rte_eth_copy_pci_info(eth_dev, pci_dev); - hw->eth_xstats_base = rte_malloc("rte_eth_xstat", + net_hw->eth_xstats_base = rte_malloc("rte_eth_xstat", sizeof(struct rte_eth_xstat) * nfp_net_xstats_size(eth_dev), 0); - if (hw->eth_xstats_base == NULL) { + if (net_hw->eth_xstats_base == NULL) { PMD_INIT_LOG(ERR, "No memory for xstats base values on device %s!", pci_dev->device.name); return -ENOMEM; } /* Work out where in the BAR the queues start. */ - start_q = nn_cfg_readl(&hw->super, NFP_NET_CFG_START_TXQ); + start_q = nn_cfg_readl(hw, NFP_NET_CFG_START_TXQ); tx_bar_off = nfp_qcp_queue_offset(dev_info, start_q); - start_q = nn_cfg_readl(&hw->super, NFP_NET_CFG_START_RXQ); + start_q = nn_cfg_readl(hw, NFP_NET_CFG_START_RXQ); rx_bar_off = nfp_qcp_queue_offset(dev_info, start_q); - hw->tx_bar = (uint8_t *)pci_dev->mem_resource[2].addr + tx_bar_off; - hw->rx_bar = (uint8_t *)pci_dev->mem_resource[2].addr + rx_bar_off; + net_hw->tx_bar = (uint8_t *)pci_dev->mem_resource[2].addr + tx_bar_off; + net_hw->rx_bar = (uint8_t *)pci_dev->mem_resource[2].addr + rx_bar_off; PMD_INIT_LOG(DEBUG, "ctrl_bar: %p, tx_bar: %p, rx_bar: %p", - hw->super.ctrl_bar, hw->tx_bar, hw->rx_bar); + hw->ctrl_bar, net_hw->tx_bar, net_hw->rx_bar); - nfp_net_cfg_queue_setup(hw); - hw->mtu = RTE_ETHER_MTU; + nfp_net_cfg_queue_setup(net_hw); + net_hw->mtu = RTE_ETHER_MTU; /* VLAN insertion is incompatible with LSOv2 */ - if ((hw->super.cap & NFP_NET_CFG_CTRL_LSO2) != 0) - hw->super.cap &= ~NFP_NET_CFG_CTRL_TXVLAN; + if ((hw->cap & NFP_NET_CFG_CTRL_LSO2) != 0) + hw->cap &= ~NFP_NET_CFG_CTRL_TXVLAN; - nfp_net_log_device_information(hw); + nfp_net_log_device_information(net_hw); /* Initializing spinlock for reconfigs */ - rte_spinlock_init(&hw->super.reconfig_lock); + rte_spinlock_init(&hw->reconfig_lock); /* Allocating memory for mac addr */ eth_dev->data->mac_addrs = rte_zmalloc("mac_addr", RTE_ETHER_ADDR_LEN, 0); @@ -332,18 +334,18 @@ nfp_netvf_init(struct rte_eth_dev *eth_dev) goto dev_err_ctrl_map; } - nfp_netvf_read_mac(&hw->super); - if (rte_is_valid_assigned_ether_addr(&hw->super.mac_addr) == 0) { + nfp_netvf_read_mac(hw); + if (rte_is_valid_assigned_ether_addr(&hw->mac_addr) == 0) { PMD_INIT_LOG(INFO, "Using random mac address for port %hu", port); /* Using random mac addresses for VFs */ - rte_eth_random_addr(&hw->super.mac_addr.addr_bytes[0]); - nfp_net_write_mac(&hw->super, 
&hw->super.mac_addr.addr_bytes[0]); + rte_eth_random_addr(&hw->mac_addr.addr_bytes[0]); + nfp_net_write_mac(hw, &hw->mac_addr.addr_bytes[0]); } /* Copying mac address to DPDK eth_dev struct */ - rte_ether_addr_copy(&hw->super.mac_addr, eth_dev->data->mac_addrs); + rte_ether_addr_copy(&hw->mac_addr, eth_dev->data->mac_addrs); - if ((hw->super.cap & NFP_NET_CFG_CTRL_LIVE_ADDR) == 0) + if ((hw->cap & NFP_NET_CFG_CTRL_LIVE_ADDR) == 0) eth_dev->data->dev_flags |= RTE_ETH_DEV_NOLIVE_MAC_ADDR; eth_dev->data->dev_flags |= RTE_ETH_DEV_AUTOFILL_QUEUE_XSTATS; @@ -352,14 +354,14 @@ nfp_netvf_init(struct rte_eth_dev *eth_dev) "mac=" RTE_ETHER_ADDR_PRT_FMT, port, pci_dev->id.vendor_id, pci_dev->id.device_id, - RTE_ETHER_ADDR_BYTES(&hw->super.mac_addr)); + RTE_ETHER_ADDR_BYTES(&hw->mac_addr)); if (rte_eal_process_type() == RTE_PROC_PRIMARY) { /* Registering LSC interrupt handler */ rte_intr_callback_register(pci_dev->intr_handle, nfp_net_dev_interrupt_handler, (void *)eth_dev); /* Telling the firmware about the LSC interrupt entry */ - nn_cfg_writeb(&hw->super, NFP_NET_CFG_LSC, NFP_NET_IRQ_LSC_IDX); + nn_cfg_writeb(hw, NFP_NET_CFG_LSC, NFP_NET_IRQ_LSC_IDX); /* Recording current stats counters values */ nfp_net_stats_reset(eth_dev); } @@ -367,7 +369,7 @@ nfp_netvf_init(struct rte_eth_dev *eth_dev) return 0; dev_err_ctrl_map: - nfp_cpp_area_free(hw->ctrl_area); + nfp_cpp_area_free(net_hw->ctrl_area); return err; } diff --git a/drivers/net/nfp/nfp_ipsec.c b/drivers/net/nfp/nfp_ipsec.c index 0da5c2a3d2..7e26977dc1 100644 --- a/drivers/net/nfp/nfp_ipsec.c +++ b/drivers/net/nfp/nfp_ipsec.c @@ -434,7 +434,7 @@ enum nfp_ipsec_df_type { }; static int -nfp_ipsec_cfg_cmd_issue(struct nfp_net_hw *hw, +nfp_ipsec_cfg_cmd_issue(struct nfp_net_hw *net_hw, struct nfp_ipsec_msg *msg) { int ret; @@ -445,9 +445,9 @@ nfp_ipsec_cfg_cmd_issue(struct nfp_net_hw *hw, msg->rsp = NFP_IPSEC_CFG_MSG_OK; for (i = 0; i < msg_size; i++) - nn_cfg_writel(&hw->super, NFP_NET_CFG_MBOX_VAL + 4 * i, msg->raw[i]); + nn_cfg_writel(&net_hw->super, NFP_NET_CFG_MBOX_VAL + 4 * i, msg->raw[i]); - ret = nfp_net_mbox_reconfig(hw, NFP_NET_CFG_MBOX_CMD_IPSEC); + ret = nfp_net_mbox_reconfig(net_hw, NFP_NET_CFG_MBOX_CMD_IPSEC); if (ret < 0) { PMD_DRV_LOG(ERR, "Failed to IPsec reconfig mbox"); return ret; @@ -459,7 +459,7 @@ nfp_ipsec_cfg_cmd_issue(struct nfp_net_hw *hw, * response. One example where the data is needed is for statistics. 
*/ for (i = 0; i < msg_size; i++) - msg->raw[i] = nn_cfg_readl(&hw->super, NFP_NET_CFG_MBOX_VAL + 4 * i); + msg->raw[i] = nn_cfg_readl(&net_hw->super, NFP_NET_CFG_MBOX_VAL + 4 * i); switch (msg->rsp) { case NFP_IPSEC_CFG_MSG_OK: @@ -577,10 +577,10 @@ nfp_aead_map(struct rte_eth_dev *eth_dev, uint32_t device_id; const char *iv_str; const uint32_t *key; - struct nfp_net_hw *hw; + struct nfp_net_hw *net_hw; - hw = NFP_NET_DEV_PRIVATE_TO_HW(eth_dev->data->dev_private); - device_id = hw->device_id; + net_hw = NFP_NET_DEV_PRIVATE_TO_HW(eth_dev->data->dev_private); + device_id = net_hw->device_id; offset = 0; switch (aead->algo) { @@ -665,10 +665,10 @@ nfp_cipher_map(struct rte_eth_dev *eth_dev, uint32_t i; uint32_t device_id; const uint32_t *key; - struct nfp_net_hw *hw; + struct nfp_net_hw *net_hw; - hw = NFP_NET_DEV_PRIVATE_TO_HW(eth_dev->data->dev_private); - device_id = hw->device_id; + net_hw = NFP_NET_DEV_PRIVATE_TO_HW(eth_dev->data->dev_private); + device_id = net_hw->device_id; switch (cipher->algo) { case RTE_CRYPTO_CIPHER_NULL: @@ -801,15 +801,15 @@ nfp_auth_map(struct rte_eth_dev *eth_dev, uint8_t key_length; uint32_t device_id; const uint32_t *key; - struct nfp_net_hw *hw; + struct nfp_net_hw *net_hw; if (digest_length == 0) { PMD_DRV_LOG(ERR, "Auth digest length is illegal!"); return -EINVAL; } - hw = NFP_NET_DEV_PRIVATE_TO_HW(eth_dev->data->dev_private); - device_id = hw->device_id; + net_hw = NFP_NET_DEV_PRIVATE_TO_HW(eth_dev->data->dev_private); + device_id = net_hw->device_id; digest_length = digest_length << 3; switch (auth->algo) { @@ -1068,7 +1068,7 @@ nfp_crypto_create_session(void *device, { int ret; int sa_idx; - struct nfp_net_hw *hw; + struct nfp_net_hw *net_hw; struct nfp_ipsec_msg msg; struct rte_eth_dev *eth_dev; struct nfp_ipsec_session *priv_session; @@ -1082,14 +1082,14 @@ nfp_crypto_create_session(void *device, sa_idx = -1; eth_dev = device; priv_session = SECURITY_GET_SESS_PRIV(session); - hw = NFP_NET_DEV_PRIVATE_TO_HW(eth_dev->data->dev_private); + net_hw = NFP_NET_DEV_PRIVATE_TO_HW(eth_dev->data->dev_private); - if (hw->ipsec_data->sa_free_cnt == 0) { + if (net_hw->ipsec_data->sa_free_cnt == 0) { PMD_DRV_LOG(ERR, "No space in SA table, spi: %d", conf->ipsec.spi); return -EINVAL; } - nfp_get_sa_entry(hw->ipsec_data, &sa_idx); + nfp_get_sa_entry(net_hw->ipsec_data, &sa_idx); if (sa_idx < 0) { PMD_DRV_LOG(ERR, "Failed to get SA entry!"); @@ -1105,7 +1105,7 @@ nfp_crypto_create_session(void *device, msg.cmd = NFP_IPSEC_CFG_MSG_ADD_SA; msg.sa_idx = sa_idx; - ret = nfp_ipsec_cfg_cmd_issue(hw, &msg); + ret = nfp_ipsec_cfg_cmd_issue(net_hw, &msg); if (ret < 0) { PMD_DRV_LOG(ERR, "Failed to add SA to nic"); return -EINVAL; @@ -1118,8 +1118,8 @@ nfp_crypto_create_session(void *device, priv_session->dev = eth_dev; priv_session->user_data = conf->userdata; - hw->ipsec_data->sa_free_cnt--; - hw->ipsec_data->sa_entries[sa_idx] = priv_session; + net_hw->ipsec_data->sa_free_cnt--; + net_hw->ipsec_data->sa_entries[sa_idx] = priv_session; return 0; } @@ -1156,19 +1156,19 @@ nfp_security_set_pkt_metadata(void *device, { int offset; uint64_t *sqn; - struct nfp_net_hw *hw; + struct nfp_net_hw *net_hw; struct rte_eth_dev *eth_dev; struct nfp_ipsec_session *priv_session; sqn = params; eth_dev = device; priv_session = SECURITY_GET_SESS_PRIV(session); - hw = NFP_NET_DEV_PRIVATE_TO_HW(eth_dev->data->dev_private); + net_hw = NFP_NET_DEV_PRIVATE_TO_HW(eth_dev->data->dev_private); if (priv_session->ipsec.direction == RTE_SECURITY_IPSEC_SA_DIR_EGRESS) { struct nfp_tx_ipsec_desc_msg 
*desc_md; - offset = hw->ipsec_data->pkt_dynfield_offset; + offset = net_hw->ipsec_data->pkt_dynfield_offset; desc_md = RTE_MBUF_DYNFIELD(m, offset, struct nfp_tx_ipsec_desc_msg *); if (priv_session->msg.ctrl_word.ext_seq != 0 && sqn != NULL) { @@ -1223,7 +1223,7 @@ nfp_security_session_get_stats(void *device, struct rte_security_stats *stats) { int ret; - struct nfp_net_hw *hw; + struct nfp_net_hw *net_hw; struct nfp_ipsec_msg msg; struct rte_eth_dev *eth_dev; struct ipsec_get_sa_stats *cfg_s; @@ -1236,9 +1236,9 @@ nfp_security_session_get_stats(void *device, memset(&msg, 0, sizeof(msg)); msg.cmd = NFP_IPSEC_CFG_MSG_GET_SA_STATS; msg.sa_idx = priv_session->sa_index; - hw = NFP_NET_DEV_PRIVATE_TO_HW(eth_dev->data->dev_private); + net_hw = NFP_NET_DEV_PRIVATE_TO_HW(eth_dev->data->dev_private); - ret = nfp_ipsec_cfg_cmd_issue(hw, &msg); + ret = nfp_ipsec_cfg_cmd_issue(net_hw, &msg); if (ret < 0) { PMD_DRV_LOG(ERR, "Failed to get SA stats"); return ret; @@ -1284,22 +1284,22 @@ nfp_crypto_remove_sa(struct rte_eth_dev *eth_dev, { int ret; uint32_t sa_index; - struct nfp_net_hw *hw; + struct nfp_net_hw *net_hw; struct nfp_ipsec_msg cfg; sa_index = priv_session->sa_index; - hw = NFP_NET_DEV_PRIVATE_TO_HW(eth_dev->data->dev_private); + net_hw = NFP_NET_DEV_PRIVATE_TO_HW(eth_dev->data->dev_private); cfg.cmd = NFP_IPSEC_CFG_MSG_INV_SA; cfg.sa_idx = sa_index; - ret = nfp_ipsec_cfg_cmd_issue(hw, &cfg); + ret = nfp_ipsec_cfg_cmd_issue(net_hw, &cfg); if (ret < 0) { PMD_DRV_LOG(ERR, "Failed to remove SA!"); return -EINVAL; } - hw->ipsec_data->sa_free_cnt++; - hw->ipsec_data->sa_entries[sa_index] = NULL; + net_hw->ipsec_data->sa_free_cnt++; + net_hw->ipsec_data->sa_entries[sa_index] = NULL; return 0; } @@ -1377,12 +1377,12 @@ nfp_ipsec_init(struct rte_eth_dev *dev) { int ret; uint32_t cap_extend; - struct nfp_net_hw *hw; + struct nfp_net_hw *net_hw; struct nfp_net_ipsec_data *data; - hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private); + net_hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private); - cap_extend = hw->super.cap_ext; + cap_extend = net_hw->super.cap_ext; if ((cap_extend & NFP_NET_CFG_CTRL_IPSEC) == 0) { PMD_INIT_LOG(INFO, "Unsupported IPsec extend capability"); return 0; @@ -1396,7 +1396,7 @@ nfp_ipsec_init(struct rte_eth_dev *dev) data->pkt_dynfield_offset = -1; data->sa_free_cnt = NFP_NET_IPSEC_MAX_SA_CNT; - hw->ipsec_data = data; + net_hw->ipsec_data = data; ret = nfp_ipsec_ctx_create(dev, data); if (ret != 0) { @@ -1424,12 +1424,12 @@ nfp_ipsec_uninit(struct rte_eth_dev *dev) { uint16_t i; uint32_t cap_extend; - struct nfp_net_hw *hw; + struct nfp_net_hw *net_hw; struct nfp_ipsec_session *priv_session; - hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private); + net_hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private); - cap_extend = hw->super.cap_ext; + cap_extend = net_hw->super.cap_ext; if ((cap_extend & NFP_NET_CFG_CTRL_IPSEC) == 0) { PMD_INIT_LOG(INFO, "Unsupported IPsec extend capability"); return; @@ -1437,17 +1437,17 @@ nfp_ipsec_uninit(struct rte_eth_dev *dev) nfp_ipsec_ctx_destroy(dev); - if (hw->ipsec_data == NULL) { + if (net_hw->ipsec_data == NULL) { PMD_INIT_LOG(INFO, "IPsec data is NULL!"); return; } for (i = 0; i < NFP_NET_IPSEC_MAX_SA_CNT; i++) { - priv_session = hw->ipsec_data->sa_entries[i]; + priv_session = net_hw->ipsec_data->sa_entries[i]; if (priv_session != NULL) memset(priv_session, 0, sizeof(struct nfp_ipsec_session)); } - rte_free(hw->ipsec_data); + rte_free(net_hw->ipsec_data); } diff --git a/drivers/net/nfp/nfp_net_common.c 
b/drivers/net/nfp/nfp_net_common.c index a760fcf0d2..01574de963 100644 --- a/drivers/net/nfp/nfp_net_common.c +++ b/drivers/net/nfp/nfp_net_common.c @@ -336,7 +336,7 @@ nfp_ext_reconfig(struct nfp_hw *hw, /** * Reconfigure the firmware via the mailbox * - * @param hw + * @param net_hw * Device to reconfigure * @param mbox_cmd * The value for the mailbox command @@ -346,24 +346,24 @@ nfp_ext_reconfig(struct nfp_hw *hw, * - (-EIO) if I/O err and fail to reconfigure by the mailbox */ int -nfp_net_mbox_reconfig(struct nfp_net_hw *hw, +nfp_net_mbox_reconfig(struct nfp_net_hw *net_hw, uint32_t mbox_cmd) { int ret; uint32_t mbox; - mbox = hw->tlv_caps.mbox_off; + mbox = net_hw->tlv_caps.mbox_off; - rte_spinlock_lock(&hw->super.reconfig_lock); + rte_spinlock_lock(&net_hw->super.reconfig_lock); - nn_cfg_writeq(&hw->super, mbox + NFP_NET_CFG_MBOX_SIMPLE_CMD, mbox_cmd); - nn_cfg_writel(&hw->super, NFP_NET_CFG_UPDATE, NFP_NET_CFG_UPDATE_MBOX); + nn_cfg_writeq(&net_hw->super, mbox + NFP_NET_CFG_MBOX_SIMPLE_CMD, mbox_cmd); + nn_cfg_writel(&net_hw->super, NFP_NET_CFG_UPDATE, NFP_NET_CFG_UPDATE_MBOX); rte_wmb(); - ret = nfp_reconfig_real(&hw->super, NFP_NET_CFG_UPDATE_MBOX); + ret = nfp_reconfig_real(&net_hw->super, NFP_NET_CFG_UPDATE_MBOX); - rte_spinlock_unlock(&hw->super.reconfig_lock); + rte_spinlock_unlock(&net_hw->super.reconfig_lock); if (ret != 0) { PMD_DRV_LOG(ERR, "Error nft net mailbox reconfig: mbox=%#08x update=%#08x", @@ -371,7 +371,7 @@ nfp_net_mbox_reconfig(struct nfp_net_hw *hw, return -EIO; } - return nn_cfg_readl(&hw->super, mbox + NFP_NET_CFG_MBOX_SIMPLE_RET); + return nn_cfg_readl(&net_hw->super, mbox + NFP_NET_CFG_MBOX_SIMPLE_RET); } /* @@ -625,6 +625,7 @@ nfp_configure_rx_interrupt(struct rte_eth_dev *dev, uint32_t nfp_check_offloads(struct rte_eth_dev *dev) { + uint32_t cap; uint32_t ctrl = 0; uint64_t rx_offload; uint64_t tx_offload; @@ -632,13 +633,14 @@ nfp_check_offloads(struct rte_eth_dev *dev) struct rte_eth_conf *dev_conf; hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private); + cap = hw->super.cap; dev_conf = &dev->data->dev_conf; rx_offload = dev_conf->rxmode.offloads; tx_offload = dev_conf->txmode.offloads; if ((rx_offload & RTE_ETH_RX_OFFLOAD_IPV4_CKSUM) != 0) { - if ((hw->super.cap & NFP_NET_CFG_CTRL_RXCSUM) != 0) + if ((cap & NFP_NET_CFG_CTRL_RXCSUM) != 0) ctrl |= NFP_NET_CFG_CTRL_RXCSUM; } @@ -646,25 +648,25 @@ nfp_check_offloads(struct rte_eth_dev *dev) nfp_net_enable_rxvlan_cap(hw, &ctrl); if ((rx_offload & RTE_ETH_RX_OFFLOAD_QINQ_STRIP) != 0) { - if ((hw->super.cap & NFP_NET_CFG_CTRL_RXQINQ) != 0) + if ((cap & NFP_NET_CFG_CTRL_RXQINQ) != 0) ctrl |= NFP_NET_CFG_CTRL_RXQINQ; } hw->mtu = dev->data->mtu; if ((tx_offload & RTE_ETH_TX_OFFLOAD_VLAN_INSERT) != 0) { - if ((hw->super.cap & NFP_NET_CFG_CTRL_TXVLAN_V2) != 0) + if ((cap & NFP_NET_CFG_CTRL_TXVLAN_V2) != 0) ctrl |= NFP_NET_CFG_CTRL_TXVLAN_V2; - else if ((hw->super.cap & NFP_NET_CFG_CTRL_TXVLAN) != 0) + else if ((cap & NFP_NET_CFG_CTRL_TXVLAN) != 0) ctrl |= NFP_NET_CFG_CTRL_TXVLAN; } /* L2 broadcast */ - if ((hw->super.cap & NFP_NET_CFG_CTRL_L2BC) != 0) + if ((cap & NFP_NET_CFG_CTRL_L2BC) != 0) ctrl |= NFP_NET_CFG_CTRL_L2BC; /* L2 multicast */ - if ((hw->super.cap & NFP_NET_CFG_CTRL_L2MC) != 0) + if ((cap & NFP_NET_CFG_CTRL_L2MC) != 0) ctrl |= NFP_NET_CFG_CTRL_L2MC; /* TX checksum offload */ @@ -676,7 +678,7 @@ nfp_check_offloads(struct rte_eth_dev *dev) /* LSO offload */ if ((tx_offload & RTE_ETH_TX_OFFLOAD_TCP_TSO) != 0 || (tx_offload & RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO) != 0) { - if ((hw->super.cap & 
NFP_NET_CFG_CTRL_LSO) != 0) + if ((cap & NFP_NET_CFG_CTRL_LSO) != 0) ctrl |= NFP_NET_CFG_CTRL_LSO; else ctrl |= NFP_NET_CFG_CTRL_LSO2; @@ -1194,6 +1196,7 @@ nfp_net_tx_desc_limits(struct nfp_net_hw *hw, int nfp_net_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info) { + uint32_t cap; uint32_t cap_extend; uint16_t min_rx_desc; uint16_t max_rx_desc; @@ -1224,32 +1227,34 @@ nfp_net_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info) /* Next should change when PF support is implemented */ dev_info->max_mac_addrs = 1; - if ((hw->super.cap & (NFP_NET_CFG_CTRL_RXVLAN | NFP_NET_CFG_CTRL_RXVLAN_V2)) != 0) + cap = hw->super.cap; + + if ((cap & (NFP_NET_CFG_CTRL_RXVLAN | NFP_NET_CFG_CTRL_RXVLAN_V2)) != 0) dev_info->rx_offload_capa = RTE_ETH_RX_OFFLOAD_VLAN_STRIP; - if ((hw->super.cap & NFP_NET_CFG_CTRL_RXQINQ) != 0) + if ((cap & NFP_NET_CFG_CTRL_RXQINQ) != 0) dev_info->rx_offload_capa |= RTE_ETH_RX_OFFLOAD_QINQ_STRIP; - if ((hw->super.cap & NFP_NET_CFG_CTRL_RXCSUM) != 0) + if ((cap & NFP_NET_CFG_CTRL_RXCSUM) != 0) dev_info->rx_offload_capa |= RTE_ETH_RX_OFFLOAD_IPV4_CKSUM | RTE_ETH_RX_OFFLOAD_UDP_CKSUM | RTE_ETH_RX_OFFLOAD_TCP_CKSUM; - if ((hw->super.cap & (NFP_NET_CFG_CTRL_TXVLAN | NFP_NET_CFG_CTRL_TXVLAN_V2)) != 0) + if ((cap & (NFP_NET_CFG_CTRL_TXVLAN | NFP_NET_CFG_CTRL_TXVLAN_V2)) != 0) dev_info->tx_offload_capa = RTE_ETH_TX_OFFLOAD_VLAN_INSERT; - if ((hw->super.cap & NFP_NET_CFG_CTRL_TXCSUM) != 0) + if ((cap & NFP_NET_CFG_CTRL_TXCSUM) != 0) dev_info->tx_offload_capa |= RTE_ETH_TX_OFFLOAD_IPV4_CKSUM | RTE_ETH_TX_OFFLOAD_UDP_CKSUM | RTE_ETH_TX_OFFLOAD_TCP_CKSUM; - if ((hw->super.cap & NFP_NET_CFG_CTRL_LSO_ANY) != 0) { + if ((cap & NFP_NET_CFG_CTRL_LSO_ANY) != 0) { dev_info->tx_offload_capa |= RTE_ETH_TX_OFFLOAD_TCP_TSO; - if ((hw->super.cap & NFP_NET_CFG_CTRL_VXLAN) != 0) + if ((cap & NFP_NET_CFG_CTRL_VXLAN) != 0) dev_info->tx_offload_capa |= RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO; } - if ((hw->super.cap & NFP_NET_CFG_CTRL_GATHER) != 0) + if ((cap & NFP_NET_CFG_CTRL_GATHER) != 0) dev_info->tx_offload_capa |= RTE_ETH_TX_OFFLOAD_MULTI_SEGS; cap_extend = hw->super.cap_ext; @@ -1292,7 +1297,7 @@ nfp_net_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info) .nb_mtu_seg_max = NFP_TX_MAX_MTU_SEG, }; - if ((hw->super.cap & NFP_NET_CFG_CTRL_RSS_ANY) != 0) { + if ((cap & NFP_NET_CFG_CTRL_RSS_ANY) != 0) { dev_info->rx_offload_capa |= RTE_ETH_RX_OFFLOAD_RSS_HASH; dev_info->flow_type_rss_offloads = RTE_ETH_RSS_IPV4 | @@ -1615,9 +1620,11 @@ nfp_net_rss_reta_write(struct rte_eth_dev *dev, uint8_t mask; uint32_t reta; uint16_t shift; - struct nfp_net_hw *hw; + struct nfp_hw *hw; + struct nfp_net_hw *net_hw; - hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private); + net_hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private); + hw = &net_hw->super; if (reta_size != NFP_NET_CFG_RSS_ITBL_SZ) { PMD_DRV_LOG(ERR, "The size of hash lookup table configured (%hu)" @@ -1642,7 +1649,7 @@ nfp_net_rss_reta_write(struct rte_eth_dev *dev, /* If all 4 entries were set, don't need read RETA register */ if (mask != 0xF) - reta = nn_cfg_readl(&hw->super, NFP_NET_CFG_RSS_ITBL + i); + reta = nn_cfg_readl(hw, NFP_NET_CFG_RSS_ITBL + i); for (j = 0; j < 4; j++) { if ((mask & (0x1 << j)) == 0) @@ -1655,7 +1662,7 @@ nfp_net_rss_reta_write(struct rte_eth_dev *dev, reta |= reta_conf[idx].reta[shift + j] << (8 * j); } - nn_cfg_writel(&hw->super, NFP_NET_CFG_RSS_ITBL + (idx * 64) + shift, reta); + nn_cfg_writel(hw, NFP_NET_CFG_RSS_ITBL + (idx * 64) + shift, reta); } return 0; @@ -1702,10 +1709,13 
@@ nfp_net_reta_query(struct rte_eth_dev *dev, uint8_t mask; uint32_t reta; uint16_t shift; - struct nfp_net_hw *hw; + struct nfp_hw *hw; + struct nfp_net_hw *net_hw; - hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private); - if ((hw->super.ctrl & NFP_NET_CFG_CTRL_RSS_ANY) == 0) + net_hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private); + hw = &net_hw->super; + + if ((hw->ctrl & NFP_NET_CFG_CTRL_RSS_ANY) == 0) return -EINVAL; if (reta_size != NFP_NET_CFG_RSS_ITBL_SZ) { @@ -1728,7 +1738,7 @@ nfp_net_reta_query(struct rte_eth_dev *dev, if (mask == 0) continue; - reta = nn_cfg_readl(&hw->super, NFP_NET_CFG_RSS_ITBL + (idx * 64) + shift); + reta = nn_cfg_readl(hw, NFP_NET_CFG_RSS_ITBL + (idx * 64) + shift); for (j = 0; j < 4; j++) { if ((mask & (0x1 << j)) == 0) continue; @@ -1748,15 +1758,17 @@ nfp_net_rss_hash_write(struct rte_eth_dev *dev, uint8_t i; uint8_t key; uint64_t rss_hf; - struct nfp_net_hw *hw; + struct nfp_hw *hw; + struct nfp_net_hw *net_hw; uint32_t cfg_rss_ctrl = 0; - hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private); + net_hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private); + hw = &net_hw->super; /* Writing the key byte by byte */ for (i = 0; i < rss_conf->rss_key_len; i++) { memcpy(&key, &rss_conf->rss_key[i], 1); - nn_cfg_writeb(&hw->super, NFP_NET_CFG_RSS_KEY + i, key); + nn_cfg_writeb(hw, NFP_NET_CFG_RSS_KEY + i, key); } rss_hf = rss_conf->rss_hf; @@ -1789,10 +1801,10 @@ nfp_net_rss_hash_write(struct rte_eth_dev *dev, cfg_rss_ctrl |= NFP_NET_CFG_RSS_TOEPLITZ; /* Configuring where to apply the RSS hash */ - nn_cfg_writel(&hw->super, NFP_NET_CFG_RSS_CTRL, cfg_rss_ctrl); + nn_cfg_writel(hw, NFP_NET_CFG_RSS_CTRL, cfg_rss_ctrl); /* Writing the key size */ - nn_cfg_writeb(&hw->super, NFP_NET_CFG_RSS_KEY_SZ, rss_conf->rss_key_len); + nn_cfg_writeb(hw, NFP_NET_CFG_RSS_KEY_SZ, rss_conf->rss_key_len); return 0; } @@ -1843,16 +1855,18 @@ nfp_net_rss_hash_conf_get(struct rte_eth_dev *dev, uint8_t i; uint8_t key; uint64_t rss_hf; + struct nfp_hw *hw; uint32_t cfg_rss_ctrl; - struct nfp_net_hw *hw; + struct nfp_net_hw *net_hw; - hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private); + net_hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private); + hw = &net_hw->super; - if ((hw->super.ctrl & NFP_NET_CFG_CTRL_RSS_ANY) == 0) + if ((hw->ctrl & NFP_NET_CFG_CTRL_RSS_ANY) == 0) return -EINVAL; rss_hf = rss_conf->rss_hf; - cfg_rss_ctrl = nn_cfg_readl(&hw->super, NFP_NET_CFG_RSS_CTRL); + cfg_rss_ctrl = nn_cfg_readl(hw, NFP_NET_CFG_RSS_CTRL); if ((cfg_rss_ctrl & NFP_NET_CFG_RSS_IPV4) != 0) rss_hf |= RTE_ETH_RSS_IPV4; @@ -1882,11 +1896,11 @@ nfp_net_rss_hash_conf_get(struct rte_eth_dev *dev, rss_conf->rss_hf = rss_hf; /* Reading the key size */ - rss_conf->rss_key_len = nn_cfg_readl(&hw->super, NFP_NET_CFG_RSS_KEY_SZ); + rss_conf->rss_key_len = nn_cfg_readl(hw, NFP_NET_CFG_RSS_KEY_SZ); /* Reading the key byte a byte */ for (i = 0; i < rss_conf->rss_key_len; i++) { - key = nn_cfg_readb(&hw->super, NFP_NET_CFG_RSS_KEY + i); + key = nn_cfg_readb(hw, NFP_NET_CFG_RSS_KEY + i); memcpy(&rss_conf->rss_key[i], &key, 1); } diff --git a/drivers/net/nfp/nfp_net_ctrl.c b/drivers/net/nfp/nfp_net_ctrl.c index d469896a64..8848fa38fe 100644 --- a/drivers/net/nfp/nfp_net_ctrl.c +++ b/drivers/net/nfp/nfp_net_ctrl.c @@ -29,15 +29,15 @@ nfp_net_tlv_caps_parse(struct rte_eth_dev *dev) uint32_t length; uint32_t offset; uint32_t tlv_type; - struct nfp_net_hw *hw; + struct nfp_net_hw *net_hw; struct nfp_net_tlv_caps *caps; - hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private); - caps = &hw->tlv_caps; + 
net_hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private); + caps = &net_hw->tlv_caps; nfp_net_tlv_caps_reset(caps); - data = hw->super.ctrl_bar + NFP_NET_CFG_TLV_BASE; - end = hw->super.ctrl_bar + NFP_NET_CFG_BAR_SZ; + data = net_hw->super.ctrl_bar + NFP_NET_CFG_TLV_BASE; + end = net_hw->super.ctrl_bar + NFP_NET_CFG_BAR_SZ; hdr = rte_read32(data); if (hdr == 0) { @@ -46,7 +46,7 @@ nfp_net_tlv_caps_parse(struct rte_eth_dev *dev) } for (; ; data += length) { - offset = data - hw->super.ctrl_bar; + offset = data - net_hw->super.ctrl_bar; if (data + NFP_NET_CFG_TLV_VALUE > end) { PMD_DRV_LOG(ERR, "Reached end of BAR without END TLV"); @@ -87,7 +87,7 @@ nfp_net_tlv_caps_parse(struct rte_eth_dev *dev) caps->mbox_len = length; if (length != 0) - caps->mbox_off = data - hw->super.ctrl_bar; + caps->mbox_off = data - net_hw->super.ctrl_bar; else caps->mbox_off = 0; break; diff --git a/drivers/net/nfp/nfp_rxtx.c b/drivers/net/nfp/nfp_rxtx.c index f17cc13cc1..fc94e5f0b9 100644 --- a/drivers/net/nfp/nfp_rxtx.c +++ b/drivers/net/nfp/nfp_rxtx.c @@ -336,10 +336,10 @@ nfp_net_parse_meta_vlan(const struct nfp_meta_parsed *meta, struct nfp_net_rxq *rxq, struct rte_mbuf *mb) { - struct nfp_net_hw *hw = rxq->hw; + uint32_t ctrl = rxq->hw->super.ctrl; - /* Skip if firmware don't support setting vlan. */ - if ((hw->super.ctrl & (NFP_NET_CFG_CTRL_RXVLAN | NFP_NET_CFG_CTRL_RXVLAN_V2)) == 0) + /* Skip if hardware don't support setting vlan. */ + if ((ctrl & (NFP_NET_CFG_CTRL_RXVLAN | NFP_NET_CFG_CTRL_RXVLAN_V2)) == 0) return; /* @@ -347,12 +347,12 @@ nfp_net_parse_meta_vlan(const struct nfp_meta_parsed *meta, * 1. Using the metadata when NFP_NET_CFG_CTRL_RXVLAN_V2 is set, * 2. Using the descriptor when NFP_NET_CFG_CTRL_RXVLAN is set. */ - if ((hw->super.ctrl & NFP_NET_CFG_CTRL_RXVLAN_V2) != 0) { + if ((ctrl & NFP_NET_CFG_CTRL_RXVLAN_V2) != 0) { if (meta->vlan_layer > 0 && meta->vlan[0].offload != 0) { mb->vlan_tci = rte_cpu_to_le_32(meta->vlan[0].tci); mb->ol_flags |= RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED; } - } else if ((hw->super.ctrl & NFP_NET_CFG_CTRL_RXVLAN) != 0) { + } else if ((ctrl & NFP_NET_CFG_CTRL_RXVLAN) != 0) { if ((rxd->rxd.flags & PCIE_DESC_RX_VLAN) != 0) { mb->vlan_tci = rte_cpu_to_le_32(rxd->rxd.offload_info); mb->ol_flags |= RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED; @@ -383,10 +383,10 @@ nfp_net_parse_meta_qinq(const struct nfp_meta_parsed *meta, struct nfp_net_rxq *rxq, struct rte_mbuf *mb) { - struct nfp_net_hw *hw = rxq->hw; + struct nfp_hw *hw = &rxq->hw->super; - if ((hw->super.ctrl & NFP_NET_CFG_CTRL_RXQINQ) == 0 || - (hw->super.cap & NFP_NET_CFG_CTRL_RXQINQ) == 0) + if ((hw->ctrl & NFP_NET_CFG_CTRL_RXQINQ) == 0 || + (hw->cap & NFP_NET_CFG_CTRL_RXQINQ) == 0) return; if (meta->vlan_layer < NFP_META_MAX_VLANS) -- 2.39.1
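
For readers who only skim the commit message, the pattern applied throughout the
diff above can be summarised in a small standalone sketch. This is a hypothetical,
simplified example: the struct members below are stand-ins rather than the real NFP
driver definitions, and only the convention it illustrates comes from the patch,
namely naming the device-private pointer 'net_hw' and caching its embedded 'super'
member in a local 'struct nfp_hw *hw'.

/*
 * Minimal, hypothetical sketch of the naming convention applied by the
 * patch above.  The structures are simplified stand-ins, not the real
 * NFP driver definitions; only the embedding of the generic "super"
 * struct inside the net-specific one mirrors the driver.
 */
#include <stdint.h>
#include <stdio.h>

struct nfp_hw {                 /* generic hw state (simplified) */
	uint32_t cap;
	uint32_t ctrl;
};

struct nfp_net_hw {             /* net-specific hw state embedding the generic one */
	struct nfp_hw super;
	uint16_t mtu;
};

/* Before the rename: the nfp_net_hw pointer was called "hw", so accesses
 * to the generic fields read as hw->super.cap, hw->super.ctrl, ... */
static uint32_t
cap_before(struct nfp_net_hw *hw)
{
	return hw->super.cap;
}

/* After the rename: the nfp_net_hw pointer is "net_hw" and a local "hw"
 * caches &net_hw->super, so the same accesses read as hw->cap, hw->ctrl. */
static uint32_t
cap_after(struct nfp_net_hw *net_hw)
{
	struct nfp_hw *hw = &net_hw->super;

	return hw->cap;
}

int
main(void)
{
	struct nfp_net_hw net_hw = {
		.super = { .cap = 0x1, .ctrl = 0x0 },
		.mtu = 1500,
	};

	/* Both helpers read the same field; only the naming differs. */
	printf("before: %#x, after: %#x\n",
			cap_before(&net_hw), cap_after(&net_hw));

	return 0;
}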