From mboxrd@z Thu Jan 1 00:00:00 1970
From: Xueming Li
To: Michal Krawczyk
CC: Luca Boccassi, Igor Chauskin, Shai Brandes, dpdk stable
Date: Wed, 10 Nov 2021 14:30:43 +0800
Message-ID: <20211110063216.2744012-160-xuemingl@nvidia.com>
X-Mailer: git-send-email 2.33.0
In-Reply-To: <20211110063216.2744012-1-xuemingl@nvidia.com>
References: <20211110063216.2744012-1-xuemingl@nvidia.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain
Subject: [dpdk-stable] patch 'net/ena: fix offload capabilities verification' has been queued to stable release 20.11.4
X-BeenThere: stable@dpdk.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: patches for DPDK stable branches
Errors-To: stable-bounces@dpdk.org
Sender: "stable"

Hi,

FYI, your patch has been queued to stable release 20.11.4

Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 11/12/21. So please
shout if anyone has objections.

Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.

Queued patches are on a temporary branch at:
	https://github.com/steevenlee/dpdk

This queued commit can be viewed at:
	https://github.com/steevenlee/dpdk/commit/ac43dc8cc4a9bc09da0f07b44627528936cf47a6

Thanks.

Xueming Li

---
>From ac43dc8cc4a9bc09da0f07b44627528936cf47a6 Mon Sep 17 00:00:00 2001
From: Michal Krawczyk
Date: Tue, 19 Oct 2021 12:56:23 +0200
Subject: [PATCH] net/ena: fix offload capabilities verification
Cc: Xueming Li

[ upstream commit e8c838fde93f48c2a7504570aae38c06e3189fa1 ]

ENA PMD has multiple checksum offload flags, which are more discrete
than the DPDK offload capabilities flags. As the driver wasn't storing
its internal checksum offload capabilities and was relying only on the
DPDK capabilities, not all scenarios could be properly covered (like
when to prepare pseudo header checksum and when not). Moreover, the
user could request an offload capability which isn't supported by the
HW, and the PMD would quietly ignore the issue.

This commit reworks the eth_ena_prep_pkts() function to perform
additional checks and to properly reflect the HW requirements. With
RTE_LIBRTE_ETHDEV_DEBUG enabled, the function will do even more
verifications, to help the user find any issues with the mbuf
configuration.

Fixes: b3fc5a1ae10d ("net/ena: add Tx preparation")

Signed-off-by: Michal Krawczyk
Reviewed-by: Igor Chauskin
Reviewed-by: Shai Brandes
---
 drivers/net/ena/ena_ethdev.c | 233 +++++++++++++++++++++++++++--------
 drivers/net/ena/ena_ethdev.h |   5 +-
 2 files changed, 185 insertions(+), 53 deletions(-)

diff --git a/drivers/net/ena/ena_ethdev.c b/drivers/net/ena/ena_ethdev.c
index 3f2c979f52..16ba729989 100644
--- a/drivers/net/ena/ena_ethdev.c
+++ b/drivers/net/ena/ena_ethdev.c
@@ -155,6 +155,23 @@ static const struct ena_stats ena_stats_rx_strings[] = {
 #define ENA_TX_OFFLOAD_NOTSUP_MASK	\
 	(PKT_TX_OFFLOAD_MASK ^ ENA_TX_OFFLOAD_MASK)
 
+/** HW specific offloads capabilities. */
+/* IPv4 checksum offload. */
+#define ENA_L3_IPV4_CSUM	0x0001
+/* TCP/UDP checksum offload for IPv4 packets. */
+#define ENA_L4_IPV4_CSUM	0x0002
+/* TCP/UDP checksum offload for IPv4 packets with pseudo header checksum. */
+#define ENA_L4_IPV4_CSUM_PARTIAL	0x0004
+/* TCP/UDP checksum offload for IPv6 packets. */
+#define ENA_L4_IPV6_CSUM	0x0008
+/* TCP/UDP checksum offload for IPv6 packets with pseudo header checksum. */
+#define ENA_L4_IPV6_CSUM_PARTIAL	0x0010
+/* TSO support for IPv4 packets. */
+#define ENA_IPV4_TSO	0x0020
+
+/* Device supports setting RSS hash. */
+#define ENA_RX_RSS_HASH	0x0040
+
 static const struct rte_pci_id pci_id_ena_map[] = {
 	{ RTE_PCI_DEVICE(PCI_VENDOR_ID_AMAZON, PCI_DEVICE_ID_ENA_VF) },
 	{ RTE_PCI_DEVICE(PCI_VENDOR_ID_AMAZON, PCI_DEVICE_ID_ENA_VF_RSERV0) },
@@ -1746,6 +1763,50 @@ static uint32_t ena_calc_max_io_queue_num(struct ena_com_dev *ena_dev,
 	return max_num_io_queues;
 }
 
+static void
+ena_set_offloads(struct ena_offloads *offloads,
+		 struct ena_admin_feature_offload_desc *offload_desc)
+{
+	if (offload_desc->tx & ENA_ADMIN_FEATURE_OFFLOAD_DESC_TSO_IPV4_MASK)
+		offloads->tx_offloads |= ENA_IPV4_TSO;
+
+	/* Tx IPv4 checksum offloads */
+	if (offload_desc->tx &
+	    ENA_ADMIN_FEATURE_OFFLOAD_DESC_TX_L3_CSUM_IPV4_MASK)
+		offloads->tx_offloads |= ENA_L3_IPV4_CSUM;
+	if (offload_desc->tx &
+	    ENA_ADMIN_FEATURE_OFFLOAD_DESC_TX_L4_IPV4_CSUM_FULL_MASK)
+		offloads->tx_offloads |= ENA_L4_IPV4_CSUM;
+	if (offload_desc->tx &
+	    ENA_ADMIN_FEATURE_OFFLOAD_DESC_TX_L4_IPV4_CSUM_PART_MASK)
+		offloads->tx_offloads |= ENA_L4_IPV4_CSUM_PARTIAL;
+
+	/* Tx IPv6 checksum offloads */
+	if (offload_desc->tx &
+	    ENA_ADMIN_FEATURE_OFFLOAD_DESC_TX_L4_IPV6_CSUM_FULL_MASK)
+		offloads->tx_offloads |= ENA_L4_IPV6_CSUM;
+	if (offload_desc->tx &
+	    ENA_ADMIN_FEATURE_OFFLOAD_DESC_TX_L4_IPV6_CSUM_PART_MASK)
+		offloads->tx_offloads |= ENA_L4_IPV6_CSUM_PARTIAL;
+
+	/* Rx IPv4 checksum offloads */
+	if (offload_desc->rx_supported &
+	    ENA_ADMIN_FEATURE_OFFLOAD_DESC_RX_L3_CSUM_IPV4_MASK)
+		offloads->rx_offloads |= ENA_L3_IPV4_CSUM;
+	if (offload_desc->rx_supported &
+	    ENA_ADMIN_FEATURE_OFFLOAD_DESC_RX_L4_IPV4_CSUM_MASK)
+		offloads->rx_offloads |= ENA_L4_IPV4_CSUM;
+
+	/* Rx IPv6 checksum offloads */
+	if (offload_desc->rx_supported &
+	    ENA_ADMIN_FEATURE_OFFLOAD_DESC_RX_L4_IPV6_CSUM_MASK)
+		offloads->rx_offloads |= ENA_L4_IPV6_CSUM;
+
+	if (offload_desc->rx_supported &
+	    ENA_ADMIN_FEATURE_OFFLOAD_DESC_RX_HASH_MASK)
+		offloads->rx_offloads |= ENA_RX_RSS_HASH;
+}
+
 static int eth_ena_dev_init(struct rte_eth_dev *eth_dev)
 {
 	struct ena_calc_queue_size_ctx calc_queue_ctx = { 0 };
@@ -1868,14 +1929,7 @@ static int eth_ena_dev_init(struct rte_eth_dev *eth_dev)
 	/* Set max MTU for this device */
 	adapter->max_mtu = get_feat_ctx.dev_attr.max_mtu;
 
-	/* set device support for offloads */
-	adapter->offloads.tso4_supported = (get_feat_ctx.offload.tx &
-		ENA_ADMIN_FEATURE_OFFLOAD_DESC_TSO_IPV4_MASK) != 0;
-	adapter->offloads.tx_csum_supported = (get_feat_ctx.offload.tx &
-		ENA_ADMIN_FEATURE_OFFLOAD_DESC_TX_L4_IPV4_CSUM_PART_MASK) != 0;
-	adapter->offloads.rx_csum_supported =
-		(get_feat_ctx.offload.rx_supported &
-		ENA_ADMIN_FEATURE_OFFLOAD_DESC_RX_L4_IPV4_CSUM_MASK) != 0;
+	ena_set_offloads(&adapter->offloads, &get_feat_ctx.offload);
 
 	/* Copy MAC address and point DPDK to it */
 	eth_dev->data->mac_addrs = (struct rte_ether_addr *)adapter->mac_addr;
@@ -2024,25 +2078,29 @@ static int ena_infos_get(struct rte_eth_dev *dev,
 			ETH_LINK_SPEED_100G;
 
 	/* Set Tx & Rx features available for device */
-	if (adapter->offloads.tso4_supported)
+	if (adapter->offloads.tx_offloads & ENA_IPV4_TSO)
 		tx_feat |= DEV_TX_OFFLOAD_TCP_TSO;
-	if (adapter->offloads.tx_csum_supported)
-		tx_feat |= DEV_TX_OFFLOAD_IPV4_CKSUM |
-			DEV_TX_OFFLOAD_UDP_CKSUM |
-			DEV_TX_OFFLOAD_TCP_CKSUM;
+	if (adapter->offloads.tx_offloads & ENA_L3_IPV4_CSUM)
+		tx_feat |= DEV_TX_OFFLOAD_IPV4_CKSUM;
+	if (adapter->offloads.tx_offloads &
+	    (ENA_L4_IPV4_CSUM_PARTIAL | ENA_L4_IPV4_CSUM |
+	     ENA_L4_IPV6_CSUM | ENA_L4_IPV6_CSUM_PARTIAL))
+		tx_feat |= DEV_TX_OFFLOAD_UDP_CKSUM | DEV_TX_OFFLOAD_TCP_CKSUM;
 
-	if (adapter->offloads.rx_csum_supported)
-		rx_feat |= DEV_RX_OFFLOAD_IPV4_CKSUM |
-			DEV_RX_OFFLOAD_UDP_CKSUM |
-			DEV_RX_OFFLOAD_TCP_CKSUM;
+	if (adapter->offloads.rx_offloads & ENA_L3_IPV4_CSUM)
+		rx_feat |= DEV_RX_OFFLOAD_IPV4_CKSUM;
+	if (adapter->offloads.rx_offloads &
+	    (ENA_L4_IPV4_CSUM | ENA_L4_IPV6_CSUM))
+		rx_feat |= DEV_RX_OFFLOAD_UDP_CKSUM | DEV_RX_OFFLOAD_TCP_CKSUM;
 
 	rx_feat |= DEV_RX_OFFLOAD_JUMBO_FRAME;
 	tx_feat |= DEV_TX_OFFLOAD_MULTI_SEGS;
 
 	/* Inform framework about available features */
 	dev_info->rx_offload_capa = rx_feat;
-	dev_info->rx_offload_capa |= DEV_RX_OFFLOAD_RSS_HASH;
+	if (adapter->offloads.rx_offloads & ENA_RX_RSS_HASH)
+		dev_info->rx_offload_capa |= DEV_RX_OFFLOAD_RSS_HASH;
 	dev_info->rx_queue_offload_capa = rx_feat;
 	dev_info->tx_offload_capa = tx_feat;
 	dev_info->tx_queue_offload_capa = tx_feat;
@@ -2284,45 +2342,60 @@ eth_ena_prep_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
 	uint32_t i;
 	struct rte_mbuf *m;
 	struct ena_ring *tx_ring = (struct ena_ring *)(tx_queue);
+	struct ena_adapter *adapter = tx_ring->adapter;
 	struct rte_ipv4_hdr *ip_hdr;
 	uint64_t ol_flags;
+	uint64_t l4_csum_flag;
+	uint64_t dev_offload_capa;
 	uint16_t frag_field;
+	bool need_pseudo_csum;
 
+	dev_offload_capa = adapter->offloads.tx_offloads;
 	for (i = 0; i != nb_pkts; i++) {
 		m = tx_pkts[i];
 		ol_flags = m->ol_flags;
 
-		if (!(ol_flags & PKT_TX_IPV4))
+		/* Check if any offload flag was set */
+		if (ol_flags == 0)
 			continue;
 
-		/* If there was not L2 header length specified, assume it is
-		 * length of the ethernet header.
-		 */
-		if (unlikely(m->l2_len == 0))
-			m->l2_len = sizeof(struct rte_ether_hdr);
-
-		ip_hdr = rte_pktmbuf_mtod_offset(m, struct rte_ipv4_hdr *,
-			m->l2_len);
-		frag_field = rte_be_to_cpu_16(ip_hdr->fragment_offset);
-
-		if ((frag_field & RTE_IPV4_HDR_DF_FLAG) != 0) {
-			m->packet_type |= RTE_PTYPE_L4_NONFRAG;
-
-			/* If IPv4 header has DF flag enabled and TSO support is
-			 * disabled, partial chcecksum should not be calculated.
-			 */
-			if (!tx_ring->adapter->offloads.tso4_supported)
-				continue;
-		}
-
-		if ((ol_flags & ENA_TX_OFFLOAD_NOTSUP_MASK) != 0 ||
-				(ol_flags & PKT_TX_L4_MASK) ==
-				PKT_TX_SCTP_CKSUM) {
+		l4_csum_flag = ol_flags & PKT_TX_L4_MASK;
+		/* SCTP checksum offload is not supported by the ENA. */
+		if ((ol_flags & ENA_TX_OFFLOAD_NOTSUP_MASK) ||
+		    l4_csum_flag == PKT_TX_SCTP_CKSUM) {
+			PMD_TX_LOG(DEBUG,
+				"mbuf[%" PRIu32 "] has unsupported offloads flags set: 0x%" PRIu64 "\n",
+				i, ol_flags);
 			rte_errno = ENOTSUP;
 			return i;
 		}
 
 #ifdef RTE_LIBRTE_ETHDEV_DEBUG
+		/* Check if requested offload is also enabled for the queue */
+		if ((ol_flags & PKT_TX_IP_CKSUM &&
+		     !(tx_ring->offloads & DEV_TX_OFFLOAD_IPV4_CKSUM)) ||
+		    (l4_csum_flag == PKT_TX_TCP_CKSUM &&
+		     !(tx_ring->offloads & DEV_TX_OFFLOAD_TCP_CKSUM)) ||
+		    (l4_csum_flag == PKT_TX_UDP_CKSUM &&
+		     !(tx_ring->offloads & DEV_TX_OFFLOAD_UDP_CKSUM))) {
+			PMD_TX_LOG(DEBUG,
+				"mbuf[%" PRIu32 "]: requested offloads: %" PRIu16 " are not enabled for the queue[%u]\n",
+				i, m->nb_segs, tx_ring->id);
+			rte_errno = EINVAL;
+			return i;
+		}
+
+		/* The caller is obligated to set l2 and l3 len if any cksum
+		 * offload is enabled.
+		 */
+		if (unlikely(ol_flags & (PKT_TX_IP_CKSUM | PKT_TX_L4_MASK) &&
+		    (m->l2_len == 0 || m->l3_len == 0))) {
+			PMD_TX_LOG(DEBUG,
+				"mbuf[%" PRIu32 "]: l2_len or l3_len values are 0 while the offload was requested\n",
+				i);
+			rte_errno = EINVAL;
+			return i;
+		}
 		ret = rte_validate_tx_offload(m);
 		if (ret != 0) {
 			rte_errno = -ret;
@@ -2330,16 +2403,76 @@ eth_ena_prep_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
 		}
 #endif
 
-		/* In case we are supposed to TSO and have DF not set (DF=0)
-		 * hardware must be provided with partial checksum, otherwise
-		 * it will take care of necessary calculations.
+		/* Verify HW support for requested offloads and determine if
+		 * pseudo header checksum is needed.
 		 */
+		need_pseudo_csum = false;
+		if (ol_flags & PKT_TX_IPV4) {
+			if (ol_flags & PKT_TX_IP_CKSUM &&
+			    !(dev_offload_capa & ENA_L3_IPV4_CSUM)) {
+				rte_errno = ENOTSUP;
+				return i;
+			}
 
-		ret = rte_net_intel_cksum_flags_prepare(m,
-			ol_flags & ~PKT_TX_TCP_SEG);
-		if (ret != 0) {
-			rte_errno = -ret;
-			return i;
+			if (ol_flags & PKT_TX_TCP_SEG &&
+			    !(dev_offload_capa & ENA_IPV4_TSO)) {
+				rte_errno = ENOTSUP;
+				return i;
+			}
+
+			/* Check HW capabilities and if pseudo csum is needed
+			 * for L4 offloads.
+			 */
+			if (l4_csum_flag != PKT_TX_L4_NO_CKSUM &&
+			    !(dev_offload_capa & ENA_L4_IPV4_CSUM)) {
+				if (dev_offload_capa &
+				    ENA_L4_IPV4_CSUM_PARTIAL) {
+					need_pseudo_csum = true;
+				} else {
+					rte_errno = ENOTSUP;
+					return i;
+				}
+			}
+
+			/* Parse the DF flag */
+			ip_hdr = rte_pktmbuf_mtod_offset(m,
+				struct rte_ipv4_hdr *, m->l2_len);
+			frag_field = rte_be_to_cpu_16(ip_hdr->fragment_offset);
+			if (frag_field & RTE_IPV4_HDR_DF_FLAG) {
+				m->packet_type |= RTE_PTYPE_L4_NONFRAG;
+			} else if (ol_flags & PKT_TX_TCP_SEG) {
+				/* In case we are supposed to TSO and have DF
+				 * not set (DF=0) hardware must be provided with
+				 * partial checksum.
+				 */
+				need_pseudo_csum = true;
+			}
+		} else if (ol_flags & PKT_TX_IPV6) {
+			/* There is no support for IPv6 TSO as for now. */
+			if (ol_flags & PKT_TX_TCP_SEG) {
+				rte_errno = ENOTSUP;
+				return i;
+			}
+
+			/* Check HW capabilities and if pseudo csum is needed */
+			if (l4_csum_flag != PKT_TX_L4_NO_CKSUM &&
+			    !(dev_offload_capa & ENA_L4_IPV6_CSUM)) {
+				if (dev_offload_capa &
+				    ENA_L4_IPV6_CSUM_PARTIAL) {
+					need_pseudo_csum = true;
+				} else {
+					rte_errno = ENOTSUP;
+					return i;
+				}
+			}
+		}
+
+		if (need_pseudo_csum) {
+			ret = rte_net_intel_cksum_flags_prepare(m, ol_flags);
+			if (ret != 0) {
+				rte_errno = -ret;
+				return i;
+			}
 		}
 	}
 
diff --git a/drivers/net/ena/ena_ethdev.h b/drivers/net/ena/ena_ethdev.h
index ae235897ee..1118cc5a06 100644
--- a/drivers/net/ena/ena_ethdev.h
+++ b/drivers/net/ena/ena_ethdev.h
@@ -202,9 +202,8 @@ struct ena_stats_eni {
 };
 
 struct ena_offloads {
-	bool tso4_supported;
-	bool tx_csum_supported;
-	bool rx_csum_supported;
+	uint32_t tx_offloads;
+	uint32_t rx_offloads;
 };
 
 /* board specific private data structure */
-- 
2.33.0

---
  Diff of the applied patch vs upstream commit (please double-check if non-empty:
---
--- -	2021-11-10 14:17:09.008695449 +0800
+++ 0159-net-ena-fix-offload-capabilities-verification.patch	2021-11-10 14:17:01.967411966 +0800
@@ -1 +1 @@
-From e8c838fde93f48c2a7504570aae38c06e3189fa1 Mon Sep 17 00:00:00 2001
+From ac43dc8cc4a9bc09da0f07b44627528936cf47a6 Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li
+
+[ upstream commit e8c838fde93f48c2a7504570aae38c06e3189fa1 ]
@@ -23 +25,0 @@
-Cc: stable@dpdk.org
@@ -29,3 +31,3 @@
- drivers/net/ena/ena_ethdev.c | 235 +++++++++++++++++++++++++++--------
- drivers/net/ena/ena_ethdev.h |   6 +-
- 2 files changed, 184 insertions(+), 57 deletions(-)
+ drivers/net/ena/ena_ethdev.c | 233 +++++++++++++++++++++++++++--------
+ drivers/net/ena/ena_ethdev.h |   5 +-
+ 2 files changed, 185 insertions(+), 53 deletions(-)
@@ -34 +36 @@
-index 3fde099ab4..197cb7ecd4 100644
+index 3f2c979f52..16ba729989 100644
@@ -37 +39 @@
-@@ -140,6 +140,23 @@ static const struct ena_stats ena_stats_rx_strings[] = {
+@@ -155,6 +155,23 @@ static const struct ena_stats ena_stats_rx_strings[] = {
@@ -61 +63 @@
-@@ -1612,6 +1629,50 @@ static uint32_t ena_calc_max_io_queue_num(struct ena_com_dev *ena_dev,
+@@ -1746,6 +1763,50 @@ static uint32_t ena_calc_max_io_queue_num(struct ena_com_dev *ena_dev,
@@ -112 +114 @@
-@@ -1733,17 +1794,7 @@ static int eth_ena_dev_init(struct rte_eth_dev *eth_dev)
+@@ -1868,14 +1929,7 @@ static int eth_ena_dev_init(struct rte_eth_dev *eth_dev)
@@ -124,3 +125,0 @@
--	adapter->offloads.rss_hash_supported =
--		(get_feat_ctx.offload.rx_supported &
--		ENA_ADMIN_FEATURE_OFFLOAD_DESC_RX_HASH_MASK) != 0;
@@ -131 +130 @@
-@@ -1903,24 +1954,27 @@ static int ena_infos_get(struct rte_eth_dev *dev,
+@@ -2024,25 +2078,29 @@ static int ena_infos_get(struct rte_eth_dev *dev,
@@ -159,0 +159 @@
+ 	rx_feat |= DEV_RX_OFFLOAD_JUMBO_FRAME;
@@ -164 +164 @@
--	if (adapter->offloads.rss_hash_supported)
+-	dev_info->rx_offload_capa |= DEV_RX_OFFLOAD_RSS_HASH;
@@ -166 +166 @@
- 	dev_info->rx_offload_capa |= DEV_RX_OFFLOAD_RSS_HASH;
++		dev_info->rx_offload_capa |= DEV_RX_OFFLOAD_RSS_HASH;
@@ -169 +169,2 @@
-@@ -2173,45 +2227,60 @@ eth_ena_prep_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
+ 	dev_info->tx_queue_offload_capa = tx_feat;
+@@ -2284,45 +2342,60 @@ eth_ena_prep_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
@@ -254 +255 @@
-@@ -2219,16 +2288,76 @@ eth_ena_prep_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
+@@ -2330,16 +2403,76 @@ eth_ena_prep_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
@@ -340 +341 @@
-index 06ac8b06b5..26d425a893 100644
+index ae235897ee..1118cc5a06 100644
@@ -343 +344 @@
-@@ -223,10 +223,8 @@ struct ena_stats_eni {
+@@ -202,9 +202,8 @@ struct ena_stats_eni {
@@ -350 +350,0 @@
--	bool rss_hash_supported;