From: Bing Zhao
CC: Aman Deep Singh, Xiaoyun Li
Date: Mon, 16 Aug 2021 19:29:47 +0300
Message-ID: <20210816162952.1931473-2-bingz@nvidia.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20210816162952.1931473-1-bingz@nvidia.com>
References: <20210816162952.1931473-1-bingz@nvidia.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain
Subject: [dpdk-stable] [PATCH 19.11 1/6] app/testpmd: fix offloads for newly attached port
List-Id: patches for DPDK stable branches

From: Viacheslav Ovsiienko

[ upstream commit b6b8a1ebd4dadc82733ce4b0a711da918c386115 ]

For newly attached ports (with the "port attach" command), the default
offload settings configured from the application command line were not
applied, causing the port start to fail after the attach. For example,
if the scattering offload was configured on the command line and rxpkts
was configured for multiple segments, starting the newly attached port
failed because the scattering offload was not enabled in the new port's
settings.
The missing code that applies the offloads to the new device and its
queues is added. A new local routine, init_config_port_offloads(), is
introduced to hold the part of the port offload initialization code
shared between the startup and attach paths.

Fixes: c9cce42876f5 ("ethdev: remove deprecated attach/detach functions")
Cc: stable@dpdk.org

Signed-off-by: Viacheslav Ovsiienko
Acked-by: Aman Deep Singh
Acked-by: Xiaoyun Li
Signed-off-by: Bing Zhao
---
 app/test-pmd/testpmd.c | 145 ++++++++++++++++++-----------------------
 1 file changed, 65 insertions(+), 80 deletions(-)

diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c
index 9485953aba..ea25e9a984 100644
--- a/app/test-pmd/testpmd.c
+++ b/app/test-pmd/testpmd.c
@@ -1289,23 +1289,69 @@ check_nb_hairpinq(queueid_t hairpinq)
 	return 0;
 }
 
+static void
+init_config_port_offloads(portid_t pid, uint32_t socket_id)
+{
+	struct rte_port *port = &ports[pid];
+	uint16_t data_size;
+	int ret;
+	int i;
+
+	port->dev_conf.txmode = tx_mode;
+	port->dev_conf.rxmode = rx_mode;
+
+	ret = eth_dev_info_get_print_err(pid, &port->dev_info);
+	if (ret != 0)
+		rte_exit(EXIT_FAILURE, "rte_eth_dev_info_get() failed\n");
+
+	ret = update_jumbo_frame_offload(pid);
+	if (ret != 0)
+		printf("Updating jumbo frame offload failed for port %u\n",
+		       pid);
+
+	if (!(port->dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE))
+		port->dev_conf.txmode.offloads &=
+			~DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+
+	/* Apply Rx offloads configuration */
+	for (i = 0; i < port->dev_info.max_rx_queues; i++)
+		port->rx_conf[i].offloads = port->dev_conf.rxmode.offloads;
+	/* Apply Tx offloads configuration */
+	for (i = 0; i < port->dev_info.max_tx_queues; i++)
+		port->tx_conf[i].offloads = port->dev_conf.txmode.offloads;
+
+	/* set flag to initialize port/queue */
+	port->need_reconfig = 1;
+	port->need_reconfig_queues = 1;
+	port->socket_id = socket_id;
+	port->tx_metadata = 0;
+
+	/*
+	 * Check for maximum number of segments per MTU.
+	 * Accordingly update the mbuf data size.
+	 */
+	if (port->dev_info.rx_desc_lim.nb_mtu_seg_max != UINT16_MAX &&
+	    port->dev_info.rx_desc_lim.nb_mtu_seg_max != 0) {
+		data_size = rx_mode.max_rx_pkt_len /
+			port->dev_info.rx_desc_lim.nb_mtu_seg_max;
+
+		if ((data_size + RTE_PKTMBUF_HEADROOM) > mbuf_data_size) {
+			mbuf_data_size = data_size + RTE_PKTMBUF_HEADROOM;
+			TESTPMD_LOG(WARNING, "Configured mbuf size %hu\n",
+				    mbuf_data_size);
+		}
+	}
+}
+
 static void
 init_config(void)
 {
 	portid_t pid;
-	struct rte_port *port;
 	struct rte_mempool *mbp;
 	unsigned int nb_mbuf_per_pool;
 	lcoreid_t lc_id;
-	uint8_t port_per_socket[RTE_MAX_NUMA_NODES];
 	struct rte_gro_param gro_param;
 	uint32_t gso_types;
-	uint16_t data_size;
-	bool warning = 0;
-	int k;
-	int ret;
-
-	memset(port_per_socket,0,RTE_MAX_NUMA_NODES);
 
 	/* Configuration of logical cores. */
 	fwd_lcores = rte_zmalloc("testpmd: fwd_lcores",
@@ -1327,30 +1373,12 @@ init_config(void)
 	}
 
 	RTE_ETH_FOREACH_DEV(pid) {
-		port = &ports[pid];
-		/* Apply default TxRx configuration for all ports */
-		port->dev_conf.txmode = tx_mode;
-		port->dev_conf.rxmode = rx_mode;
+		uint32_t socket_id;
 
-		ret = eth_dev_info_get_print_err(pid, &port->dev_info);
-		if (ret != 0)
-			rte_exit(EXIT_FAILURE,
-				 "rte_eth_dev_info_get() failed\n");
-
-		ret = update_jumbo_frame_offload(pid);
-		if (ret != 0)
-			printf("Updating jumbo frame offload failed for port %u\n",
-			       pid);
-
-		if (!(port->dev_info.tx_offload_capa &
-		      DEV_TX_OFFLOAD_MBUF_FAST_FREE))
-			port->dev_conf.txmode.offloads &=
-				~DEV_TX_OFFLOAD_MBUF_FAST_FREE;
 		if (numa_support) {
-			if (port_numa[pid] != NUMA_NO_CONFIG)
-				port_per_socket[port_numa[pid]]++;
-			else {
-				uint32_t socket_id = rte_eth_dev_socket_id(pid);
+			socket_id = port_numa[pid];
+			if (port_numa[pid] == NUMA_NO_CONFIG) {
+				socket_id = rte_eth_dev_socket_id(pid);
 
 				/*
 				 * if socket_id is invalid,
@@ -1358,45 +1386,15 @@ init_config(void)
 				 */
 				if (check_socket_id(socket_id) < 0)
 					socket_id = socket_ids[0];
-				port_per_socket[socket_id]++;
-			}
-		}
-
-		/* Apply Rx offloads configuration */
-		for (k = 0; k < port->dev_info.max_rx_queues; k++)
-			port->rx_conf[k].offloads =
-				port->dev_conf.rxmode.offloads;
-		/* Apply Tx offloads configuration */
-		for (k = 0; k < port->dev_info.max_tx_queues; k++)
-			port->tx_conf[k].offloads =
-				port->dev_conf.txmode.offloads;
-
-		/* set flag to initialize port/queue */
-		port->need_reconfig = 1;
-		port->need_reconfig_queues = 1;
-		port->tx_metadata = 0;
-
-		/* Check for maximum number of segments per MTU. Accordingly
-		 * update the mbuf data size.
-		 */
-		if (port->dev_info.rx_desc_lim.nb_mtu_seg_max != UINT16_MAX &&
-		    port->dev_info.rx_desc_lim.nb_mtu_seg_max != 0) {
-			data_size = rx_mode.max_rx_pkt_len /
-				port->dev_info.rx_desc_lim.nb_mtu_seg_max;
-
-			if ((data_size + RTE_PKTMBUF_HEADROOM) >
-							mbuf_data_size) {
-				mbuf_data_size = data_size +
-						 RTE_PKTMBUF_HEADROOM;
-				warning = 1;
 			}
+		} else {
+			socket_id = (socket_num == UMA_NO_CONFIG) ?
+				    0 : socket_num;
 		}
+		/* Apply default TxRx configuration for all ports */
+		init_config_port_offloads(pid, socket_id);
 	}
 
-	if (warning)
-		TESTPMD_LOG(WARNING, "Configured mbuf size %hu\n",
-			    mbuf_data_size);
-
 	/*
 	 * Create pools of mbuf.
 	 * If NUMA support is disabled, create a single pool of mbuf in
@@ -1479,7 +1477,7 @@ init_config(void)
 #if defined RTE_LIBRTE_PMD_SOFTNIC
 	if (strcmp(cur_fwd_eng->fwd_mode_name, "softnic") == 0) {
 		RTE_ETH_FOREACH_DEV(pid) {
-			port = &ports[pid];
+			struct rte_port *port = &ports[pid];
 			const char *driver = port->dev_info.driver_name;
 
 			if (strcmp(driver, "net_softnic") == 0)
@@ -1494,21 +1492,8 @@ init_config(void)
 void
 reconfig(portid_t new_port_id, unsigned socket_id)
 {
-	struct rte_port *port;
-	int ret;
-
-	/* Reconfiguration of Ethernet ports. */
-	port = &ports[new_port_id];
-
-	ret = eth_dev_info_get_print_err(new_port_id, &port->dev_info);
-	if (ret != 0)
-		return;
-
-	/* set flag to initialize port/queue */
-	port->need_reconfig = 1;
-	port->need_reconfig_queues = 1;
-	port->socket_id = socket_id;
-
+	init_config_port_offloads(new_port_id, socket_id);
 	init_port_config();
 }
-- 
2.21.0