From mboxrd@z Thu Jan  1 00:00:00 1970
Return-Path: <stable-bounces@dpdk.org>
Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124])
	by inbox.dpdk.org (Postfix) with ESMTP id 81700A0C47
	for <public@inbox.dpdk.org>; Mon, 26 Jul 2021 15:53:41 +0200 (CEST)
Received: from [217.70.189.124] (localhost [127.0.0.1])
	by mails.dpdk.org (Postfix) with ESMTP id 28931410EF;
	Mon, 26 Jul 2021 15:53:41 +0200 (CEST)
Received: from mail-wm1-f52.google.com (mail-wm1-f52.google.com
 [209.85.128.52]) by mails.dpdk.org (Postfix) with ESMTP id CD7B2410E1
 for <stable@dpdk.org>; Mon, 26 Jul 2021 15:53:39 +0200 (CEST)
Received: by mail-wm1-f52.google.com with SMTP id
 a80-20020a1c98530000b0290245467f26a4so63061wme.0
 for <stable@dpdk.org>; Mon, 26 Jul 2021 06:53:39 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=from:to:cc:subject:date:message-id:in-reply-to:references
 :mime-version:content-transfer-encoding;
 bh=gm4itHCiAKNROpxI4nb85xY4DuW0qROD/pAwJJGK5Uo=;
 b=jMqJBRFPSs3ubuXCrCW3Or7WehKFkbVi9Ab3dDSKtV8mbkWejFuxEWQbS2ddqnXIa8
 mJt8X9HrSAKJvVRwcNunXgnaewCVT1ytko8DKUDv8VABCFJ3e5fQ53VkT9VmRdt+d3zg
 JhzJWdD4R+rgw/ZzzF96nYavlzb8evHVeJ/VN1nVgj3WZmV/2xXzsj5cUnDvRWLRoilG
 GRM3LB14qYRuybSenLv9VJk6jh/bFljy4hNd2ZBLxtIrzS+xv5/0Ru6HHErnQ+CN8IMR
 ZvITzHRKIAhIac2Vw/xwvjAiFXubGPTCLIC7ey4HMtm5zga2MedZ9crbftE3Qz1YCZVt
 a3Dg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
 :references:mime-version:content-transfer-encoding;
 bh=gm4itHCiAKNROpxI4nb85xY4DuW0qROD/pAwJJGK5Uo=;
 b=V10vb0ZgozLQF8gN7Xi6YGeq5jrtr2X08Rdt5eoAKFCV7USTHiJOQiKImgCkYdO80Z
 hSzvJIKzH9l3sEA0ngq6dxV95tbjKCj0oB9Jh0WUkGcTkOdwUL/t9NWryCvOQEkQhqG3
 C8tvls5Z51/XCHvRT+1aX3dYyoC40baVimUq5P+nbVC7k1x6Q7z70BOKoXXZKVUj2B1f
 Nu6RAapnH0mIO6RVJDzEnI2VTSV/TRHOkkGxmbHy47mvzlI6BLrQ756dXZZpPkiEH/mE
 xML07p4y50Ia+dMu+kCm2/B1Rt9I9eE/ACg9JDqBq/4Wt6K2uyFZOhSOeLUYkGJ97bNb
 W3sg==
X-Gm-Message-State: AOAM5302Gdifwubmg61XnGq/5hkb1c3lF/ub1oH7HcO/HuL/+nJuBjf8
 A3crdJJHGHyIy3RqvYgi6Qc=
X-Google-Smtp-Source: ABdhPJxq3hl2cUMErRZVqe24d7mQd8cnFlkfqWLyAPGjrDS3ibx4m0FcYnKEnw7xJRLz/0sl50cuOw==
X-Received: by 2002:a05:600c:3795:: with SMTP id
 o21mr3314163wmr.90.1627307619603; 
 Mon, 26 Jul 2021 06:53:39 -0700 (PDT)
Received: from localhost ([137.220.125.106])
 by smtp.gmail.com with ESMTPSA id q17sm7386807wre.3.2021.07.26.06.53.39
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 26 Jul 2021 06:53:39 -0700 (PDT)
From: luca.boccassi@gmail.com
To: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
Cc: Aman Deep Singh <aman.deep.singh@intel.com>,
 Xiaoyun Li <xiaoyun.li@intel.com>, dpdk stable <stable@dpdk.org>
Date: Mon, 26 Jul 2021 14:52:27 +0100
Message-Id: <20210726135322.149850-4-luca.boccassi@gmail.com>
X-Mailer: git-send-email 2.30.2
In-Reply-To: <20210726135322.149850-1-luca.boccassi@gmail.com>
References: <20210712130551.2462159-1-luca.boccassi@gmail.com>
 <20210726135322.149850-1-luca.boccassi@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Subject: [dpdk-stable] patch 'app/testpmd: fix offloads for newly attached
 port' has been queued to stable release 20.11.3
X-BeenThere: stable@dpdk.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: patches for DPDK stable branches <stable.dpdk.org>
List-Unsubscribe: <https://mails.dpdk.org/options/stable>,
 <mailto:stable-request@dpdk.org?subject=unsubscribe>
List-Archive: <http://mails.dpdk.org/archives/stable/>
List-Post: <mailto:stable@dpdk.org>
List-Help: <mailto:stable-request@dpdk.org?subject=help>
List-Subscribe: <https://mails.dpdk.org/listinfo/stable>,
 <mailto:stable-request@dpdk.org?subject=subscribe>
Errors-To: stable-bounces@dpdk.org
Sender: "stable" <stable-bounces@dpdk.org>

Hi,

FYI, your patch has been queued to stable release 20.11.3

Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 07/28/21, so please
shout if you have any objections.

Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This indicates whether any rebasing was needed
to apply the patch to the stable branch. If there were code changes for the
rebase (i.e. not only metadata diffs), please double-check that the rebase
was done correctly.

Queued patches are on a temporary branch at:
https://github.com/bluca/dpdk-stable

This queued commit can be viewed at:
https://github.com/bluca/dpdk-stable/commit/a5fb806241d9b55afa7f829403414be5b237ae4d

Thanks.

Luca Boccassi

---
>From a5fb806241d9b55afa7f829403414be5b237ae4d Mon Sep 17 00:00:00 2001
From: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
Date: Mon, 12 Jul 2021 15:40:53 +0300
Subject: [PATCH] app/testpmd: fix offloads for newly attached port

[ upstream commit b6b8a1ebd4dadc82733ce4b0a711da918c386115 ]

For newly attached ports (via the "port attach" command), the
default offload settings configured on the application command
line were not applied, causing port start to fail after the
attach.

For example, if the scatter offload was configured on the
command line and rxpkts was set to multiple segments, starting
a newly attached port failed because the scatter offload was
not enabled in the new port settings. The missing code to apply
the offloads to the new device and its queues is added.
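
To illustrate the idea outside of testpmd, here is a minimal sketch
against the DPDK 20.11 ethdev API (the helper apply_default_offloads()
is hypothetical and not part of this patch): a newly attached port has
to inherit the command-line offload configuration, e.g.
DEV_RX_OFFLOAD_SCATTER when rxpkts spans multiple segments, before it
is configured.

#include <rte_ethdev.h>

/*
 * Hypothetical helper, not testpmd code: copy the default offload
 * configuration onto a newly attached port, trimmed to what the
 * device reports as supported, before configuring it.
 */
static int
apply_default_offloads(uint16_t port_id, const struct rte_eth_conf *defaults)
{
        struct rte_eth_dev_info dev_info;
        struct rte_eth_conf conf = *defaults;
        int ret;

        ret = rte_eth_dev_info_get(port_id, &dev_info);
        if (ret != 0)
                return ret;

        /*
         * Keep only the offloads the device supports; without this copy
         * the attached port would start with empty offloads and, e.g.,
         * multi-segment Rx (DEV_RX_OFFLOAD_SCATTER) would be missing.
         */
        conf.rxmode.offloads &= dev_info.rx_offload_capa;
        conf.txmode.offloads &= dev_info.tx_offload_capa;

        return rte_eth_dev_configure(port_id, 1, 1, &conf);
}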

The new local routine init_config_port_offloads() is introduced,
factoring out the shared part of the port offload initialization code.
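
Condensed from the hunks below into a self-contained stand-in
(simplified types and empty bodies, not the real testpmd code), the
shape of the change is that both call sites delegate to one helper:

#include <stdint.h>

typedef uint16_t portid_t;      /* stand-in for the testpmd typedef */

static void
init_config_port_offloads(portid_t pid, uint32_t socket_id)
{
        /* apply default Rx/Tx offloads, mark the port and its queues for
         * reconfiguration, record socket_id, and adjust the first mbuf
         * segment size for nb_mtu_seg_max; see the real body in the diff */
        (void)pid;
        (void)socket_id;
}

static void
init_config(portid_t nb_ports)
{
        /* startup path: every detected port */
        for (portid_t pid = 0; pid < nb_ports; pid++)
                init_config_port_offloads(pid, 0 /* resolved NUMA socket */);
}

static void
reconfig(portid_t new_port_id, uint32_t socket_id)
{
        /* "port attach" path: previously skipped the offload setup */
        init_config_port_offloads(new_port_id, socket_id);
}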

Fixes: c9cce42876f5 ("ethdev: remove deprecated attach/detach functions")

Signed-off-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
Acked-by: Aman Deep Singh <aman.deep.singh@intel.com>
Acked-by: Xiaoyun Li <xiaoyun.li@intel.com>
---
 app/test-pmd/testpmd.c | 145 ++++++++++++++++++-----------------------
 1 file changed, 65 insertions(+), 80 deletions(-)

diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c
index d830fe3a2f..c442bcc5ff 100644
--- a/app/test-pmd/testpmd.c
+++ b/app/test-pmd/testpmd.c
@@ -1398,23 +1398,70 @@ check_nb_hairpinq(queueid_t hairpinq)
 	return 0;
 }
 
+static void
+init_config_port_offloads(portid_t pid, uint32_t socket_id)
+{
+	struct rte_port *port = &ports[pid];
+	uint16_t data_size;
+	int ret;
+	int i;
+
+	port->dev_conf.txmode = tx_mode;
+	port->dev_conf.rxmode = rx_mode;
+
+	ret = eth_dev_info_get_print_err(pid, &port->dev_info);
+	if (ret != 0)
+		rte_exit(EXIT_FAILURE, "rte_eth_dev_info_get() failed\n");
+
+	ret = update_jumbo_frame_offload(pid);
+	if (ret != 0)
+		printf("Updating jumbo frame offload failed for port %u\n",
+			pid);
+
+	if (!(port->dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE))
+		port->dev_conf.txmode.offloads &=
+			~DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+
+	/* Apply Rx offloads configuration */
+	for (i = 0; i < port->dev_info.max_rx_queues; i++)
+		port->rx_conf[i].offloads = port->dev_conf.rxmode.offloads;
+	/* Apply Tx offloads configuration */
+	for (i = 0; i < port->dev_info.max_tx_queues; i++)
+		port->tx_conf[i].offloads = port->dev_conf.txmode.offloads;
+
+	/* set flag to initialize port/queue */
+	port->need_reconfig = 1;
+	port->need_reconfig_queues = 1;
+	port->socket_id = socket_id;
+	port->tx_metadata = 0;
+
+	/*
+	 * Check for maximum number of segments per MTU.
+	 * Accordingly update the mbuf data size.
+	 */
+	if (port->dev_info.rx_desc_lim.nb_mtu_seg_max != UINT16_MAX &&
+	    port->dev_info.rx_desc_lim.nb_mtu_seg_max != 0) {
+		data_size = rx_mode.max_rx_pkt_len /
+			port->dev_info.rx_desc_lim.nb_mtu_seg_max;
+
+		if ((data_size + RTE_PKTMBUF_HEADROOM) > mbuf_data_size[0]) {
+			mbuf_data_size[0] = data_size + RTE_PKTMBUF_HEADROOM;
+			TESTPMD_LOG(WARNING,
+				    "Configured mbuf size of the first segment %hu\n",
+				    mbuf_data_size[0]);
+		}
+	}
+}
+
 static void
 init_config(void)
 {
 	portid_t pid;
-	struct rte_port *port;
 	struct rte_mempool *mbp;
 	unsigned int nb_mbuf_per_pool;
 	lcoreid_t  lc_id;
-	uint8_t port_per_socket[RTE_MAX_NUMA_NODES];
 	struct rte_gro_param gro_param;
 	uint32_t gso_types;
-	uint16_t data_size;
-	bool warning = 0;
-	int k;
-	int ret;
-
-	memset(port_per_socket,0,RTE_MAX_NUMA_NODES);
 
 	/* Configuration of logical cores. */
 	fwd_lcores = rte_zmalloc("testpmd: fwd_lcores",
@@ -1436,30 +1483,12 @@ init_config(void)
 	}
 
 	RTE_ETH_FOREACH_DEV(pid) {
-		port = &ports[pid];
-		/* Apply default TxRx configuration for all ports */
-		port->dev_conf.txmode = tx_mode;
-		port->dev_conf.rxmode = rx_mode;
+		uint32_t socket_id;
 
-		ret = eth_dev_info_get_print_err(pid, &port->dev_info);
-		if (ret != 0)
-			rte_exit(EXIT_FAILURE,
-				 "rte_eth_dev_info_get() failed\n");
-
-		ret = update_jumbo_frame_offload(pid);
-		if (ret != 0)
-			printf("Updating jumbo frame offload failed for port %u\n",
-				pid);
-
-		if (!(port->dev_info.tx_offload_capa &
-		      DEV_TX_OFFLOAD_MBUF_FAST_FREE))
-			port->dev_conf.txmode.offloads &=
-				~DEV_TX_OFFLOAD_MBUF_FAST_FREE;
 		if (numa_support) {
-			if (port_numa[pid] != NUMA_NO_CONFIG)
-				port_per_socket[port_numa[pid]]++;
-			else {
-				uint32_t socket_id = rte_eth_dev_socket_id(pid);
+			socket_id = port_numa[pid];
+			if (port_numa[pid] == NUMA_NO_CONFIG) {
+				socket_id = rte_eth_dev_socket_id(pid);
 
 				/*
 				 * if socket_id is invalid,
@@ -1467,45 +1496,14 @@ init_config(void)
 				 */
 				if (check_socket_id(socket_id) < 0)
 					socket_id = socket_ids[0];
-				port_per_socket[socket_id]++;
-			}
-		}
-
-		/* Apply Rx offloads configuration */
-		for (k = 0; k < port->dev_info.max_rx_queues; k++)
-			port->rx_conf[k].offloads =
-				port->dev_conf.rxmode.offloads;
-		/* Apply Tx offloads configuration */
-		for (k = 0; k < port->dev_info.max_tx_queues; k++)
-			port->tx_conf[k].offloads =
-				port->dev_conf.txmode.offloads;
-
-		/* set flag to initialize port/queue */
-		port->need_reconfig = 1;
-		port->need_reconfig_queues = 1;
-		port->tx_metadata = 0;
-
-		/* Check for maximum number of segments per MTU. Accordingly
-		 * update the mbuf data size.
-		 */
-		if (port->dev_info.rx_desc_lim.nb_mtu_seg_max != UINT16_MAX &&
-				port->dev_info.rx_desc_lim.nb_mtu_seg_max != 0) {
-			data_size = rx_mode.max_rx_pkt_len /
-				port->dev_info.rx_desc_lim.nb_mtu_seg_max;
-
-			if ((data_size + RTE_PKTMBUF_HEADROOM) >
-							mbuf_data_size[0]) {
-				mbuf_data_size[0] = data_size +
-						 RTE_PKTMBUF_HEADROOM;
-				warning = 1;
 			}
+		} else {
+			socket_id = (socket_num == UMA_NO_CONFIG) ?
+				    0 : socket_num;
 		}
+		/* Apply default TxRx configuration for all ports */
+		init_config_port_offloads(pid, socket_id);
 	}
-
-	if (warning)
-		TESTPMD_LOG(WARNING,
-			    "Configured mbuf size of the first segment %hu\n",
-			    mbuf_data_size[0]);
 	/*
 	 * Create pools of mbuf.
 	 * If NUMA support is disabled, create a single pool of mbuf in
@@ -1592,21 +1590,8 @@ init_config(void)
 void
 reconfig(portid_t new_port_id, unsigned socket_id)
 {
-	struct rte_port *port;
-	int ret;
-
 	/* Reconfiguration of Ethernet ports. */
-	port = &ports[new_port_id];
-
-	ret = eth_dev_info_get_print_err(new_port_id, &port->dev_info);
-	if (ret != 0)
-		return;
-
-	/* set flag to initialize port/queue */
-	port->need_reconfig = 1;
-	port->need_reconfig_queues = 1;
-	port->socket_id = socket_id;
-
+	init_config_port_offloads(new_port_id, socket_id);
 	init_port_config();
 }
 
-- 
2.30.2

---
  Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- -	2021-07-26 13:53:16.166985967 +0100
+++ 0004-app-testpmd-fix-offloads-for-newly-attached-port.patch	2021-07-26 13:53:15.773291025 +0100
@@ -1 +1 @@
-From b6b8a1ebd4dadc82733ce4b0a711da918c386115 Mon Sep 17 00:00:00 2001
+From a5fb806241d9b55afa7f829403414be5b237ae4d Mon Sep 17 00:00:00 2001
@@ -5,0 +6,2 @@
+[ upstream commit b6b8a1ebd4dadc82733ce4b0a711da918c386115 ]
+
@@ -21 +22,0 @@
-Cc: stable@dpdk.org
@@ -27,2 +28,2 @@
- app/test-pmd/testpmd.c | 151 +++++++++++++++++++----------------------
- 1 file changed, 68 insertions(+), 83 deletions(-)
+ app/test-pmd/testpmd.c | 145 ++++++++++++++++++-----------------------
+ 1 file changed, 65 insertions(+), 80 deletions(-)
@@ -31 +32 @@
-index 1cdd3cdd12..a48f70962f 100644
+index d830fe3a2f..c442bcc5ff 100644
@@ -34 +35 @@
-@@ -1417,23 +1417,73 @@ check_nb_hairpinq(queueid_t hairpinq)
+@@ -1398,23 +1398,70 @@ check_nb_hairpinq(queueid_t hairpinq)
@@ -69,3 +69,0 @@
-+	if (eth_link_speed)
-+		port->dev_conf.link_speeds = eth_link_speed;
-+
@@ -116 +114 @@
-@@ -1455,30 +1505,12 @@ init_config(void)
+@@ -1436,30 +1483,12 @@ init_config(void)
@@ -151 +149 @@
-@@ -1486,48 +1518,14 @@ init_config(void)
+@@ -1467,45 +1496,14 @@ init_config(void)
@@ -168,3 +165,0 @@
--		if (eth_link_speed)
--			port->dev_conf.link_speeds = eth_link_speed;
--
@@ -205 +200 @@
-@@ -1610,21 +1608,8 @@ init_config(void)
+@@ -1592,21 +1590,8 @@ init_config(void)