patches for DPDK stable branches
* [dpdk-stable] [PATCH] app/testpmd: fix offloads for the newly attached port
@ 2021-06-19 15:40 Viacheslav Ovsiienko
  2021-07-01 14:01 ` [dpdk-stable] [dpdk-dev] " Andrew Rybchenko
                   ` (2 more replies)
  0 siblings, 3 replies; 6+ messages in thread
From: Viacheslav Ovsiienko @ 2021-06-19 15:40 UTC (permalink / raw)
  To: dev; +Cc: stable

For newly attached ports (with the "port attach" command) the
default offload settings configured from the application command
line were not applied, causing a port start failure right after
the attach. For example, if the scatter offload was configured on
the command line and rxpkts was configured for multiple segments,
starting the newly attached port failed because the scatter
offload was not enabled in the new port settings. Add the missing
code to apply the offloads to the new device and its queues.
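
A minimal reproduction sketch (the PCI addresses, segment sizes and
the attached port number below are only illustrative):

    dpdk-testpmd -a 0000:03:00.0 -- -i --enable-scatter --rxpkts=512,1024
    testpmd> port attach 0000:03:00.1
    testpmd> port start 1

Without this fix the attached port comes up with empty offload
settings and "port start" fails; with it, the command-line defaults
are applied to the new port and its queues as well.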

Cc: stable@dpdk.org
Fixes: c9cce42876f5 ("ethdev: remove deprecated attach/detach functions")

Signed-off-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
---
 app/test-pmd/testpmd.c | 32 ++++++++++++++++++++++++++++++++
 1 file changed, 32 insertions(+)

diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c
index 8ed1b97dec..b4ec182423 100644
--- a/app/test-pmd/testpmd.c
+++ b/app/test-pmd/testpmd.c
@@ -1599,6 +1599,7 @@ reconfig(portid_t new_port_id, unsigned socket_id)
 {
 	struct rte_port *port;
 	int ret;
+	int i;
 
 	/* Reconfiguration of Ethernet ports. */
 	port = &ports[new_port_id];
@@ -1611,7 +1612,38 @@ reconfig(portid_t new_port_id, unsigned socket_id)
 	port->need_reconfig = 1;
 	port->need_reconfig_queues = 1;
 	port->socket_id = socket_id;
+	port->tx_metadata = 0;
+
+	/* Apply default TxRx configuration to the port */
+	port->dev_conf.txmode = tx_mode;
+	port->dev_conf.rxmode = rx_mode;
+
+	if (!(port->dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE))
+		port->dev_conf.txmode.offloads &=
+					~DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+
+	/* Apply Rx offloads configuration */
+	for (i = 0; i < port->dev_info.max_rx_queues; i++)
+		port->rx_conf[i].offloads = port->dev_conf.rxmode.offloads;
+	/* Apply Tx offloads configuration */
+	for (i = 0; i < port->dev_info.max_tx_queues; i++)
+		port->tx_conf[i].offloads = port->dev_conf.txmode.offloads;
+
+	/* Check for maximum number of segments per MTU. Accordingly
+	 * update the mbuf data size.
+	 */
+	if (port->dev_info.rx_desc_lim.nb_mtu_seg_max != UINT16_MAX &&
+	    port->dev_info.rx_desc_lim.nb_mtu_seg_max != 0) {
+		uint16_t data_size = rx_mode.max_rx_pkt_len /
+				port->dev_info.rx_desc_lim.nb_mtu_seg_max;
 
+		if ((data_size + RTE_PKTMBUF_HEADROOM) > mbuf_data_size[0]) {
+			mbuf_data_size[0] = data_size + RTE_PKTMBUF_HEADROOM;
+			TESTPMD_LOG(WARNING,
+			    "Adjusted mbuf size of the first segment %hu\n",
+			    mbuf_data_size[0]);
+		}
+	}
 	init_port_config();
 }
 
-- 
2.18.1



* Re: [dpdk-stable] [dpdk-dev] [PATCH] app/testpmd: fix offloads for the newly attached port
  2021-06-19 15:40 [dpdk-stable] [PATCH] app/testpmd: fix offloads for the newly attached port Viacheslav Ovsiienko
@ 2021-07-01 14:01 ` Andrew Rybchenko
  2021-07-12 10:24 ` [dpdk-stable] [PATCH v2] " Viacheslav Ovsiienko
  2021-07-12 12:40 ` [dpdk-stable] [PATCH v3] " Viacheslav Ovsiienko
  2 siblings, 0 replies; 6+ messages in thread
From: Andrew Rybchenko @ 2021-07-01 14:01 UTC (permalink / raw)
  To: Viacheslav Ovsiienko, dev; +Cc: stable

On 6/19/21 6:40 PM, Viacheslav Ovsiienko wrote:
> For newly attached ports (with the "port attach" command) the
> default offload settings configured from the application command
> line were not applied, causing a port start failure right after
> the attach. For example, if the scatter offload was configured on
> the command line and rxpkts was configured for multiple segments,
> starting the newly attached port failed because the scatter
> offload was not enabled in the new port settings. Add the missing
> code to apply the offloads to the new device and its queues.
> 
> Cc: stable@dpdk.org
> Fixes: c9cce42876f5 ("ethdev: remove deprecated attach/detach functions")

The two lines above should be swapped.
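
That is, the conventional order puts the Fixes tag first (the same
two tags, just reordered):

Fixes: c9cce42876f5 ("ethdev: remove deprecated attach/detach functions")
Cc: stable@dpdk.org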

> 
> Signed-off-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>

The patch duplicates too much of the init_config() function.
Please factor out a helper function to do the job and use it
in both init_config() and reconfig().
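
Roughly the following shape, as a sketch only (v2 below implements
exactly this, naming the helper init_config_port_offloads()):

static void
init_config_port_offloads(portid_t pid, uint32_t socket_id)
{
	/* apply tx_mode/rx_mode defaults, per-queue offloads,
	 * need_reconfig flags and the mbuf size check here */
}

static void
init_config(void)
{
	portid_t pid;

	RTE_ETH_FOREACH_DEV(pid)
		init_config_port_offloads(pid, 0 /* per-port NUMA socket */);
	/* mempool / GRO / GSO setup stays in init_config() */
}

void
reconfig(portid_t new_port_id, unsigned socket_id)
{
	init_config_port_offloads(new_port_id, socket_id);
	init_port_config();
}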


* [dpdk-stable] [PATCH v2] app/testpmd: fix offloads for the newly attached port
  2021-06-19 15:40 [dpdk-stable] [PATCH] app/testpmd: fix offloads for the newly attached port Viacheslav Ovsiienko
  2021-07-01 14:01 ` [dpdk-stable] [dpdk-dev] " Andrew Rybchenko
@ 2021-07-12 10:24 ` Viacheslav Ovsiienko
  2021-07-12 12:40 ` [dpdk-stable] [PATCH v3] " Viacheslav Ovsiienko
  2 siblings, 0 replies; 6+ messages in thread
From: Viacheslav Ovsiienko @ 2021-07-12 10:24 UTC (permalink / raw)
  To: dev; +Cc: aman.deep.singh, arybchenko, xiaoyun.li, stable

For newly attached ports (with the "port attach" command) the
default offload settings configured from the application command
line were not applied, causing a port start failure right after
the attach.

For example, if the scatter offload was configured on the command
line and rxpkts was configured for multiple segments, starting the
newly attached port failed because the scatter offload was not
enabled in the new port settings. Add the missing code to apply
the offloads to the new device and its queues.

The new local routine init_config_port_offloads() is introduced,
encapsulating the shared part of the port offload initialization
code.

Cc: stable@dpdk.org
Fixes: c9cce42876f5 ("ethdev: remove deprecated attach/detach functions")

Signed-off-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
---

 v1: http://patches.dpdk.org/project/dpdk/patch/20210619154012.27295-1-viacheslavo@nvidia.com/
 v2: comments addressed - common code is presented as dedicated
     routine 

 app/test-pmd/testpmd.c | 142 +++++++++++++++++++----------------------
 1 file changed, 65 insertions(+), 77 deletions(-)

diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c
index 1cdd3cdd12..55aa8e504b 100644
--- a/app/test-pmd/testpmd.c
+++ b/app/test-pmd/testpmd.c
@@ -1417,21 +1417,74 @@ check_nb_hairpinq(queueid_t hairpinq)
 	return 0;
 }
 
+static void
+init_config_port_offloads(portid_t pid, uint32_t socket_id)
+{
+	struct rte_port *port = &ports[pid];
+	uint16_t data_size;
+	int ret;
+	int i;
+
+	port->dev_conf.txmode = tx_mode;
+	port->dev_conf.rxmode = rx_mode;
+
+	ret = eth_dev_info_get_print_err(pid, &port->dev_info);
+	if (ret != 0)
+		rte_exit(EXIT_FAILURE, "rte_eth_dev_info_get() failed\n");
+
+	ret = update_jumbo_frame_offload(pid);
+	if (ret != 0)
+		printf("Updating jumbo frame offload failed for port %u\n",
+			pid);
+
+	if (!(port->dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE))
+		port->dev_conf.txmode.offloads &=
+			~DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+
+	/* Apply Rx offloads configuration */
+	for (i = 0; i < port->dev_info.max_rx_queues; i++)
+		port->rx_conf[i].offloads = port->dev_conf.rxmode.offloads;
+	/* Apply Tx offloads configuration */
+	for (i = 0; i < port->dev_info.max_tx_queues; i++)
+		port->tx_conf[i].offloads = port->dev_conf.txmode.offloads;
+
+	if (eth_link_speed)
+		port->dev_conf.link_speeds = eth_link_speed;
+
+	/* set flag to initialize port/queue */
+	port->need_reconfig = 1;
+	port->need_reconfig_queues = 1;
+	port->socket_id = socket_id;
+	port->tx_metadata = 0;
+
+	/*
+	 * Check for maximum number of segments per MTU.
+	 * Accordingly update the mbuf data size.
+	 */
+	if (port->dev_info.rx_desc_lim.nb_mtu_seg_max != UINT16_MAX &&
+	    port->dev_info.rx_desc_lim.nb_mtu_seg_max != 0) {
+		data_size = rx_mode.max_rx_pkt_len /
+			port->dev_info.rx_desc_lim.nb_mtu_seg_max;
+
+		if ((data_size + RTE_PKTMBUF_HEADROOM) > mbuf_data_size[0]) {
+			mbuf_data_size[0] = data_size + RTE_PKTMBUF_HEADROOM;
+			TESTPMD_LOG(WARNING,
+				    "Configured mbuf size of the first segment %hu\n",
+				    mbuf_data_size[0]);
+		}
+	}
+}
+
 static void
 init_config(void)
 {
 	portid_t pid;
-	struct rte_port *port;
 	struct rte_mempool *mbp;
 	unsigned int nb_mbuf_per_pool;
 	lcoreid_t  lc_id;
 	uint8_t port_per_socket[RTE_MAX_NUMA_NODES];
 	struct rte_gro_param gro_param;
 	uint32_t gso_types;
-	uint16_t data_size;
-	bool warning = 0;
-	int k;
-	int ret;
 
 	memset(port_per_socket,0,RTE_MAX_NUMA_NODES);
 
@@ -1455,30 +1508,14 @@ init_config(void)
 	}
 
 	RTE_ETH_FOREACH_DEV(pid) {
-		port = &ports[pid];
-		/* Apply default TxRx configuration for all ports */
-		port->dev_conf.txmode = tx_mode;
-		port->dev_conf.rxmode = rx_mode;
-
-		ret = eth_dev_info_get_print_err(pid, &port->dev_info);
-		if (ret != 0)
-			rte_exit(EXIT_FAILURE,
-				 "rte_eth_dev_info_get() failed\n");
-
-		ret = update_jumbo_frame_offload(pid);
-		if (ret != 0)
-			printf("Updating jumbo frame offload failed for port %u\n",
-				pid);
+		uint32_t socket_id;
 
-		if (!(port->dev_info.tx_offload_capa &
-		      DEV_TX_OFFLOAD_MBUF_FAST_FREE))
-			port->dev_conf.txmode.offloads &=
-				~DEV_TX_OFFLOAD_MBUF_FAST_FREE;
 		if (numa_support) {
-			if (port_numa[pid] != NUMA_NO_CONFIG)
+			socket_id = port_numa[pid];
+			if (socket_id != NUMA_NO_CONFIG)
 				port_per_socket[port_numa[pid]]++;
 			else {
-				uint32_t socket_id = rte_eth_dev_socket_id(pid);
+				socket_id = rte_eth_dev_socket_id(pid);
 
 				/*
 				 * if socket_id is invalid,
@@ -1489,45 +1526,9 @@ init_config(void)
 				port_per_socket[socket_id]++;
 			}
 		}
-
-		/* Apply Rx offloads configuration */
-		for (k = 0; k < port->dev_info.max_rx_queues; k++)
-			port->rx_conf[k].offloads =
-				port->dev_conf.rxmode.offloads;
-		/* Apply Tx offloads configuration */
-		for (k = 0; k < port->dev_info.max_tx_queues; k++)
-			port->tx_conf[k].offloads =
-				port->dev_conf.txmode.offloads;
-
-		if (eth_link_speed)
-			port->dev_conf.link_speeds = eth_link_speed;
-
-		/* set flag to initialize port/queue */
-		port->need_reconfig = 1;
-		port->need_reconfig_queues = 1;
-		port->tx_metadata = 0;
-
-		/* Check for maximum number of segments per MTU. Accordingly
-		 * update the mbuf data size.
-		 */
-		if (port->dev_info.rx_desc_lim.nb_mtu_seg_max != UINT16_MAX &&
-				port->dev_info.rx_desc_lim.nb_mtu_seg_max != 0) {
-			data_size = rx_mode.max_rx_pkt_len /
-				port->dev_info.rx_desc_lim.nb_mtu_seg_max;
-
-			if ((data_size + RTE_PKTMBUF_HEADROOM) >
-							mbuf_data_size[0]) {
-				mbuf_data_size[0] = data_size +
-						 RTE_PKTMBUF_HEADROOM;
-				warning = 1;
-			}
-		}
+		/* Apply default TxRx configuration for all ports */
+		init_config_port_offloads(pid, socket_id);
 	}
-
-	if (warning)
-		TESTPMD_LOG(WARNING,
-			    "Configured mbuf size of the first segment %hu\n",
-			    mbuf_data_size[0]);
 	/*
 	 * Create pools of mbuf.
 	 * If NUMA support is disabled, create a single pool of mbuf in
@@ -1610,21 +1611,8 @@ init_config(void)
 void
 reconfig(portid_t new_port_id, unsigned socket_id)
 {
-	struct rte_port *port;
-	int ret;
-
 	/* Reconfiguration of Ethernet ports. */
-	port = &ports[new_port_id];
-
-	ret = eth_dev_info_get_print_err(new_port_id, &port->dev_info);
-	if (ret != 0)
-		return;
-
-	/* set flag to initialize port/queue */
-	port->need_reconfig = 1;
-	port->need_reconfig_queues = 1;
-	port->socket_id = socket_id;
-
+	init_config_port_offloads(new_port_id, socket_id);
 	init_port_config();
 }
 
-- 
2.18.1



* [dpdk-stable] [PATCH v3] app/testpmd: fix offloads for the newly attached port
  2021-06-19 15:40 [dpdk-stable] [PATCH] app/testpmd: fix offloads for the newly attached port Viacheslav Ovsiienko
  2021-07-01 14:01 ` [dpdk-stable] [dpdk-dev] " Andrew Rybchenko
  2021-07-12 10:24 ` [dpdk-stable] [PATCH v2] " Viacheslav Ovsiienko
@ 2021-07-12 12:40 ` Viacheslav Ovsiienko
  2021-07-13  5:37   ` Li, Xiaoyun
  2 siblings, 1 reply; 6+ messages in thread
From: Viacheslav Ovsiienko @ 2021-07-12 12:40 UTC (permalink / raw)
  To: dev; +Cc: aman.deep.singh, arybchenko, stable

For newly attached ports (with the "port attach" command) the
default offload settings configured from the application command
line were not applied, causing a port start failure right after
the attach.

For example, if the scatter offload was configured on the command
line and rxpkts was configured for multiple segments, starting the
newly attached port failed because the scatter offload was not
enabled in the new port settings. Add the missing code to apply
the offloads to the new device and its queues.

The new local routine init_config_port_offloads() is introduced,
encapsulating the shared part of the port offload initialization
code.

Cc: stable@dpdk.org
Fixes: c9cce42876f5 ("ethdev: remove deprecated attach/detach functions")

Signed-off-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
---
v1: http://patches.dpdk.org/project/dpdk/patch/20210619154012.27295-1-viacheslavo@nvidia.com/
v2: http://patches.dpdk.org/project/dpdk/patch/20210712102440.12491-1-viacheslavo@nvidia.com/
    - comments addressed - common code is presented as dedicated routine
v3: - uninitialized socket_id issue (reported by CI)
    - removed dead code for port_per_socket from init_config()

 app/test-pmd/testpmd.c | 151 +++++++++++++++++++----------------------
 1 file changed, 68 insertions(+), 83 deletions(-)

diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c
index 1cdd3cdd12..a48f70962f 100644
--- a/app/test-pmd/testpmd.c
+++ b/app/test-pmd/testpmd.c
@@ -1417,23 +1417,73 @@ check_nb_hairpinq(queueid_t hairpinq)
 	return 0;
 }
 
+static void
+init_config_port_offloads(portid_t pid, uint32_t socket_id)
+{
+	struct rte_port *port = &ports[pid];
+	uint16_t data_size;
+	int ret;
+	int i;
+
+	port->dev_conf.txmode = tx_mode;
+	port->dev_conf.rxmode = rx_mode;
+
+	ret = eth_dev_info_get_print_err(pid, &port->dev_info);
+	if (ret != 0)
+		rte_exit(EXIT_FAILURE, "rte_eth_dev_info_get() failed\n");
+
+	ret = update_jumbo_frame_offload(pid);
+	if (ret != 0)
+		printf("Updating jumbo frame offload failed for port %u\n",
+			pid);
+
+	if (!(port->dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE))
+		port->dev_conf.txmode.offloads &=
+			~DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+
+	/* Apply Rx offloads configuration */
+	for (i = 0; i < port->dev_info.max_rx_queues; i++)
+		port->rx_conf[i].offloads = port->dev_conf.rxmode.offloads;
+	/* Apply Tx offloads configuration */
+	for (i = 0; i < port->dev_info.max_tx_queues; i++)
+		port->tx_conf[i].offloads = port->dev_conf.txmode.offloads;
+
+	if (eth_link_speed)
+		port->dev_conf.link_speeds = eth_link_speed;
+
+	/* set flag to initialize port/queue */
+	port->need_reconfig = 1;
+	port->need_reconfig_queues = 1;
+	port->socket_id = socket_id;
+	port->tx_metadata = 0;
+
+	/*
+	 * Check for maximum number of segments per MTU.
+	 * Accordingly update the mbuf data size.
+	 */
+	if (port->dev_info.rx_desc_lim.nb_mtu_seg_max != UINT16_MAX &&
+	    port->dev_info.rx_desc_lim.nb_mtu_seg_max != 0) {
+		data_size = rx_mode.max_rx_pkt_len /
+			port->dev_info.rx_desc_lim.nb_mtu_seg_max;
+
+		if ((data_size + RTE_PKTMBUF_HEADROOM) > mbuf_data_size[0]) {
+			mbuf_data_size[0] = data_size + RTE_PKTMBUF_HEADROOM;
+			TESTPMD_LOG(WARNING,
+				    "Configured mbuf size of the first segment %hu\n",
+				    mbuf_data_size[0]);
+		}
+	}
+}
+
 static void
 init_config(void)
 {
 	portid_t pid;
-	struct rte_port *port;
 	struct rte_mempool *mbp;
 	unsigned int nb_mbuf_per_pool;
 	lcoreid_t  lc_id;
-	uint8_t port_per_socket[RTE_MAX_NUMA_NODES];
 	struct rte_gro_param gro_param;
 	uint32_t gso_types;
-	uint16_t data_size;
-	bool warning = 0;
-	int k;
-	int ret;
-
-	memset(port_per_socket,0,RTE_MAX_NUMA_NODES);
 
 	/* Configuration of logical cores. */
 	fwd_lcores = rte_zmalloc("testpmd: fwd_lcores",
@@ -1455,30 +1505,12 @@ init_config(void)
 	}
 
 	RTE_ETH_FOREACH_DEV(pid) {
-		port = &ports[pid];
-		/* Apply default TxRx configuration for all ports */
-		port->dev_conf.txmode = tx_mode;
-		port->dev_conf.rxmode = rx_mode;
+		uint32_t socket_id;
 
-		ret = eth_dev_info_get_print_err(pid, &port->dev_info);
-		if (ret != 0)
-			rte_exit(EXIT_FAILURE,
-				 "rte_eth_dev_info_get() failed\n");
-
-		ret = update_jumbo_frame_offload(pid);
-		if (ret != 0)
-			printf("Updating jumbo frame offload failed for port %u\n",
-				pid);
-
-		if (!(port->dev_info.tx_offload_capa &
-		      DEV_TX_OFFLOAD_MBUF_FAST_FREE))
-			port->dev_conf.txmode.offloads &=
-				~DEV_TX_OFFLOAD_MBUF_FAST_FREE;
 		if (numa_support) {
-			if (port_numa[pid] != NUMA_NO_CONFIG)
-				port_per_socket[port_numa[pid]]++;
-			else {
-				uint32_t socket_id = rte_eth_dev_socket_id(pid);
+			socket_id = port_numa[pid];
+			if (port_numa[pid] == NUMA_NO_CONFIG) {
+				socket_id = rte_eth_dev_socket_id(pid);
 
 				/*
 				 * if socket_id is invalid,
@@ -1486,48 +1518,14 @@ init_config(void)
 				 */
 				if (check_socket_id(socket_id) < 0)
 					socket_id = socket_ids[0];
-				port_per_socket[socket_id]++;
-			}
-		}
-
-		/* Apply Rx offloads configuration */
-		for (k = 0; k < port->dev_info.max_rx_queues; k++)
-			port->rx_conf[k].offloads =
-				port->dev_conf.rxmode.offloads;
-		/* Apply Tx offloads configuration */
-		for (k = 0; k < port->dev_info.max_tx_queues; k++)
-			port->tx_conf[k].offloads =
-				port->dev_conf.txmode.offloads;
-
-		if (eth_link_speed)
-			port->dev_conf.link_speeds = eth_link_speed;
-
-		/* set flag to initialize port/queue */
-		port->need_reconfig = 1;
-		port->need_reconfig_queues = 1;
-		port->tx_metadata = 0;
-
-		/* Check for maximum number of segments per MTU. Accordingly
-		 * update the mbuf data size.
-		 */
-		if (port->dev_info.rx_desc_lim.nb_mtu_seg_max != UINT16_MAX &&
-				port->dev_info.rx_desc_lim.nb_mtu_seg_max != 0) {
-			data_size = rx_mode.max_rx_pkt_len /
-				port->dev_info.rx_desc_lim.nb_mtu_seg_max;
-
-			if ((data_size + RTE_PKTMBUF_HEADROOM) >
-							mbuf_data_size[0]) {
-				mbuf_data_size[0] = data_size +
-						 RTE_PKTMBUF_HEADROOM;
-				warning = 1;
 			}
+		} else {
+			socket_id = (socket_num == UMA_NO_CONFIG) ?
+				    0 : socket_num;
 		}
+		/* Apply default TxRx configuration for all ports */
+		init_config_port_offloads(pid, socket_id);
 	}
-
-	if (warning)
-		TESTPMD_LOG(WARNING,
-			    "Configured mbuf size of the first segment %hu\n",
-			    mbuf_data_size[0]);
 	/*
 	 * Create pools of mbuf.
 	 * If NUMA support is disabled, create a single pool of mbuf in
@@ -1610,21 +1608,8 @@ init_config(void)
 void
 reconfig(portid_t new_port_id, unsigned socket_id)
 {
-	struct rte_port *port;
-	int ret;
-
 	/* Reconfiguration of Ethernet ports. */
-	port = &ports[new_port_id];
-
-	ret = eth_dev_info_get_print_err(new_port_id, &port->dev_info);
-	if (ret != 0)
-		return;
-
-	/* set flag to initialize port/queue */
-	port->need_reconfig = 1;
-	port->need_reconfig_queues = 1;
-	port->socket_id = socket_id;
-
+	init_config_port_offloads(new_port_id, socket_id);
 	init_port_config();
 }
 
-- 
2.18.1



* Re: [dpdk-stable] [PATCH v3] app/testpmd: fix offloads for the newly attached port
  2021-07-12 12:40 ` [dpdk-stable] [PATCH v3] " Viacheslav Ovsiienko
@ 2021-07-13  5:37   ` Li, Xiaoyun
  2021-07-13  9:50     ` [dpdk-stable] [dpdk-dev] " Andrew Rybchenko
  0 siblings, 1 reply; 6+ messages in thread
From: Li, Xiaoyun @ 2021-07-13  5:37 UTC (permalink / raw)
  To: Viacheslav Ovsiienko, dev; +Cc: Singh, Aman Deep, arybchenko, stable



> -----Original Message-----
> From: stable <stable-bounces@dpdk.org> On Behalf Of Viacheslav Ovsiienko
> Sent: Monday, July 12, 2021 20:41
> To: dev@dpdk.org
> Cc: Singh, Aman Deep <aman.deep.singh@intel.com>;
> arybchenko@solarflare.com; stable@dpdk.org
> Subject: [dpdk-stable] [PATCH v3] app/testpmd: fix offloads for the newly
> attached port
> 
> For newly attached ports (with the "port attach" command) the default offload
> settings configured from the application command line were not applied, causing
> a port start failure right after the attach.
> 
> For example, if the scatter offload was configured on the command line and
> rxpkts was configured for multiple segments, starting the newly attached port
> failed because the scatter offload was not enabled in the new port settings.
> Add the missing code to apply the offloads to the new device and its queues.
> 
> The new local routine init_config_port_offloads() is introduced, encapsulating
> the shared part of the port offload initialization code.
> 
> Cc: stable@dpdk.org
> Fixes: c9cce42876f5 ("ethdev: remove deprecated attach/detach functions")
> 
> Signed-off-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
> ---
> v1: http://patches.dpdk.org/project/dpdk/patch/20210619154012.27295-1-
> viacheslavo@nvidia.com/
> v2: http://patches.dpdk.org/project/dpdk/patch/20210712102440.12491-1-
> viacheslavo@nvidia.com/
>     - comments addressed - common code is presented as dedicated routine
> v3: - uninitialized socket_id issue (reported by CI)
>     - removed dead code for port_per_socket from init_config()
> 
>  app/test-pmd/testpmd.c | 151 +++++++++++++++++++----------------------
>  1 file changed, 68 insertions(+), 83 deletions(-)
> 
Acked-by: Xiaoyun Li <xiaoyun.li@intel.com>


* Re: [dpdk-stable] [dpdk-dev] [PATCH v3] app/testpmd: fix offloads for the newly attached port
  2021-07-13  5:37   ` Li, Xiaoyun
@ 2021-07-13  9:50     ` Andrew Rybchenko
  0 siblings, 0 replies; 6+ messages in thread
From: Andrew Rybchenko @ 2021-07-13  9:50 UTC (permalink / raw)
  To: Li, Xiaoyun, Viacheslav Ovsiienko, dev
  Cc: Singh, Aman Deep, arybchenko, stable

On 7/13/21 8:37 AM, Li, Xiaoyun wrote:
> 
> 
>> -----Original Message-----
>> From: stable <stable-bounces@dpdk.org> On Behalf Of Viacheslav Ovsiienko
>> Sent: Monday, July 12, 2021 20:41
>> To: dev@dpdk.org
>> Cc: Singh, Aman Deep <aman.deep.singh@intel.com>;
>> arybchenko@solarflare.com; stable@dpdk.org
>> Subject: [dpdk-stable] [PATCH v3] app/testpmd: fix offloads for the newly
>> attached port
>>
>> For newly attached ports (with the "port attach" command) the default offload
>> settings configured from the application command line were not applied, causing
>> a port start failure right after the attach.
>>
>> For example, if the scatter offload was configured on the command line and
>> rxpkts was configured for multiple segments, starting the newly attached port
>> failed because the scatter offload was not enabled in the new port settings.
>> Add the missing code to apply the offloads to the new device and its queues.
>>
>> The new local routine init_config_port_offloads() is introduced, encapsulating
>> the shared part of the port offload initialization code.
>>
>> Cc: stable@dpdk.org
>> Fixes: c9cce42876f5 ("ethdev: remove deprecated attach/detach functions")
>>
>> Signed-off-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
>> ---
>> v1: http://patches.dpdk.org/project/dpdk/patch/20210619154012.27295-1-
>> viacheslavo@nvidia.com/
>> v2: http://patches.dpdk.org/project/dpdk/patch/20210712102440.12491-1-
>> viacheslavo@nvidia.com/
>>     - comments addressed - common code is presented as dedicated routine
>> v3: - uninitialized socket_id issue (reported by CI)
>>     - removed dead code for port_per_socket from init_config()
>>
>>  app/test-pmd/testpmd.c | 151 +++++++++++++++++++----------------------
>>  1 file changed, 68 insertions(+), 83 deletions(-)
>>
> Acked-by: Xiaoyun Li <xiaoyun.li@intel.com>
> 

Applied, thanks.

