patches for DPDK stable branches
* [dpdk-stable] [PATCH 16.11 0/2] net/bonding backports
@ 2018-10-07 20:22 Chas Williams
  2018-10-07 20:22 ` [dpdk-stable] [PATCH 16.11 1/2] net/bonding: reduce slave starvation on Rx poll Chas Williams
                   ` (2 more replies)
  0 siblings, 3 replies; 4+ messages in thread
From: Chas Williams @ 2018-10-07 20:22 UTC (permalink / raw)
  To: stable; +Cc: bluca

"net/bonding: fix Rx slave fairness" builds on work done in "net/bonding:
reduce slave starvation on Rx poll", which makes the latter a prerequisite.


* [dpdk-stable] [PATCH 16.11 1/2] net/bonding: reduce slave starvation on Rx poll
  2018-10-07 20:22 [dpdk-stable] [PATCH 16.11 0/2] net/bonding backports Chas Williams
@ 2018-10-07 20:22 ` Chas Williams
  2018-10-07 20:22 ` [dpdk-stable] [PATCH 16.11 2/2] net/bonding: fix Rx slave fairness Chas Williams
  2018-10-08  9:00 ` [dpdk-stable] [PATCH 16.11 0/2] net/bonding backports Luca Boccassi
  2 siblings, 0 replies; 4+ messages in thread
From: Chas Williams @ 2018-10-07 20:22 UTC (permalink / raw)
  To: stable; +Cc: bluca, Keith Wiles

From: Keith Wiles <keith.wiles@intel.com>

[ upstream commit ae2a04864a9a3878f74e66e3ae0fdebe77223a09 ]

When polling the bonded ports for Rx packets, the old driver would
always start with the first slave in the list. If the requested
number of packets is filled by the first port in a two-port
configuration, the second port could be starved or suffer a larger
number of missed-packet errors.

The code now attempts to start with a different slave on each Rx poll
to help eliminate starvation of slave ports. The effect of the
previous code was much lower performance with two slaves in the bond
than with just one slave.

The performance drop was detected when the application could not poll
the rings of Rx packets fast enough and the packets per second for
two or more ports were at the threshold throughput of the application.
At this threshold the slaves would see very little or no drops in
the one-slave case. Enabling the second slave then produced a large
drop rate on the two-slave bond and a reduction in throughput.

Signed-off-by: Keith Wiles <keith.wiles@intel.com>
Acked-by: Declan Doherty <declan.doherty@intel.com>
---
 drivers/net/bonding/rte_eth_bond_pmd.c     | 21 +++++++++++++++------
 drivers/net/bonding/rte_eth_bond_private.h |  3 ++-
 2 files changed, 17 insertions(+), 7 deletions(-)

diff --git a/drivers/net/bonding/rte_eth_bond_pmd.c b/drivers/net/bonding/rte_eth_bond_pmd.c
index c672f0560..15e893240 100644
--- a/drivers/net/bonding/rte_eth_bond_pmd.c
+++ b/drivers/net/bonding/rte_eth_bond_pmd.c
@@ -1,7 +1,7 @@
 /*-
  *   BSD LICENSE
  *
- *   Copyright(c) 2010-2015 Intel Corporation. All rights reserved.
+ *   Copyright(c) 2010-2017 Intel Corporation. All rights reserved.
  *   All rights reserved.
  *
  *   Redistribution and use in source and binary forms, with or without
@@ -146,7 +146,7 @@ bond_ethdev_rx_burst_8023ad(void *queue, struct rte_mbuf **bufs,
 	const uint16_t ether_type_slow_be = rte_be_to_cpu_16(ETHER_TYPE_SLOW);
 	uint16_t num_rx_total = 0;	/* Total number of received packets */
 	uint8_t slaves[RTE_MAX_ETHPORTS];
-	uint8_t slave_count;
+	uint8_t slave_count, idx;
 
 	uint8_t collecting;  /* current slave collecting status */
 	const uint8_t promisc = internals->promiscuous_en;
@@ -160,12 +160,18 @@ bond_ethdev_rx_burst_8023ad(void *queue, struct rte_mbuf **bufs,
 	memcpy(slaves, internals->active_slaves,
 			sizeof(internals->active_slaves[0]) * slave_count);
 
+	idx = internals->active_slave;
+	if (idx >= slave_count) {
+		internals->active_slave = 0;
+		idx = 0;
+	}
 	for (i = 0; i < slave_count && num_rx_total < nb_pkts; i++) {
 		j = num_rx_total;
-		collecting = ACTOR_STATE(&mode_8023ad_ports[slaves[i]], COLLECTING);
+		collecting = ACTOR_STATE(&mode_8023ad_ports[slaves[idx]],
+					 COLLECTING);
 
 		/* Read packets from this slave */
-		num_rx_total += rte_eth_rx_burst(slaves[i], bd_rx_q->queue_id,
+		num_rx_total += rte_eth_rx_burst(slaves[idx], bd_rx_q->queue_id,
 				&bufs[num_rx_total], nb_pkts - num_rx_total);
 
 		for (k = j; k < 2 && k < num_rx_total; k++)
@@ -188,8 +194,8 @@ bond_ethdev_rx_burst_8023ad(void *queue, struct rte_mbuf **bufs,
 					!is_same_ether_addr(&bond_mac, &hdr->d_addr)))) {
 
 				if (hdr->ether_type == ether_type_slow_be) {
-					bond_mode_8023ad_handle_slow_pkt(internals, slaves[i],
-						bufs[j]);
+					bond_mode_8023ad_handle_slow_pkt(
+					    internals, slaves[idx], bufs[j]);
 				} else
 					rte_pktmbuf_free(bufs[j]);
 
@@ -202,8 +208,11 @@ bond_ethdev_rx_burst_8023ad(void *queue, struct rte_mbuf **bufs,
 			} else
 				j++;
 		}
+		if (unlikely(++idx == slave_count))
+			idx = 0;
 	}
 
+	internals->active_slave = idx;
 	return num_rx_total;
 }
 
diff --git a/drivers/net/bonding/rte_eth_bond_private.h b/drivers/net/bonding/rte_eth_bond_private.h
index d95d440b4..8c963ddb2 100644
--- a/drivers/net/bonding/rte_eth_bond_private.h
+++ b/drivers/net/bonding/rte_eth_bond_private.h
@@ -1,7 +1,7 @@
 /*-
  *   BSD LICENSE
  *
- *   Copyright(c) 2010-2015 Intel Corporation. All rights reserved.
+ *   Copyright(c) 2010-2017 Intel Corporation. All rights reserved.
  *   All rights reserved.
  *
  *   Redistribution and use in source and binary forms, with or without
@@ -144,6 +144,7 @@ struct bond_dev_private {
 	uint16_t nb_rx_queues;			/**< Total number of rx queues */
 	uint16_t nb_tx_queues;			/**< Total number of tx queues*/
 
+	uint8_t active_slave;		/**< Next active_slave to poll */
 	uint8_t active_slave_count;		/**< Number of active slaves */
 	uint8_t active_slaves[RTE_MAX_ETHPORTS];	/**< Active slave list */
 
-- 
2.14.4


* [dpdk-stable] [PATCH 16.11 2/2] net/bonding: fix Rx slave fairness
  2018-10-07 20:22 [dpdk-stable] [PATCH 16.11 0/2] net/bonding backports Chas Williams
  2018-10-07 20:22 ` [dpdk-stable] [PATCH 16.11 1/2] net/bonding: reduce slave starvation on Rx poll Chas Williams
@ 2018-10-07 20:22 ` Chas Williams
  2018-10-08  9:00 ` [dpdk-stable] [PATCH 16.11 0/2] net/bonding backports Luca Boccassi
  2 siblings, 0 replies; 4+ messages in thread
From: Chas Williams @ 2018-10-07 20:22 UTC (permalink / raw)
  To: stable; +Cc: bluca, Chas Williams

From: Chas Williams <chas3@att.com>

[ upstream commit e1110e97764873de0af28e6fa11dcd9c170d4e53 ]

Some PMDs, especially ones with vector receives, require a minimum number
of receive buffers in order to receive any packets.  If the first slave
read leaves less than this number available, a read from the next slave
may return 0 implying that the slave doesn't have any packets which
results in skipping over that slave as the next active slave.

To fix this, implement a round robin of the slaves during receive that
advances to the next slave only at the end of each receive burst.
This also provides additional fairness in the other bonding Rx burst
routines.

Fixes: 2efb58cbab6e ("bond: new link bonding library")
Cc: stable@dpdk.org

Signed-off-by: Chas Williams <chas3@att.com>
Acked-by: Luca Boccassi <bluca@debian.org>
Acked-by: Matan Azrad <matan@mellanox.com>
---
 drivers/net/bonding/rte_eth_bond_pmd.c | 30 +++++++++++++++++++-----------
 1 file changed, 19 insertions(+), 11 deletions(-)

diff --git a/drivers/net/bonding/rte_eth_bond_pmd.c b/drivers/net/bonding/rte_eth_bond_pmd.c
index 15e893240..87a247de3 100644
--- a/drivers/net/bonding/rte_eth_bond_pmd.c
+++ b/drivers/net/bonding/rte_eth_bond_pmd.c
@@ -83,28 +83,34 @@ bond_ethdev_rx_burst(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 {
 	struct bond_dev_private *internals;
 
-	uint16_t num_rx_slave = 0;
 	uint16_t num_rx_total = 0;
-
+	uint16_t slave_count;
+	uint16_t active_slave;
 	int i;
 
 	/* Cast to structure, containing bonded device's port id and queue id */
 	struct bond_rx_queue *bd_rx_q = (struct bond_rx_queue *)queue;
-
 	internals = bd_rx_q->dev_private;
+	slave_count = internals->active_slave_count;
+	active_slave = internals->active_slave;
 
+	for (i = 0; i < slave_count && nb_pkts; i++) {
+		uint16_t num_rx_slave;
 
-	for (i = 0; i < internals->active_slave_count && nb_pkts; i++) {
 		/* Offset of pointer to *bufs increases as packets are received
 		 * from other slaves */
-		num_rx_slave = rte_eth_rx_burst(internals->active_slaves[i],
-				bd_rx_q->queue_id, bufs + num_rx_total, nb_pkts);
-		if (num_rx_slave) {
-			num_rx_total += num_rx_slave;
-			nb_pkts -= num_rx_slave;
-		}
+		num_rx_slave =
+			rte_eth_rx_burst(internals->active_slaves[active_slave],
+					 bd_rx_q->queue_id,
+					 bufs + num_rx_total, nb_pkts);
+		num_rx_total += num_rx_slave;
+		nb_pkts -= num_rx_slave;
+		if (++active_slave == slave_count)
+			active_slave = 0;
 	}
 
+	if (++internals->active_slave == slave_count)
+		internals->active_slave = 0;
 	return num_rx_total;
 }
 
@@ -212,7 +218,9 @@ bond_ethdev_rx_burst_8023ad(void *queue, struct rte_mbuf **bufs,
 			idx = 0;
 	}
 
-	internals->active_slave = idx;
+	if (++internals->active_slave == slave_count)
+		internals->active_slave = 0;
+
 	return num_rx_total;
 }
 
-- 
2.14.4


* Re: [dpdk-stable] [PATCH 16.11 0/2] net/bonding backports
  2018-10-07 20:22 [dpdk-stable] [PATCH 16.11 0/2] net/bonding backports Chas Williams
  2018-10-07 20:22 ` [dpdk-stable] [PATCH 16.11 1/2] net/bonding: reduce slave starvation on Rx poll Chas Williams
  2018-10-07 20:22 ` [dpdk-stable] [PATCH 16.11 2/2] net/bonding: fix Rx slave fairness Chas Williams
@ 2018-10-08  9:00 ` Luca Boccassi
  2 siblings, 0 replies; 4+ messages in thread
From: Luca Boccassi @ 2018-10-08  9:00 UTC (permalink / raw)
  To: Chas Williams, stable

On Sun, 2018-10-07 at 16:22 -0400, Chas Williams wrote:
> "net/bonding: fix Rx slave fairness" builds on work done in
> "net/bonding:
> reduce slave starvation on Rx poll" so that makes it a prerequisite.

Thanks, applied and pushed

-- 
Kind regards,
Luca Boccassi


