DPDK patches and discussions
From: Chas Williams <3chas3@gmail.com>
To: dev@dpdk.org
Cc: declan.doherty@intel.com, matan@mellanox.com, ehkinzie@gmail.com,
	Chas Williams <chas3@att.com>,
	stable@dpdk.org
Subject: [dpdk-dev] [PATCH v2] net/bonding: fix RX slave fairness
Date: Thu, 20 Sep 2018 08:52:26 -0400	[thread overview]
Message-ID: <20180920125226.11904-1-3chas3@gmail.com> (raw)
In-Reply-To: <20180919154825.5183-1-3chas3@gmail.com>

From: Chas Williams <chas3@att.com>

Some PMDs, especially those with vector receive routines, require a
minimum number of receive buffers in order to receive any packets.  If
the first slave read leaves fewer than this number available, a read
from the next slave may return 0, implying that the slave has no
packets, which results in skipping over that slave as the next active
slave.
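
For example (a hypothetical illustration: the port ids, queue id, and
the 32-packet vector width are made up, not taken from any particular
PMD), the pre-patch receive loop can hit this:

#include <rte_ethdev.h>
#include <rte_mbuf.h>

/* Assume slave ports 0 and 1 both have packets queued, and the PMD's
 * vector receive returns 0 whenever asked for fewer than 32 buffers. */
static void
example(struct rte_mbuf **bufs)
{
	uint16_t nb_pkts = 48;
	uint16_t got;

	got = rte_eth_rx_burst(0, 0, bufs, nb_pkts);       /* e.g. 32  */
	nb_pkts -= got;                                    /* 16 left  */
	got = rte_eth_rx_burst(1, 0, bufs + 32, nb_pkts);  /* may be 0 */
	/* Slave 1 now looks idle, so it is skipped as the next active
	 * slave even though it has traffic pending. */
}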

To fix this, implement a round robin over the slaves during receive
that advances to the next slave only at the end of each receive burst.
The same change is applied to the other bonding RX burst routines to
provide additional fairness in their processing as well.  A simplified
sketch of the resulting loop follows.
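
The sketch below restates the new receive logic as a standalone
function (names are simplified for illustration; the real code is in
bond_ethdev_rx_burst() in the diff below):

#include <rte_ethdev.h>
#include <rte_mbuf.h>

static uint16_t
rr_rx_burst(const uint16_t *slaves, uint16_t slave_count,
	    uint16_t *start_slave, uint16_t queue_id,
	    struct rte_mbuf **bufs, uint16_t nb_pkts)
{
	uint16_t num_rx_total = 0;
	uint16_t active_slave = *start_slave;
	uint16_t i;

	for (i = 0; i < slave_count && nb_pkts; i++) {
		uint16_t num_rx_slave;

		/* Offer the full remaining buffer budget to each slave. */
		num_rx_slave = rte_eth_rx_burst(slaves[active_slave],
						queue_id,
						bufs + num_rx_total,
						nb_pkts);
		num_rx_total += num_rx_slave;
		nb_pkts -= num_rx_slave;

		if (++active_slave == slave_count)
			active_slave = 0;
	}

	/* Advance the starting slave once per burst, not once per slave
	 * read, so every slave periodically gets first claim on the
	 * buffer budget. */
	if (++*start_slave == slave_count)
		*start_slave = 0;

	return num_rx_total;
}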

Fixes: 2efb58cbab6e ("bond: new link bonding library")
Cc: stable@dpdk.org

Signed-off-by: Chas Williams <chas3@att.com>
Acked-by: Luca Boccassi <bluca@debian.org>
Acked-by: Matan Azrad <matan@mellanox.com>
---

v2:
  - Reworded title and commit message
  - Fix checkpatch issue

 drivers/net/bonding/rte_eth_bond_pmd.c | 53 ++++++++++++++++++++++------------
 1 file changed, 34 insertions(+), 19 deletions(-)

diff --git a/drivers/net/bonding/rte_eth_bond_pmd.c b/drivers/net/bonding/rte_eth_bond_pmd.c
index b84f32263..5efd046a1 100644
--- a/drivers/net/bonding/rte_eth_bond_pmd.c
+++ b/drivers/net/bonding/rte_eth_bond_pmd.c
@@ -58,28 +58,34 @@ bond_ethdev_rx_burst(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 {
 	struct bond_dev_private *internals;
 
-	uint16_t num_rx_slave = 0;
 	uint16_t num_rx_total = 0;
-
+	uint16_t slave_count;
+	uint16_t active_slave;
 	int i;
 
 	/* Cast to structure, containing bonded device's port id and queue id */
 	struct bond_rx_queue *bd_rx_q = (struct bond_rx_queue *)queue;
-
 	internals = bd_rx_q->dev_private;
+	slave_count = internals->active_slave_count;
+	active_slave = internals->active_slave;
 
+	for (i = 0; i < slave_count && nb_pkts; i++) {
+		uint16_t num_rx_slave;
 
-	for (i = 0; i < internals->active_slave_count && nb_pkts; i++) {
 		/* Offset of pointer to *bufs increases as packets are received
 		 * from other slaves */
-		num_rx_slave = rte_eth_rx_burst(internals->active_slaves[i],
-				bd_rx_q->queue_id, bufs + num_rx_total, nb_pkts);
-		if (num_rx_slave) {
-			num_rx_total += num_rx_slave;
-			nb_pkts -= num_rx_slave;
-		}
+		num_rx_slave =
+			rte_eth_rx_burst(internals->active_slaves[active_slave],
+					 bd_rx_q->queue_id,
+					 bufs + num_rx_total, nb_pkts);
+		num_rx_total += num_rx_slave;
+		nb_pkts -= num_rx_slave;
+		if (++active_slave == slave_count)
+			active_slave = 0;
 	}
 
+	if (++internals->active_slave == slave_count)
+		internals->active_slave = 0;
 	return num_rx_total;
 }
 
@@ -258,25 +264,32 @@ bond_ethdev_rx_burst_8023ad_fast_queue(void *queue, struct rte_mbuf **bufs,
 	uint16_t num_rx_total = 0;	/* Total number of received packets */
 	uint16_t slaves[RTE_MAX_ETHPORTS];
 	uint16_t slave_count;
-
-	uint16_t i, idx;
+	uint16_t active_slave;
+	uint16_t i;
 
 	/* Copy slave list to protect against slave up/down changes during tx
 	 * bursting */
 	slave_count = internals->active_slave_count;
+	active_slave = internals->active_slave;
 	memcpy(slaves, internals->active_slaves,
 			sizeof(internals->active_slaves[0]) * slave_count);
 
-	for (i = 0, idx = internals->active_slave;
-			i < slave_count && num_rx_total < nb_pkts; i++, idx++) {
-		idx = idx % slave_count;
+	for (i = 0; i < slave_count && nb_pkts; i++) {
+		uint16_t num_rx_slave;
 
 		/* Read packets from this slave */
-		num_rx_total += rte_eth_rx_burst(slaves[idx], bd_rx_q->queue_id,
-				&bufs[num_rx_total], nb_pkts - num_rx_total);
+		num_rx_slave = rte_eth_rx_burst(slaves[active_slave],
+						bd_rx_q->queue_id,
+						bufs + num_rx_total, nb_pkts);
+		num_rx_total += num_rx_slave;
+		nb_pkts -= num_rx_slave;
+
+		if (++active_slave == slave_count)
+			active_slave = 0;
 	}
 
-	internals->active_slave = idx;
+	if (++internals->active_slave == slave_count)
+		internals->active_slave = 0;
 
 	return num_rx_total;
 }
@@ -459,7 +472,9 @@ bond_ethdev_rx_burst_8023ad(void *queue, struct rte_mbuf **bufs,
 			idx = 0;
 	}
 
-	internals->active_slave = idx;
+	if (++internals->active_slave == slave_count)
+		internals->active_slave = 0;
+
 	return num_rx_total;
 }
 
-- 
2.14.4


Thread overview: 6+ messages
2018-09-19 15:48 [dpdk-dev] [PATCH] net/bonding: ensure fairness among slaves Chas Williams
2018-09-19 16:06 ` [dpdk-dev] [dpdk-stable] " Luca Boccassi
2018-09-20  6:28 ` [dpdk-dev] " Matan Azrad
2018-09-20 12:47   ` Chas Williams
2018-09-20 12:52 ` Chas Williams [this message]
2018-09-21 18:27   ` [dpdk-dev] [dpdk-stable] [PATCH v2] net/bonding: fix RX slave fairness Ferruh Yigit
