From: Radu Nicolau <radu.nicolau@intel.com>
To: dev@dpdk.org
Cc: ferruh.yigit@intel.com, declan.doherty@intel.com,
keith.wiles@intel.com, Radu Nicolau <radu.nicolau@intel.com>,
stable@dpdk.org
Subject: [dpdk-dev] [PATCH] net/bonding: fix burst hash computation
Date: Mon, 29 Jan 2018 14:36:03 +0000 [thread overview]
Message-ID: <1517236563-13546-1-git-send-email-radu.nicolau@intel.com> (raw)
Fixes: 09150784a776 ("net/bonding: burst mode hash calculation")
Cc: stable@dpdk.org
The wrong hash function was used for the l23 and l34 policies, and the
slave index was incremented twice per packet.
Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
---
drivers/net/bonding/rte_eth_bond_api.c | 3 +++
drivers/net/bonding/rte_eth_bond_pmd.c | 8 ++++----
2 files changed, 7 insertions(+), 4 deletions(-)
diff --git a/drivers/net/bonding/rte_eth_bond_api.c b/drivers/net/bonding/rte_eth_bond_api.c
index 03b73be..e69b199 100644
--- a/drivers/net/bonding/rte_eth_bond_api.c
+++ b/drivers/net/bonding/rte_eth_bond_api.c
@@ -665,12 +665,15 @@ rte_eth_bond_xmit_policy_set(uint16_t bonded_port_id, uint8_t policy)
switch (policy) {
case BALANCE_XMIT_POLICY_LAYER2:
internals->balance_xmit_policy = policy;
+ internals->burst_xmit_hash = burst_xmit_l2_hash;
break;
case BALANCE_XMIT_POLICY_LAYER23:
internals->balance_xmit_policy = policy;
+ internals->burst_xmit_hash = burst_xmit_l23_hash;
break;
case BALANCE_XMIT_POLICY_LAYER34:
internals->balance_xmit_policy = policy;
+ internals->burst_xmit_hash = burst_xmit_l34_hash;
break;
default:
diff --git a/drivers/net/bonding/rte_eth_bond_pmd.c b/drivers/net/bonding/rte_eth_bond_pmd.c
index a86bcaf..3e5f023 100644
--- a/drivers/net/bonding/rte_eth_bond_pmd.c
+++ b/drivers/net/bonding/rte_eth_bond_pmd.c
@@ -800,7 +800,7 @@ burst_xmit_l2_hash(struct rte_mbuf **buf, uint16_t nb_pkts,
hash = ether_hash(eth_hdr);
- slaves[i++] = (hash ^= hash >> 8) % slave_count;
+ slaves[i] = (hash ^= hash >> 8) % slave_count;
}
}
@@ -838,7 +838,7 @@ burst_xmit_l23_hash(struct rte_mbuf **buf, uint16_t nb_pkts,
hash ^= hash >> 16;
hash ^= hash >> 8;
- slaves[i++] = hash % slave_count;
+ slaves[i] = hash % slave_count;
}
}
@@ -907,7 +907,7 @@ burst_xmit_l34_hash(struct rte_mbuf **buf, uint16_t nb_pkts,
hash ^= hash >> 16;
hash ^= hash >> 8;
- slaves[i++] = hash % slave_count;
+ slaves[i] = hash % slave_count;
}
}
@@ -1229,7 +1229,7 @@ bond_ethdev_tx_burst_balance(void *queue, struct rte_mbuf **bufs,
/* Number of mbufs for transmission on each slave */
uint16_t slave_nb_bufs[RTE_MAX_ETHPORTS] = { 0 };
/* Mapping array generated by hash function to map mbufs to slaves */
- uint16_t bufs_slave_port_idxs[RTE_MAX_ETHPORTS] = { 0 };
+ uint16_t bufs_slave_port_idxs[nb_bufs];
uint16_t slave_tx_count, slave_tx_fail_count[RTE_MAX_ETHPORTS] = { 0 };
uint16_t total_tx_count = 0, total_tx_fail_count = 0;
--
2.7.5
Thread overview: 2+ messages
2018-01-29 14:36 Radu Nicolau [this message]
2018-01-31 19:28 ` Ferruh Yigit