* [dpdk-dev] [PATCH 0/2] Bugfixes for bonding
@ 2021-09-22 7:09 Min Hu (Connor)
2021-09-22 7:09 ` [dpdk-dev] [PATCH 1/2] net/bonding: fix dedicated queue mode in vector burst Min Hu (Connor)
` (2 more replies)
0 siblings, 3 replies; 4+ messages in thread
From: Min Hu (Connor) @ 2021-09-22 7:09 UTC (permalink / raw)
To: dev; +Cc: ferruh.yigit, thomas
This patchset contains two bugfixes for bonding.
Chengchang Tang (2):
net/bonding: fix dedicated queue mode in vector burst
net/bonding: fix RSS key length
drivers/net/bonding/rte_eth_bond_8023ad.c | 32 ++++++++++++-----
drivers/net/bonding/rte_eth_bond_api.c | 6 ++++
drivers/net/bonding/rte_eth_bond_pmd.c | 44 ++++++++++++++---------
3 files changed, 56 insertions(+), 26 deletions(-)
--
2.33.0
* [dpdk-dev] [PATCH 1/2] net/bonding: fix dedicated queue mode in vector burst
2021-09-22 7:09 [dpdk-dev] [PATCH 0/2] Bugfixes for bonding Min Hu (Connor)
@ 2021-09-22 7:09 ` Min Hu (Connor)
2021-09-22 7:09 ` [dpdk-dev] [PATCH 2/2] net/bonding: fix RSS key length Min Hu (Connor)
2021-10-08 17:24 ` [dpdk-dev] [PATCH 0/2] Bugfixes for bonding Ferruh Yigit
2 siblings, 0 replies; 4+ messages in thread
From: Min Hu (Connor) @ 2021-09-22 7:09 UTC (permalink / raw)
To: dev; +Cc: ferruh.yigit, thomas
From: Chengchang Tang <tangchengchang@huawei.com>
If the vector burst mode is selected, the dedicated queue mode does not
take effect on some PMDs, because these PMDs may have limitations in
vector burst mode such as a minimum burst size. Currently, both the hns3
and Intel i40e drivers require the burst size to be a multiple of four
when receiving packets in vector mode, so they cannot return any packets
when the requested burst size is below four. However, in dedicated queue
mode, the periodic packet processing polls with a burst size of one.
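For illustration only (not code from this patch; slave_id and rx_qid stand
for the dedicated-queue parameters kept in the bonding internals), a
pre-patch style poll of a single packet can come back empty on such PMDs
even while LACPDUs are waiting in the ring:

#include <rte_ethdev.h>
#include <rte_mbuf.h>

/* Sketch of the old single-packet poll of the dedicated LACP Rx queue. */
static void
poll_lacp_one(uint16_t slave_id, uint16_t rx_qid)
{
	struct rte_mbuf *lacp_pkt = NULL;
	uint16_t rx_count;

	/*
	 * A vector Rx function that only works on bursts of four or more
	 * may return 0 here even if LACPDUs sit in the ring, so the
	 * 802.3ad state machine never sees them.
	 */
	rx_count = rte_eth_rx_burst(slave_id, rx_qid, &lacp_pkt, 1);
	if (rx_count == 1)
		rte_pktmbuf_free(lacp_pkt); /* real code hands it to the slow path */
}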
This patch fixes the problem by increasing the burst size to 32. It also
makes the packet processing of the dedicated queue mode more reasonable:
previously, if multiple LACP packets arrived on the hardware queue within
one cycle, only one of them was processed in that cycle and the remaining
packets had to wait for the following cycles. After this change, all
pending LACP packets are processed at once, which is closer to the
behavior of the bonding driver when the dedicated queue is not enabled.
Fixes: 112891cd27e5 ("net/bonding: add dedicated HW queues for LACP control")
Cc: stable@dpdk.org
Signed-off-by: Chengchang Tang <tangchengchang@huawei.com>
Signed-off-by: Min Hu(Connor) <humin29@huawei.com>
---
drivers/net/bonding/rte_eth_bond_8023ad.c | 32 ++++++++++++++++-------
1 file changed, 23 insertions(+), 9 deletions(-)
diff --git a/drivers/net/bonding/rte_eth_bond_8023ad.c b/drivers/net/bonding/rte_eth_bond_8023ad.c
index 8b5b32fcaf..fc8ebd6320 100644
--- a/drivers/net/bonding/rte_eth_bond_8023ad.c
+++ b/drivers/net/bonding/rte_eth_bond_8023ad.c
@@ -838,6 +838,27 @@ rx_machine_update(struct bond_dev_private *internals, uint16_t slave_id,
rx_machine(internals, slave_id, NULL);
}
+static void
+bond_mode_8023ad_dedicated_rxq_process(struct bond_dev_private *internals,
+ uint16_t slave_id)
+{
+#define DEDICATED_QUEUE_BURST_SIZE 32
+ struct rte_mbuf *lacp_pkt[DEDICATED_QUEUE_BURST_SIZE];
+ uint16_t rx_count = rte_eth_rx_burst(slave_id,
+ internals->mode4.dedicated_queues.rx_qid,
+ lacp_pkt, DEDICATED_QUEUE_BURST_SIZE);
+
+ if (rx_count) {
+ uint16_t i;
+
+ for (i = 0; i < rx_count; i++)
+ bond_mode_8023ad_handle_slow_pkt(internals, slave_id,
+ lacp_pkt[i]);
+ } else {
+ rx_machine_update(internals, slave_id, NULL);
+ }
+}
+
static void
bond_mode_8023ad_periodic_cb(void *arg)
{
@@ -926,15 +947,8 @@ bond_mode_8023ad_periodic_cb(void *arg)
rx_machine_update(internals, slave_id, lacp_pkt);
} else {
- uint16_t rx_count = rte_eth_rx_burst(slave_id,
- internals->mode4.dedicated_queues.rx_qid,
- &lacp_pkt, 1);
-
- if (rx_count == 1)
- bond_mode_8023ad_handle_slow_pkt(internals,
- slave_id, lacp_pkt);
- else
- rx_machine_update(internals, slave_id, NULL);
+ bond_mode_8023ad_dedicated_rxq_process(internals,
+ slave_id);
}
periodic_machine(internals, slave_id);
--
2.33.0
* [dpdk-dev] [PATCH 2/2] net/bonding: fix RSS key length
2021-09-22 7:09 [dpdk-dev] [PATCH 0/2] Bugfixes for bonding Min Hu (Connor)
2021-09-22 7:09 ` [dpdk-dev] [PATCH 1/2] net/bonding: fix dedicated queue mode in vector burst Min Hu (Connor)
@ 2021-09-22 7:09 ` Min Hu (Connor)
2021-10-08 17:24 ` [dpdk-dev] [PATCH 0/2] Bugfixes for bonding Ferruh Yigit
2 siblings, 0 replies; 4+ messages in thread
From: Min Hu (Connor) @ 2021-09-22 7:09 UTC (permalink / raw)
To: dev; +Cc: ferruh.yigit, thomas
From: Chengchang Tang <tangchengchang@huawei.com>
Currently, the hash_key_size information is not set, so applications
cannot get the key size from dev_info, which causes problems. For
example, in testpmd, hash_key_size is checked before configuring or
querying the hash key:
testpmd> show port 4 rss-hash
dev_info did not provide a valid hash key size
testpmd> show port 4 rss-hash key
dev_info did not provide a valid hash key size
testpmd> port config 4 rss-hash-key ipv4 (hash key)
dev_info did not provide a valid hash key size
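The failing check above can be illustrated with a minimal, testpmd-style
sketch (the function name and the 64-byte key buffer are assumptions for
illustration, not code from this patch):

#include <stdio.h>
#include <rte_ethdev.h>

/* Sketch: query the RSS key only when the PMD reports a usable key size. */
static int
show_rss_key(uint16_t port_id)
{
	struct rte_eth_dev_info dev_info;
	struct rte_eth_rss_conf rss_conf = {0};
	uint8_t key[64]; /* assumed upper bound, large enough for this sketch */

	if (rte_eth_dev_info_get(port_id, &dev_info) != 0)
		return -1;
	/* Before this patch, the bonding PMD left hash_key_size at 0. */
	if (dev_info.hash_key_size == 0 ||
	    dev_info.hash_key_size > sizeof(key)) {
		printf("dev_info did not provide a valid hash key size\n");
		return -1;
	}

	rss_conf.rss_key = key;
	rss_conf.rss_key_len = dev_info.hash_key_size;
	return rte_eth_dev_rss_hash_conf_get(port_id, &rss_conf);
}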
This patch also changes the meaning of rss_key_len. Previously it only
indicated the length of the hash key configured by the user, so its value
depended entirely on the user's configuration, which is unreasonable. Now
it indicates the minimum hash key length required by the bonded device,
i.e. the shortest hash key size among all slave drivers.
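The practical consequence for applications can be sketched as follows
(the function name and the ETH_RSS_IP choice are assumptions for
illustration): a key shorter than the reported hash_key_size is expected
to be rejected when updating the bonded port's RSS key.

#include <rte_ethdev.h>

/* Sketch: supply at least dev_info.hash_key_size bytes of key material. */
static int
update_bond_rss_key(uint16_t bond_port_id, uint8_t *key, uint8_t key_len)
{
	struct rte_eth_dev_info dev_info;
	struct rte_eth_rss_conf rss_conf = {
		.rss_key = key,
		.rss_key_len = key_len,
		.rss_hf = ETH_RSS_IP, /* example hash functions only */
	};

	if (rte_eth_dev_info_get(bond_port_id, &dev_info) != 0)
		return -1;
	if (key_len < dev_info.hash_key_size)
		return -1; /* too short for the bonded device's slaves */

	return rte_eth_dev_rss_hash_update(bond_port_id, &rss_conf);
}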
Fixes: 734ce47f71e0 ("bonding: support RSS dynamic configuration")
Cc: stable@dpdk.org
Signed-off-by: Chengchang Tang <tangchengchang@huawei.com>
Signed-off-by: Min Hu(Connor) <humin29@huawei.com>
---
drivers/net/bonding/rte_eth_bond_api.c | 6 ++++
drivers/net/bonding/rte_eth_bond_pmd.c | 44 ++++++++++++++++----------
2 files changed, 33 insertions(+), 17 deletions(-)
diff --git a/drivers/net/bonding/rte_eth_bond_api.c b/drivers/net/bonding/rte_eth_bond_api.c
index eb8d15d160..5140ef14c2 100644
--- a/drivers/net/bonding/rte_eth_bond_api.c
+++ b/drivers/net/bonding/rte_eth_bond_api.c
@@ -290,6 +290,7 @@ eth_bond_slave_inherit_dev_info_rx_first(struct bond_dev_private *internals,
struct rte_eth_rxconf *rxconf_i = &internals->default_rxconf;
internals->reta_size = di->reta_size;
+ internals->rss_key_len = di->hash_key_size;
/* Inherit Rx offload capabilities from the first slave device */
internals->rx_offload_capa = di->rx_offload_capa;
@@ -385,6 +386,11 @@ eth_bond_slave_inherit_dev_info_rx_next(struct bond_dev_private *internals,
*/
if (internals->reta_size > di->reta_size)
internals->reta_size = di->reta_size;
+ if (internals->rss_key_len > di->hash_key_size) {
+ RTE_BOND_LOG(WARNING, "slave has different rss key size, "
+ "configuring rss may fail");
+ internals->rss_key_len = di->hash_key_size;
+ }
if (!internals->max_rx_pktlen &&
di->max_rx_pktlen < internals->candidate_max_rx_pktlen)
diff --git a/drivers/net/bonding/rte_eth_bond_pmd.c b/drivers/net/bonding/rte_eth_bond_pmd.c
index 54987d96b3..15e09f01ef 100644
--- a/drivers/net/bonding/rte_eth_bond_pmd.c
+++ b/drivers/net/bonding/rte_eth_bond_pmd.c
@@ -1701,14 +1701,11 @@ slave_configure(struct rte_eth_dev *bonded_eth_dev,
/* If RSS is enabled for bonding, try to enable it for slaves */
if (bonded_eth_dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG) {
- if (internals->rss_key_len != 0) {
- slave_eth_dev->data->dev_conf.rx_adv_conf.rss_conf.rss_key_len =
+ /* rss_key won't be empty if RSS is configured in bonded dev */
+ slave_eth_dev->data->dev_conf.rx_adv_conf.rss_conf.rss_key_len =
internals->rss_key_len;
- slave_eth_dev->data->dev_conf.rx_adv_conf.rss_conf.rss_key =
+ slave_eth_dev->data->dev_conf.rx_adv_conf.rss_conf.rss_key =
internals->rss_key;
- } else {
- slave_eth_dev->data->dev_conf.rx_adv_conf.rss_conf.rss_key = NULL;
- }
slave_eth_dev->data->dev_conf.rx_adv_conf.rss_conf.rss_hf =
bonded_eth_dev->data->dev_conf.rx_adv_conf.rss_conf.rss_hf;
@@ -2251,6 +2248,7 @@ bond_ethdev_info(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
dev_info->flow_type_rss_offloads = internals->flow_type_rss_offloads;
dev_info->reta_size = internals->reta_size;
+ dev_info->hash_key_size = internals->rss_key_len;
return 0;
}
@@ -3040,13 +3038,15 @@ bond_ethdev_rss_hash_update(struct rte_eth_dev *dev,
if (bond_rss_conf.rss_hf != 0)
dev->data->dev_conf.rx_adv_conf.rss_conf.rss_hf = bond_rss_conf.rss_hf;
- if (bond_rss_conf.rss_key && bond_rss_conf.rss_key_len <
- sizeof(internals->rss_key)) {
- if (bond_rss_conf.rss_key_len == 0)
- bond_rss_conf.rss_key_len = 40;
- internals->rss_key_len = bond_rss_conf.rss_key_len;
+ if (bond_rss_conf.rss_key) {
+ if (bond_rss_conf.rss_key_len < internals->rss_key_len)
+ return -EINVAL;
+ else if (bond_rss_conf.rss_key_len > internals->rss_key_len)
+ RTE_BOND_LOG(WARNING, "rss_key will be truncated");
+
memcpy(internals->rss_key, bond_rss_conf.rss_key,
internals->rss_key_len);
+ bond_rss_conf.rss_key_len = internals->rss_key_len;
}
for (i = 0; i < internals->slave_count; i++) {
@@ -3506,14 +3506,24 @@ bond_ethdev_configure(struct rte_eth_dev *dev)
* Fall back to default RSS key if the key is not specified
*/
if (dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS) {
- if (dev->data->dev_conf.rx_adv_conf.rss_conf.rss_key != NULL) {
- internals->rss_key_len =
- dev->data->dev_conf.rx_adv_conf.rss_conf.rss_key_len;
- memcpy(internals->rss_key,
- dev->data->dev_conf.rx_adv_conf.rss_conf.rss_key,
+ struct rte_eth_rss_conf *rss_conf =
+ &dev->data->dev_conf.rx_adv_conf.rss_conf;
+ if (rss_conf->rss_key != NULL) {
+ if (internals->rss_key_len > rss_conf->rss_key_len) {
+ RTE_BOND_LOG(ERR, "Invalid rss key length(%u)",
+ rss_conf->rss_key_len);
+ return -EINVAL;
+ }
+
+ memcpy(internals->rss_key, rss_conf->rss_key,
internals->rss_key_len);
} else {
- internals->rss_key_len = sizeof(default_rss_key);
+ if (internals->rss_key_len > sizeof(default_rss_key)) {
+ RTE_BOND_LOG(ERR,
+ "There is no suitable default hash key");
+ return -EINVAL;
+ }
+
memcpy(internals->rss_key, default_rss_key,
internals->rss_key_len);
}
--
2.33.0
* Re: [dpdk-dev] [PATCH 0/2] Bugfixes for bonding
2021-09-22 7:09 [dpdk-dev] [PATCH 0/2] Bugfixes for bonding Min Hu (Connor)
2021-09-22 7:09 ` [dpdk-dev] [PATCH 1/2] net/bonding: fix dedicated queue mode in vector burst Min Hu (Connor)
2021-09-22 7:09 ` [dpdk-dev] [PATCH 2/2] net/bonding: fix RSS key length Min Hu (Connor)
@ 2021-10-08 17:24 ` Ferruh Yigit
2 siblings, 0 replies; 4+ messages in thread
From: Ferruh Yigit @ 2021-10-08 17:24 UTC (permalink / raw)
To: Min Hu (Connor), dev; +Cc: thomas
On 9/22/2021 8:09 AM, Min Hu (Connor) wrote:
> This patchset contains two bugfixes for bonding.
>
> Chengchang Tang (2):
> net/bonding: fix dedicated queue mode in vector burst
> net/bonding: fix RSS key length
>
Series applied to dpdk-next-net/main, thanks.