From: Gaoxiang Liu <gaoxiangliu0@163.com>
To: chas3@att.com, humin29@huawei.com
Cc: dev@dpdk.org, liugaoxiang@huawei.com,
Gaoxiang Liu <gaoxiangliu0@163.com>
Subject: [PATCH v2] net/bonding: another fix to LACP mempool size
Date: Sat, 5 Mar 2022 10:26:30 +0800 [thread overview]
Message-ID: <20220305022630.153-1-gaoxiangliu0@163.com> (raw)
In-Reply-To: <20220304095613.1717-1-gaoxiangliu0@163.com>
The following log message may appear after a slave is idle (or nearly idle)
for a few minutes: "PMD: Failed to allocate LACP packet from pool".
When that happens, bond mode 4 negotiation may fail.
Problem: all mbufs from a slave's private pool (used exclusively for
transmitting LACPDUs) have been allocated and are still sitting in the
device's tx descriptor rings and in other cores' mempool caches.
Solution: ensure that each slave's tx (LACPDU) mempool owns more than
n-tx-queues * n-tx-descriptors + fwd_core_num *
per-core-mempool-flush-threshold mbufs.
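
As a rough worked example of that bound (the queue, descriptor and core
counts below are hypothetical, not taken from this patch; the 1.5 factor
is DPDK's per-core mempool cache flush threshold):

	/* Hypothetical: 2 tx queues x 512 descriptors, 8 forwarding cores,
	 * per-core cache of 32 mbufs.  The flush threshold is roughly
	 * 1.5 * cache_size, so that many mbufs can sit idle in each
	 * core's cache. */
	uint32_t n_tx_queues = 2, n_tx_desc = 512;
	uint32_t n_fwd_cores = 8, cache_size = 32;
	uint32_t in_rings  = n_tx_queues * n_tx_desc;            /* 1024 held in tx rings */
	uint32_t in_caches = n_fwd_cores * (cache_size * 3 / 2);  /* 384 parked in caches  */
	uint32_t min_pool  = in_rings + in_caches;                /* pool must own > 1408  */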
Note that the LACP tx machine function is the only code that allocates
from a slave's private pool. It runs in the context of the interrupt
thread, and thus it has no mempool cache of its own.
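
A minimal sketch of why the interrupt thread gets no per-core cache,
assuming the standard rte_mempool cache lookup (illustrative only):

	/* In a control/interrupt thread rte_lcore_id() returns LCORE_ID_ANY,
	 * which is >= RTE_MAX_LCORE, so the default cache lookup yields NULL
	 * and allocations go straight to the pool's common ring. */
	struct rte_mempool_cache *cache =
		rte_mempool_default_cache(port->mbuf_pool, rte_lcore_id());
	/* cache == NULL here; freed LACPDU mbufs only come back through tx
	 * completions and other cores' caches, never a cache of this thread. */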
Signed-off-by: Gaoxiang Liu <liugaoxiang@huawei.com>
---
v2:
* Fixed compile issues.
---
drivers/net/bonding/rte_eth_bond_8023ad.c | 4 ++++
1 file changed, 4 insertions(+)
diff --git a/drivers/net/bonding/rte_eth_bond_8023ad.c b/drivers/net/bonding/rte_eth_bond_8023ad.c
index ca50583d62..831c7dc6ab 100644
--- a/drivers/net/bonding/rte_eth_bond_8023ad.c
+++ b/drivers/net/bonding/rte_eth_bond_8023ad.c
@@ -1050,6 +1050,7 @@ bond_mode_8023ad_activate_slave(struct rte_eth_dev *bond_dev,
 	uint32_t total_tx_desc;
 	struct bond_tx_queue *bd_tx_q;
 	uint16_t q_id;
+	uint32_t cache_size;
 
 	/* Given slave mus not be in active list */
 	RTE_ASSERT(find_slave_by_id(internals->active_slaves,
@@ -1100,6 +1101,9 @@ bond_mode_8023ad_activate_slave(struct rte_eth_dev *bond_dev,
 		total_tx_desc += bd_tx_q->nb_tx_desc;
 	}
 
+	cache_size = RTE_MEMPOOL_CACHE_MAX_SIZE >= 32 ?
+		32 : RTE_MEMPOOL_CACHE_MAX_SIZE;
+	total_tx_desc = rte_lcore_count() * cache_size * 1.5;
 	snprintf(mem_name, RTE_DIM(mem_name), "slave_port%u_pool", slave_id);
 	port->mbuf_pool = rte_pktmbuf_pool_create(mem_name, total_tx_desc,
 		RTE_MEMPOOL_CACHE_MAX_SIZE >= 32 ?
--
2.32.0
Thread overview: 20+ messages
2022-03-04 9:56 [PATCH] " Gaoxiang Liu
2022-03-05 2:26 ` Gaoxiang Liu [this message]
2022-03-05 7:09 ` [PATCH v3] " Gaoxiang Liu
2022-03-08 1:17 ` Min Hu (Connor)
2022-03-08 14:24 ` [PATCH v4] " Gaoxiang Liu
2022-03-09 0:32 ` Min Hu (Connor)
2022-03-09 1:25 ` Stephen Hemminger
2022-03-09 2:53 ` Min Hu (Connor)
2022-03-25 12:02 ` Gaoxiang Liu
2022-03-09 1:26 ` Stephen Hemminger
2022-03-25 12:10 ` [PATCH v5] " Gaoxiang Liu
2022-03-25 13:01 ` Gaoxiang Liu
2022-03-25 13:13 ` Morten Brørup
2022-03-26 12:57 ` Wang, Haiyue
2022-03-25 13:34 ` [PATCH v6] " Gaoxiang Liu
2022-03-25 14:04 ` Morten Brørup
2022-03-28 15:16 ` [PATCH v7] " Gaoxiang Liu
2022-04-29 14:20 ` Ferruh Yigit
2022-05-01 7:02 ` Matan Azrad
2024-04-12 19:04 ` Ferruh Yigit