From: "Morten Brørup" <mb@smartsharesystems.com>
To: "Gaoxiang Liu" <gaoxiangliu0@163.com>, <chas3@att.com>,
	<humin29@huawei.com>
Cc: <dev@dpdk.org>, <liugaoxiang@huawei.com>,
	<olivier.matz@6wind.com>, <andrew.rybchenko@oktetlabs.ru>
Subject: RE: [PATCH v5] net/bonding: another fix to LACP mempool size
Date: Fri, 25 Mar 2022 14:13:59 +0100
Message-ID: <98CBD80474FA8B44BF855DF32C47DC35D86F73@smartserver.smartshare.dk>
In-Reply-To: <20220325130135.2207-1-gaoxiangliu0@163.com>

+CC mempool maintainers

> From: Gaoxiang Liu [mailto:gaoxiangliu0@163.com]
> Sent: Friday, 25 March 2022 14.02
> 
> The following log message may appear after a slave is idle (or
> nearly idle) for a few minutes:
> "PMD: Failed to allocate LACP packet from pool".
> And bond mode 4 negotiation may fail.
> 
> Problem: When bond mode 4 has been chosen and the dedicated queue has
> not been enabled, all mbufs from a slave's private pool (used
> exclusively for transmitting LACPDUs) have been allocated in the
> interrupt thread, and are still sitting in the device's Tx
> descriptor ring and in other cores' mempool caches in the fwd
> thread. Thus the interrupt thread cannot allocate LACP packets
> from the pool.
> 
> Solution: Ensure that each slave's Tx (LACPDU) mempool owns more than
> n-tx-queues * n-tx-descriptors + fwd_core_num *
> per-core-mempool-flush-threshold mbufs.
> 
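A quick worked example of this budget, with numbers chosen purely
for illustration: with 4 Tx queues of 256 descriptors each and 8
forwarding lcores, the pool must hold at least

    4 * 256 + 8 * 32 * 1.5 = 1024 + 384 = 1408 mbufs,

where 32 is the per-lcore cache size chosen below and 1.5 is the
mempool library's cache flush threshold multiplier.
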
> Note that the LACP tx machine function is the only code that allocates
> from a slave's private pool. It runs in the context of the interrupt
> thread, and thus it has no mempool cache of its own.
> 
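For readers unfamiliar with the mempool internals, here is a minimal
sketch (illustration only, not part of the patch) of why the interrupt
thread has no cache:

    /* The per-lcore default caches are indexed by rte_lcore_id().
     * A non-EAL thread such as the EAL interrupt thread reports
     * rte_lcore_id() == LCORE_ID_ANY, so the lookup yields NULL and
     * every allocation goes to the mempool's shared ring, which may
     * be empty even while other lcores' caches still hold mbufs.
     */
    struct rte_mempool_cache *cache =
        rte_mempool_default_cache(port->mbuf_pool, rte_lcore_id());
    /* cache == NULL in the interrupt thread */
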
> Signed-off-by: Gaoxiang Liu <liugaoxiang@huawei.com>
> 
> ---
> v2:
> * Fixed compile issues.
> 
> v3:
> * Deleted duplicate code.
> 
> v4:
> * Fixed some issues.
> 1. total_tx_desc should use +=
> 2. add detailed logs
> 
> v5:
> * Fixed some issues.
> 1. move CACHE_FLUSHTHRESH_MULTIPLIER to rte_eth_bond_8023ad.c
> 2. use RTE_MIN
> ---
>  drivers/net/bonding/rte_eth_bond_8023ad.c | 11 ++++++++---
>  1 file changed, 8 insertions(+), 3 deletions(-)
> 
> diff --git a/drivers/net/bonding/rte_eth_bond_8023ad.c b/drivers/net/bonding/rte_eth_bond_8023ad.c
> index ca50583d62..2c39b0d062 100644
> --- a/drivers/net/bonding/rte_eth_bond_8023ad.c
> +++ b/drivers/net/bonding/rte_eth_bond_8023ad.c
> @@ -1050,6 +1050,7 @@ bond_mode_8023ad_activate_slave(struct rte_eth_dev *bond_dev,
>  	uint32_t total_tx_desc;
>  	struct bond_tx_queue *bd_tx_q;
>  	uint16_t q_id;
> +	uint32_t cache_size;
> 
>  	/* Given slave must not be in active list */
>  	RTE_ASSERT(find_slave_by_id(internals->active_slaves,
> @@ -1100,11 +1101,15 @@ bond_mode_8023ad_activate_slave(struct rte_eth_dev *bond_dev,
>  		total_tx_desc += bd_tx_q->nb_tx_desc;
>  	}
> 
> +/* BONDING_8023AD_CACHE_FLUSHTHRESH_MULTIPLIER  is the same as
> + * CACHE_FLUSHTHRESH_MULTIPLIER already defined in rte_mempool.c */
> +#define BONDING_8023AD_CACHE_FLUSHTHRESH_MULTIPLIER 1.5

Very important comment. Thank you!

May I suggest that a similar comment be added to the rte_mempool.c file, so that if CACHE_FLUSHTHRESH_MULTIPLIER is changed there, we don't forget to update the copy-pasted value in the rte_eth_bond_8023ad.c file too. Changing it from 1.5 to 2 for symmetry reasons has previously been discussed.
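
A rough sketch of what that could look like in rte_mempool.c (the two
macros below are the existing ones, only the comment is new):

    /* Keep in sync with BONDING_8023AD_CACHE_FLUSHTHRESH_MULTIPLIER
     * in drivers/net/bonding/rte_eth_bond_8023ad.c.
     */
    #define CACHE_FLUSHTHRESH_MULTIPLIER 1.5
    #define CALC_CACHE_FLUSHTHRESH(c) \
        ((typeof(c))((c) * CACHE_FLUSHTHRESH_MULTIPLIER))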

> +
> +	cache_size = RTE_MIN(RTE_MEMPOOL_CACHE_MAX_SIZE, 32);
> +	total_tx_desc += rte_lcore_count() * cache_size * BONDING_8023AD_CACHE_FLUSHTHRESH_MULTIPLIER;
>  	snprintf(mem_name, RTE_DIM(mem_name), "slave_port%u_pool", slave_id);
>  	port->mbuf_pool = rte_pktmbuf_pool_create(mem_name, total_tx_desc,
> -		RTE_MEMPOOL_CACHE_MAX_SIZE >= 32 ?
> -			32 : RTE_MEMPOOL_CACHE_MAX_SIZE,
> -		0, element_size, socket_id);
> +		cache_size, 0, element_size, socket_id);
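
Also worth noting: RTE_MEMPOOL_CACHE_MAX_SIZE is 512 in the default
build configuration, so the RTE_MIN() above normally resolves to 32;
it only makes a difference if DPDK is built with a smaller maximum
cache size.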
> 
>  	/* Any memory allocation failure in initialization is critical because
>  	 * resources can't be free, so reinitialization is impossible. */
> --
> 2.32.0
> 


Thread overview: 20+ messages
2022-03-04  9:56 [PATCH] " Gaoxiang Liu
2022-03-05  2:26 ` [PATCH v2] " Gaoxiang Liu
2022-03-05  7:09   ` [PATCH v3] " Gaoxiang Liu
2022-03-08  1:17     ` Min Hu (Connor)
2022-03-08 14:24     ` [PATCH v4] " Gaoxiang Liu
2022-03-09  0:32       ` Min Hu (Connor)
2022-03-09  1:25       ` Stephen Hemminger
2022-03-09  2:53         ` Min Hu (Connor)
2022-03-25 12:02           ` Gaoxiang Liu
2022-03-09  1:26       ` Stephen Hemminger
2022-03-25 12:10       ` [PATCH v5] " Gaoxiang Liu
2022-03-25 13:01         ` Gaoxiang Liu
2022-03-25 13:13           ` Morten Brørup [this message]
2022-03-26 12:57             ` Wang, Haiyue
2022-03-25 13:34           ` [PATCH v6] " Gaoxiang Liu
2022-03-25 14:04             ` Morten Brørup
2022-03-28 15:16             ` [PATCH v7] " Gaoxiang Liu
2022-04-29 14:20               ` Ferruh Yigit
2022-05-01  7:02                 ` Matan Azrad
2024-04-12 19:04               ` Ferruh Yigit
