DPDK patches and discussions
From: "Wang, Haiyue" <haiyue.wang@intel.com>
To: "Morten Brørup" <mb@smartsharesystems.com>,
	"Gaoxiang Liu" <gaoxiangliu0@163.com>,
	"chas3@att.com" <chas3@att.com>,
	"humin29@huawei.com" <humin29@huawei.com>
Cc: "dev@dpdk.org" <dev@dpdk.org>,
	"liugaoxiang@huawei.com" <liugaoxiang@huawei.com>,
	"olivier.matz@6wind.com" <olivier.matz@6wind.com>,
	"andrew.rybchenko@oktetlabs.ru" <andrew.rybchenko@oktetlabs.ru>
Subject: RE: [PATCH v5] net/bonding: another fix to LACP mempool size
Date: Sat, 26 Mar 2022 12:57:20 +0000	[thread overview]
Message-ID: <BYAPR11MB34954095921426282EB5CF16F71B9@BYAPR11MB3495.namprd11.prod.outlook.com> (raw)
In-Reply-To: <98CBD80474FA8B44BF855DF32C47DC35D86F73@smartserver.smartshare.dk>

> -----Original Message-----
> From: Morten Brørup <mb@smartsharesystems.com>
> Sent: Friday, March 25, 2022 21:14
> To: Gaoxiang Liu <gaoxiangliu0@163.com>; chas3@att.com; humin29@huawei.com
> Cc: dev@dpdk.org; liugaoxiang@huawei.com; olivier.matz@6wind.com; andrew.rybchenko@oktetlabs.ru
> Subject: RE: [PATCH v5] net/bonding: another fix to LACP mempool size
> 
> +CC mempool maintainers
> 
> > From: Gaoxiang Liu [mailto:gaoxiangliu0@163.com]
> > Sent: Friday, 25 March 2022 14.02
> >
> > The following log message may appear after a slave has been idle (or
> > nearly idle) for a few minutes: "PMD: Failed to allocate LACP packet
> > from pool", and bond mode 4 negotiation may fail.
> >
> > Problem: When bond mode 4 is in use and the dedicated queue has not
> > been enabled, all mbufs from a slave's private pool (used exclusively
> > for transmitting LACPDUs) have been allocated in the interrupt
> > thread, and are still sitting in the device's tx descriptor ring and
> > in other cores' mempool caches in the forwarding threads. Thus the
> > interrupt thread cannot allocate an LACP packet from the pool.
> >
> > Solution: Ensure that each slave's tx (LACPDU) mempool owns more than
> > n-tx-queues * n-tx-descriptors + fwd_core_num *
> > per-core-mempool-flush-threshold mbufs.
> >
> > Note that the LACP tx machine function is the only code that allocates
> > from a slave's private pool. It runs in the context of the interrupt
> > thread, and thus it has no mempool cache of its own.
> >
> > Signed-off-by: Gaoxiang Liu <liugaoxiang@huawei.com>
> >
> > ---
> > v2:
> > * Fixed compile issues.
> >
> > v3:
> > * delete duplicate code.
> >
> > v4:
> > * Fixed some issues.
> > 1. total_tx_desc should use +=
> > 2. add detailed logs
> >
> > v5:
> > * Fixed some issues.
> > 1. move CACHE_FLUSHTHRESH_MULTIPLIER to rte_eth_bond_8023ad.c
> > 2. use RTE_MIN
> > ---
> >  drivers/net/bonding/rte_eth_bond_8023ad.c | 11 ++++++++---


> >
> > +/* BONDING_8023AD_CACHE_FLUSHTHRESH_MULTIPLIER  is the same as
> > + * CACHE_FLUSHTHRESH_MULTIPLIER already defined in rte_mempool.c */
> > +#define BONDING_8023AD_CACHE_FLUSHTHRESH_MULTIPLIER 1.5
> 
> Very important comment. Thank you!
> 
> May I suggest that a similar comment be added to the rte_mempool.c file, so that if
> CACHE_FLUSHTHRESH_MULTIPLIER is changed there, we don't forget to change the copy-pasted code in the
> rte_eth_bond_8023ad.c file too. Changing it from 1.5 to 2 for symmetry reasons has previously been
> discussed.

Then, how about introducing some kind of public API macro, like
RTE_MEMPOOL_CACHE_MAX_FLUSHTHRESH_MULTIPLIER, in the same way RTE_MEMPOOL_CACHE_MAX_SIZE
is exposed? When calling the mempool create API, callers could then do other kinds of
calculation, like RTE_MIN(user's new flush multiplier,
RTE_MEMPOOL_CACHE_MAX_FLUSHTHRESH_MULTIPLIER).

Just a suggestion, so that there is no need to add odd BONDING_8023AD_* comments in a
library.

> 
> > +
> > +	cache_size = RTE_MIN(RTE_MEMPOOL_CACHE_MAX_SIZE, 32);
> > +	total_tx_desc += rte_lcore_count() * cache_size *
> > +		BONDING_8023AD_CACHE_FLUSHTHRESH_MULTIPLIER;
> >  	snprintf(mem_name, RTE_DIM(mem_name), "slave_port%u_pool", slave_id);
> >  	port->mbuf_pool = rte_pktmbuf_pool_create(mem_name, total_tx_desc,
> > -		RTE_MEMPOOL_CACHE_MAX_SIZE >= 32 ?
> > -			32 : RTE_MEMPOOL_CACHE_MAX_SIZE,
> > -		0, element_size, socket_id);
> > +		cache_size, 0, element_size, socket_id);
> >
> >  	/* Any memory allocation failure in initialization is critical
> > because
> >  	 * resources can't be free, so reinitialization is impossible. */
> > --
> > 2.32.0
> >



Thread overview: 20+ messages
2022-03-04  9:56 [PATCH] " Gaoxiang Liu
2022-03-05  2:26 ` [PATCH v2] " Gaoxiang Liu
2022-03-05  7:09   ` [PATCH v3] " Gaoxiang Liu
2022-03-08  1:17     ` Min Hu (Connor)
2022-03-08 14:24     ` [PATCH v4] " Gaoxiang Liu
2022-03-09  0:32       ` Min Hu (Connor)
2022-03-09  1:25       ` Stephen Hemminger
2022-03-09  2:53         ` Min Hu (Connor)
2022-03-25 12:02           ` Gaoxiang Liu
2022-03-09  1:26       ` Stephen Hemminger
2022-03-25 12:10       ` [PATCH v5] " Gaoxiang Liu
2022-03-25 13:01         ` Gaoxiang Liu
2022-03-25 13:13           ` Morten Brørup
2022-03-26 12:57             ` Wang, Haiyue [this message]
2022-03-25 13:34           ` [PATCH v6] " Gaoxiang Liu
2022-03-25 14:04             ` Morten Brørup
2022-03-28 15:16             ` [PATCH v7] " Gaoxiang Liu
2022-04-29 14:20               ` Ferruh Yigit
2022-05-01  7:02                 ` Matan Azrad
2024-04-12 19:04               ` Ferruh Yigit
