DPDK patches and discussions
From: "Kulasek, TomaszX" <tomaszx.kulasek@intel.com>
To: Robert Sanford <rsanford2@gmail.com>, "dev@dpdk.org" <dev@dpdk.org>
Cc: "Doherty, Declan" <declan.doherty@intel.com>,
	"De Lara Guarch, Pablo" <pablo.de.lara.guarch@intel.com>,
	"olivier.matz@6wind.com" <olivier.matz@6wind.com>
Subject: Re: [dpdk-dev] [PATCH 3/4] net/bonding: another fix to LACP mempool size
Date: Mon, 7 Nov 2016 16:02:31 +0000	[thread overview]
Message-ID: <3042915272161B4EB253DA4D77EB373A14F46B11@IRSMSX102.ger.corp.intel.com> (raw)
In-Reply-To: <1470084176-79932-4-git-send-email-rsanford@akamai.com>



> -----Original Message-----
> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Robert Sanford
> Sent: Monday, August 1, 2016 22:43
> To: dev@dpdk.org
> Cc: Doherty, Declan <declan.doherty@intel.com>; De Lara Guarch, Pablo
> <pablo.de.lara.guarch@intel.com>; olivier.matz@6wind.com
> Subject: [dpdk-dev] [PATCH 3/4] net/bonding: another fix to LACP mempool
> size
> 
> The following log message may appear after a slave is idle (or nearly
> idle) for a few minutes: "PMD: Failed to allocate LACP packet from pool".
> 
> Problem: All mbufs from a slave's private pool (used exclusively for
> transmitting LACPDUs) have been allocated and are still sitting in the
> device's tx descriptor ring and other cores' mempool caches.
> 
> Solution: Ensure that each slave's tx (LACPDU) mempool owns more than
> n-tx-queues * (n-tx-descriptors + per-core-mempool-flush-threshold) mbufs.
> 
> Note that the LACP tx machine function is the only code that allocates
> from a slave's private pool. It runs in the context of the interrupt
> thread, and thus it has no mempool cache of its own.
> 
> Signed-off-by: Robert Sanford <rsanford@akamai.com>
> ---
>  drivers/net/bonding/rte_eth_bond_8023ad.c |   10 +++++++---
>  1 files changed, 7 insertions(+), 3 deletions(-)
> 
> diff --git a/drivers/net/bonding/rte_eth_bond_8023ad.c b/drivers/net/bonding/rte_eth_bond_8023ad.c
> index 2f7ae70..1207896 100644
> --- a/drivers/net/bonding/rte_eth_bond_8023ad.c
> +++ b/drivers/net/bonding/rte_eth_bond_8023ad.c
> @@ -854,6 +854,8 @@ bond_mode_8023ad_activate_slave(struct rte_eth_dev *bond_dev, uint8_t slave_id)
>  	char mem_name[RTE_ETH_NAME_MAX_LEN];
>  	int socket_id;
>  	unsigned element_size;
> +	unsigned cache_size;
> +	unsigned cache_flushthresh;
>  	uint32_t total_tx_desc;
>  	struct bond_tx_queue *bd_tx_q;
>  	uint16_t q_id;
> @@ -890,19 +892,21 @@ bond_mode_8023ad_activate_slave(struct rte_eth_dev *bond_dev, uint8_t slave_id)
> 
>  	element_size = sizeof(struct slow_protocol_frame) + sizeof(struct rte_mbuf)
>  				+ RTE_PKTMBUF_HEADROOM;
> +	cache_size = RTE_MEMPOOL_CACHE_MAX_SIZE >= 32 ?
> +		32 : RTE_MEMPOOL_CACHE_MAX_SIZE;
> +	cache_flushthresh = RTE_MEMPOOL_CALC_CACHE_FLUSHTHRESH(cache_size);
> 
>  	/* The size of the mempool should be at least:
>  	 * the sum of the TX descriptors + BOND_MODE_8023AX_SLAVE_TX_PKTS */
>  	total_tx_desc = BOND_MODE_8023AX_SLAVE_TX_PKTS;
>  	for (q_id = 0; q_id < bond_dev->data->nb_tx_queues; q_id++) {
>  		bd_tx_q = (struct bond_tx_queue*)bond_dev->data->tx_queues[q_id];
> -		total_tx_desc += bd_tx_q->nb_tx_desc;
> +		total_tx_desc += bd_tx_q->nb_tx_desc + cache_flushthresh;
>  	}
> 
>  	snprintf(mem_name, RTE_DIM(mem_name), "slave_port%u_pool", slave_id);
>  	port->mbuf_pool = rte_mempool_create(mem_name,
> -		total_tx_desc, element_size,
> -		RTE_MEMPOOL_CACHE_MAX_SIZE >= 32 ? 32 : RTE_MEMPOOL_CACHE_MAX_SIZE,
> +		total_tx_desc, element_size, cache_size,
>  		sizeof(struct rte_pktmbuf_pool_private), rte_pktmbuf_pool_init,
>  		NULL, rte_pktmbuf_init, NULL, socket_id, MEMPOOL_F_NO_SPREAD);
> 
> --
> 1.7.1

Reviewed-by: Tomasz Kulasek <tomaszx.kulasek@intel.com>
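
For reference, a rough standalone sketch of the sizing rule the quoted change implements. The 1.5x flush-threshold multiplier matches what the mempool library used at the time (patch 2/4 exposes it as RTE_MEMPOOL_CALC_CACHE_FLUSHTHRESH); the queue count, descriptor count, and SLAVE_TX_PKTS value below are illustrative assumptions, not values taken from the patch:

/*
 * Hypothetical, standalone illustration of the LACP mempool sizing rule
 * from the quoted patch.  Not DPDK code: the constants are assumptions
 * chosen for the example; only the formula mirrors the patch.
 */
#include <stdio.h>

/* Extra LACPDU mbufs reserved per slave; exact value assumed here. */
#define SLAVE_TX_PKTS        1
/* Per-core cache size capped at 32, as in the patch. */
#define CACHE_SIZE           32
/* DPDK's mempool flush threshold was cache_size * 1.5 at the time;
 * patch 2/4 exposes this as RTE_MEMPOOL_CALC_CACHE_FLUSHTHRESH(). */
#define CACHE_FLUSHTHRESH(c) ((unsigned)((c) * 1.5))

int main(void)
{
	unsigned nb_tx_queues = 4;    /* example values only */
	unsigned nb_tx_desc = 512;
	unsigned total_tx_desc = SLAVE_TX_PKTS;
	unsigned q;

	/* One flush-threshold's worth of mbufs can be parked in each
	 * core's cache on top of what sits in the tx descriptor rings. */
	for (q = 0; q < nb_tx_queues; q++)
		total_tx_desc += nb_tx_desc + CACHE_FLUSHTHRESH(CACHE_SIZE);

	/* 1 + 4 * (512 + 48) = 2241 mbufs in this example */
	printf("LACP mempool size: %u mbufs\n", total_tx_desc);
	return 0;
}

With those example numbers the pool ends up holding 2241 mbufs, which is more than can be held in the tx rings and per-core caches at any one time, so the interrupt-thread allocation in the LACP tx machine should not starve.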


Thread overview: 19+ messages
2016-08-01 20:42 [dpdk-dev] [PATCH 0/4] net/bonding: bonding and LACP fixes Robert Sanford
2016-08-01 20:42 ` [dpdk-dev] [PATCH 1/4] testpmd: fix LACP ports to work with idle links Robert Sanford
2017-06-22  1:25   ` Wu, Jingjing
2017-10-31  1:07     ` Ferruh Yigit
2017-11-01 20:06       ` Ferruh Yigit
2016-08-01 20:42 ` [dpdk-dev] [PATCH 2/4] mempool: make cache flush threshold macro public Robert Sanford
2016-08-23 15:09   ` Olivier MATZ
2016-08-23 16:07     ` Sanford, Robert
2016-08-24 16:15       ` Olivier MATZ
2016-08-01 20:42 ` [dpdk-dev] [PATCH 3/4] net/bonding: another fix to LACP mempool size Robert Sanford
2016-08-23 15:09   ` Olivier MATZ
2016-08-23 20:01     ` Sanford, Robert
2016-08-24 16:14       ` Olivier MATZ
2016-11-07 16:02   ` Kulasek, TomaszX [this message]
2016-08-01 20:42 ` [dpdk-dev] [PATCH 4/4] net/bonding: fix configuration of LACP slaves Robert Sanford
2016-11-07 16:03   ` Kulasek, TomaszX
2017-02-08 17:14 ` [dpdk-dev] [PATCH 0/4] net/bonding: bonding and LACP fixes Thomas Monjalon
2017-03-09 13:19   ` Thomas Monjalon
2017-03-09 16:57     ` Declan Doherty
