To: Robert Sanford, dev@dpdk.org
References: <1470084176-79932-1-git-send-email-rsanford@akamai.com>
 <1470084176-79932-4-git-send-email-rsanford@akamai.com>
Cc: declan.doherty@intel.com, pablo.de.lara.guarch@intel.com
From: Olivier MATZ
Message-ID: <57BC6723.7020106@6wind.com>
Date: Tue, 23 Aug 2016 17:09:23 +0200
In-Reply-To: <1470084176-79932-4-git-send-email-rsanford@akamai.com>
Subject: Re: [dpdk-dev] [PATCH 3/4] net/bonding: another fix to LACP mempool size

Hi Robert,

On 08/01/2016 10:42 PM, Robert Sanford wrote:
> The following log message may appear after a slave is idle (or nearly
> idle) for a few minutes: "PMD: Failed to allocate LACP packet from
> pool".
>
> Problem: All mbufs from a slave's private pool (used exclusively for
> transmitting LACPDUs) have been allocated and are still sitting in
> the device's tx descriptor ring and other cores' mempool caches.
>
> Solution: Ensure that each slave's tx (LACPDU) mempool owns more than
> n-tx-queues * (n-tx-descriptors + per-core-mempool-flush-threshold)
> mbufs.
>
> Note that the LACP tx machine function is the only code that allocates
> from a slave's private pool. It runs in the context of the interrupt
> thread, and thus it has no mempool cache of its own.
>
> Signed-off-by: Robert Sanford
> ---
>  drivers/net/bonding/rte_eth_bond_8023ad.c | 10 +++++++---
>  1 files changed, 7 insertions(+), 3 deletions(-)
>
> diff --git a/drivers/net/bonding/rte_eth_bond_8023ad.c b/drivers/net/bonding/rte_eth_bond_8023ad.c
> index 2f7ae70..1207896 100644
> --- a/drivers/net/bonding/rte_eth_bond_8023ad.c
> +++ b/drivers/net/bonding/rte_eth_bond_8023ad.c
> @@ -854,6 +854,8 @@ bond_mode_8023ad_activate_slave(struct rte_eth_dev *bond_dev, uint8_t slave_id)
>          char mem_name[RTE_ETH_NAME_MAX_LEN];
>          int socket_id;
>          unsigned element_size;
> +        unsigned cache_size;
> +        unsigned cache_flushthresh;
>          uint32_t total_tx_desc;
>          struct bond_tx_queue *bd_tx_q;
>          uint16_t q_id;
> @@ -890,19 +892,21 @@ bond_mode_8023ad_activate_slave(struct rte_eth_dev *bond_dev, uint8_t slave_id)
>
>          element_size = sizeof(struct slow_protocol_frame) + sizeof(struct rte_mbuf)
>                  + RTE_PKTMBUF_HEADROOM;
> +        cache_size = RTE_MEMPOOL_CACHE_MAX_SIZE >= 32 ?
> +                32 : RTE_MEMPOOL_CACHE_MAX_SIZE;
> +        cache_flushthresh = RTE_MEMPOOL_CALC_CACHE_FLUSHTHRESH(cache_size);
>
>          /* The size of the mempool should be at least:
>           * the sum of the TX descriptors + BOND_MODE_8023AX_SLAVE_TX_PKTS */
>          total_tx_desc = BOND_MODE_8023AX_SLAVE_TX_PKTS;
>          for (q_id = 0; q_id < bond_dev->data->nb_tx_queues; q_id++) {
>                  bd_tx_q = (struct bond_tx_queue*)bond_dev->data->tx_queues[q_id];
> -                total_tx_desc += bd_tx_q->nb_tx_desc;
> +                total_tx_desc += bd_tx_q->nb_tx_desc + cache_flushthresh;
>          }
>
>          snprintf(mem_name, RTE_DIM(mem_name), "slave_port%u_pool", slave_id);
>          port->mbuf_pool = rte_mempool_create(mem_name,
> -                total_tx_desc, element_size,
> -                RTE_MEMPOOL_CACHE_MAX_SIZE >= 32 ? 32 : RTE_MEMPOOL_CACHE_MAX_SIZE,
> +                total_tx_desc, element_size, cache_size,
>                  sizeof(struct rte_pktmbuf_pool_private), rte_pktmbuf_pool_init,
>                  NULL, rte_pktmbuf_init, NULL, socket_id, MEMPOOL_F_NO_SPREAD);
>

I'm not very familiar with bonding code, so maybe your patch is correct.

I think the size of the mempool should be:

  BOND_MODE_8023AX_SLAVE_TX_PKTS +
    n_cores * RTE_MEMPOOL_CALC_CACHE_FLUSHTHRESH(cache_size)

With n_cores = number of cores that can dequeue from the mempool.

The safest thing to do would be to have n_cores = RTE_MAX_LCORE.

I don't know if bond_dev->data->nb_tx_queues corresponds to this
definition, if yes you can ignore my comment ;)

Regards,
Olivier
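
PS: to make the arithmetic above concrete, here is a rough stand-alone
sketch comparing how much cache headroom each sizing reserves. The
constants are example values only (not the bonding PMD's real ones), and
FLUSHTHRESH() merely approximates the mempool library's cache_size * 1.5
flush threshold:

#include <stdio.h>

/* Example values only -- not the bonding PMD's actual constants. */
#define CACHE_SIZE      32                       /* per-lcore mempool cache, as in the patch */
#define FLUSHTHRESH(c)  ((unsigned)((c) * 1.5))  /* approximates the cache flush threshold */

int main(void)
{
        unsigned nb_tx_queues = 4;   /* example: slave configured with 4 TX queues */
        unsigned n_cores = 16;       /* example: lcores that may hold mbufs in a cache */

        /* Headroom reserved by the patch: one flush threshold per TX queue. */
        unsigned per_queue = nb_tx_queues * FLUSHTHRESH(CACHE_SIZE);

        /* Headroom needed if every dequeuing core can keep a full cache. */
        unsigned per_core = n_cores * FLUSHTHRESH(CACHE_SIZE);

        printf("per-TX-queue headroom: %u mbufs\n", per_queue);  /* 4 * 48 = 192 */
        printf("per-core headroom:     %u mbufs\n", per_core);   /* 16 * 48 = 768 */
        return 0;
}

With more active cores than TX queues, the per-queue headroom is the
smaller of the two, which is the concern raised above.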