DPDK patches and discussions
* [dpdk-dev] Free up completed TX buffers
@ 2015-05-29 17:00 Zoltan Kiss
  2015-06-01  8:50 ` Andriy Berestovskyy
From: Zoltan Kiss @ 2015-05-29 17:00 UTC
  To: dev

Hi,

I've come across another problem while sorting out the one fixed by 
my patch "ixgbe: fix checking for tx_free_thresh". Even when the 
threshold check is correct, the application can run out of free 
buffers, and the only way to recover is to reclaim the completed ones 
from the TX rings. But if their number is still less than 
tx_free_thresh (per queue), there is currently no interface to do that.
The bad way is to set tx_free_thresh to 1, but that carries a severe 
performance penalty. The easy way is just to increase your buffer 
pool's size to make sure this doesn't happen. But there is no 
bulletproof way to calculate such a number, and in my experience it's 
hard to debug when it causes problems.
I'm thinking about a foolproof way: exposing functions like 
ixgbe_tx_free_bufs from the PMDs, so the application can call one as a 
last resort to avoid deadlock. It probably costs some performance 
instead, but at least fools like me will easily see that in e.g. 
oprofile.
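Roughly, I imagine something like this at the ethdev level (the name 
and signature below are invented purely for illustration, this is not 
an existing API):

	/* Hypothetical API: drain completed descriptors on one TX queue
	 * and return the number of mbufs handed back to the pool. */
	uint16_t rte_eth_tx_free_done_bufs(uint8_t port_id, uint16_t queue_id);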
How does that sound? Or is there a better way to solve this problem?

Regards,

Zoli


* Re: [dpdk-dev] Free up completed TX buffers
  2015-05-29 17:00 [dpdk-dev] Free up completed TX buffers Zoltan Kiss
@ 2015-06-01  8:50 ` Andriy Berestovskyy
  2015-06-01 17:51   ` Zoltan Kiss
From: Andriy Berestovskyy @ 2015-06-01  8:50 UTC
  To: dev

Hi Zoltan,

On Fri, May 29, 2015 at 7:00 PM, Zoltan Kiss <zoltan.kiss@linaro.org> wrote:
> The easy way is just to increase your buffer pool's size to make
> sure that doesn't happen.

Go for it!

>  But there is no bulletproof way to calculate such
> a number

Yeah, there are many places for mbufs to stay :( I would try:

Mempool size = sum(TX descriptors over all TX queues)
    + sum(rx_free_thresh over all RX queues)
    + (mempool cache size * (number of lcores - 1))
    + (burst size * number of lcores)
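
To make the arithmetic concrete, a sketch (every name below is an 
application-specific placeholder, not a DPDK API):

	/* Heuristic mempool sizing; tune each input to your setup. */
	unsigned int mempool_size =
		(nb_tx_queues * nb_tx_desc) +      /* mbufs parked in TX rings */
		(nb_rx_queues * rx_free_thresh) +  /* RX refill headroom */
		(cache_size * (nb_lcores - 1)) +   /* per-lcore mempool caches */
		(burst_size * nb_lcores);          /* bursts in flight per lcore */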

> I'm thinking about a foolproof way, which is exposing functions like
> ixgbe_tx_free_bufs from the PMDs, so the application can call it as a last
> resort to avoid deadlock.

Have a look at rte_eth_dev_tx_queue_stop()/start(). Some NICs (e.g. 
ixgbe) actually reset the queue and free all the mbufs.
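
For example, a rough sketch (assuming port_id/queue_id identify the 
stuck queue; error handling omitted, and the exact behaviour is 
PMD-specific):

	int ret;

	/* Stopping the queue lets such PMDs release every mbuf still
	 * held in the TX ring; restarting re-arms it for use. */
	ret = rte_eth_dev_tx_queue_stop(port_id, queue_id);
	if (ret == 0)
		ret = rte_eth_dev_tx_queue_start(port_id, queue_id);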

Regards,
Andriy


* Re: [dpdk-dev] Free up completed TX buffers
  2015-06-01  8:50 ` Andriy Berestovskyy
@ 2015-06-01 17:51   ` Zoltan Kiss
From: Zoltan Kiss @ 2015-06-01 17:51 UTC
  To: Andriy Berestovskyy, dev



On 01/06/15 09:50, Andriy Berestovskyy wrote:
> Hi Zoltan,
>
> On Fri, May 29, 2015 at 7:00 PM, Zoltan Kiss <zoltan.kiss@linaro.org> wrote:
>> The easy way is just to increase your buffer pool's size to make
>> sure that doesn't happen.
>
> Go for it!

I went for it; my question is whether it's a good and worthwhile idea 
to give applications a last-resort option for rainy days. It's a 
problem which probably won't occur very often, but when it does, I 
think it can take painfully long to figure out what's wrong.
>
>>   But there is no bulletproof way to calculate such
>> a number
>
> Yeah, there are many places for mbufs to stay :( I would try:
>
> Mempool size = sum(TX descriptors over all TX queues)
>      + sum(rx_free_thresh over all RX queues)
>      + (mempool cache size * (number of lcores - 1))
>      + (burst size * number of lcores)

It heavily depends on what your application does, and I think it's easy 
to make a mistake in these calculations.

>
>> I'm thinking about a foolproof way, which is exposing functions like
>> ixgbe_tx_free_bufs from the PMDs, so the application can call it as a last
>> resort to avoid deadlock.
>
> Have a look at rte_eth_dev_tx_queue_stop()/start(). Some NICs (e.g.
> ixgbe) actually reset the queue and free all the mbufs.

That's a bit drastic; I just want to flush the finished TX buffers, 
even if tx_free_thresh hasn't been reached.
An easy option would be to use rte_eth_tx_burst(..., nb_pkts=0); I'm 
already using this to enforce TX completion when it's really needed. 
It checks tx_free_thresh like this:

	/* Check if the descriptor ring needs to be cleaned. */
	if ((txq->nb_tx_desc - txq->nb_tx_free) > txq->tx_free_thresh)
		i40e_xmit_cleanup(txq);

My idea is to extend this condition by adding " || nb_pkts == 0", so 
you can force a cleanup. But there might be others who use this same 
trick for manual TX completion and expect it to happen only when 
tx_free_thresh is reached.
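
Concretely, the sketch would be something like this (untested, and 
only the i40e path shown):

	/* In i40e_xmit_pkts(): an empty burst would force a cleanup
	 * pass even when the threshold hasn't been reached yet. */
	if (((txq->nb_tx_desc - txq->nb_tx_free) > txq->tx_free_thresh) ||
	    nb_pkts == 0)
		i40e_xmit_cleanup(txq);

The application side would then use an empty burst as an explicit 
"flush completed TX mbufs" request:

	rte_eth_tx_burst(port_id, queue_id, NULL, 0);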

>
> Regards,
> Andriy
>

