patches for DPDK stable branches
From: Matt <zhouyates@gmail.com>
To: Stephen Hemminger <stephen@networkplumber.org>
Cc: dev@dpdk.org, Hemant@freescale.com, stable@dpdk.org
Subject: Re: [PATCH v2] kni: fix possible alloc_q starvation when mbufs are exhausted
Date: Fri, 11 Nov 2022 17:12:20 +0800	[thread overview]
Message-ID: <CABLiTuzGXvJ1MANxb21ODK60WgfUvUY7fcvz+pYURSf+=Be0nw@mail.gmail.com> (raw)
In-Reply-To: <20221109083909.6536bce8@hermes.local>


On Thu, Nov 10, 2022 at 12:39 AM Stephen Hemminger
<stephen@networkplumber.org> wrote:

> On Wed,  9 Nov 2022 14:04:34 +0800
> Yangchao Zhou <zhouyates@gmail.com> wrote:
>
> > In some scenarios, mbufs returned by rte_kni_rx_burst are not freed
> > immediately, so kni_allocate_mbufs may fail without the caller noticing.
> >
> > Even worse, when alloc_q is completely exhausted, kni_net_tx in
> > rte_kni.ko will drop all tx packets. kni_allocate_mbufs is never
> > called again, even if the mbufs are eventually freed.
> >
> > In this patch, we always try to allocate mbufs for alloc_q.
> >
> > There is no need to worry about alloc_q receiving too many mbufs; in
> > fact, the old logic gradually fills up alloc_q as well.
> > The cost of the additional calls to kni_allocate_mbufs should also be
> > acceptable.
> >
> > Fixes: 3e12a98fe397 ("kni: optimize Rx burst")
> > Cc: Hemant@freescale.com
> > Cc: stable@dpdk.org
> >
> > Signed-off-by: Yangchao Zhou <zhouyates@gmail.com>
>
> Since fifo_get returning 0 (no buffers) is very common, would this
> change impact performance?
>
It does add a little cost, but there is no extra mbuf allocation or
deallocation.
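
For reference, here is a rough sketch of what the change amounts to,
paraphrased from memory of rte_kni_rx_burst() in lib/kni/rte_kni.c; it is
not the literal v2 diff:

    unsigned
    rte_kni_rx_burst(struct rte_kni *kni, struct rte_mbuf **mbufs,
                     unsigned int num)
    {
            unsigned int ret = kni_fifo_get(kni->tx_q, (void **)mbufs, num);

            /* Previously this refill ran only when ret != 0.  If alloc_q
             * and the mempool drained at the same time, ret stayed 0
             * forever and the refill was never attempted again, even after
             * mbufs were returned to the pool. */
            kni_allocate_mbufs(kni);

            return ret;
    }

As far as I can tell, kni_allocate_mbufs() only tops alloc_q up to its free
slot count, so the unconditional call stays cheap when the queue is already
full.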

>
> If the problem is pool draining, it might be better to make the pool
> bigger.
>
Yes, using a larger pool can avoid this problem. But that may waste
resources, and calculating the full requirement is a challenge for
developers, since it involves the mempool caching mechanism, the IP
fragment cache, the ARP cache, NIC tx queues, other transit queues, etc.
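
Just to illustrate why that calculation is hard, a purely hypothetical
back-of-envelope (none of the names or numbers below come from a real
deployment):

    /* Every term below can hold mbufs that are temporarily unavailable
     * for refilling alloc_q; the values are made up for illustration. */
    #define NB_RXD          1024    /* NIC rx descriptors        */
    #define NB_TXD          1024    /* NIC tx descriptors        */
    #define KNI_FIFO_SIZE   1024    /* alloc_q / tx_q entries    */
    #define MEMPOOL_CACHE    256    /* per-lcore mempool cache   */
    #define NB_LCORES          4
    #define APP_STASH       2048    /* IP frag, ARP, app queues  */

    /* The pool must stay larger than everything that can be in flight
     * at once, and several of these terms are hard to bound precisely. */
    #define MIN_POOL_SIZE   (NB_RXD + NB_TXD + KNI_FIFO_SIZE + \
                             MEMPOOL_CACHE * NB_LCORES + APP_STASH)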

Mbuf allocation failures can also occur in many NIC drivers, but there the
descriptor slot is left untouched when the allocation fails, so it can be
recovered by a later retry.
KNI currently has no such take-out-and-recover mechanism.
Implementing something similar to the NIC drivers is possible, but it
would require more changes and bring other overheads.
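
For comparison, the usual refill pattern in a NIC rx path looks roughly
like the generic sketch below (not any specific PMD): a slot whose
allocation fails is simply left empty and retried on the next poll, so
nothing is permanently lost.

    #include <rte_mbuf.h>
    #include <rte_mempool.h>

    /* Generic rx-ring refill sketch, not a real driver: on allocation
     * failure the slot is left empty so a later poll can retry it. */
    static void
    rxq_refill(struct rte_mempool *mp, struct rte_mbuf **ring,
               uint16_t nb_desc)
    {
            uint16_t i;

            for (i = 0; i < nb_desc; i++) {
                    if (ring[i] != NULL)
                            continue;       /* slot still holds an mbuf */
                    ring[i] = rte_pktmbuf_alloc(mp);
                    if (ring[i] == NULL)
                            break;          /* pool empty: retry next poll */
            }
    }

KNI's alloc_q, as I understand it, is a FIFO shared with the kernel and
keeps no per-slot state, so the refill has to be re-triggered from the
library side instead.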



Thread overview: 10+ messages
2022-11-09  5:13 [PATCH] " Yangchao Zhou
2022-11-09  6:04 ` [PATCH v2] " Yangchao Zhou
2022-11-09 16:39   ` Stephen Hemminger
2022-11-11  9:12     ` Matt [this message]
2022-12-09 16:15       ` Ferruh Yigit
2022-12-30  4:23   ` [PATCH v3] " Yangchao Zhou
2023-01-03 12:47     ` Ferruh Yigit
2023-01-04 11:57       ` Matt
2023-01-04 14:34         ` Ferruh Yigit
2023-03-11  9:16           ` Thomas Monjalon
