From: Matt
Date: Wed, 4 Jan 2023 19:57:29 +0800
Subject: Re: [PATCH v3] kni: fix possible alloc_q starvation when mbufs are exhausted
To: Ferruh Yigit <ferruh.yigit@amd.com>
Cc: dev@dpdk.org, stephen@networkplumber.org, stable@dpdk.org
In-Reply-To: <042de5cd-c410-7925-efe9-bea547fec736@amd.com>
References: <20221109060434.2012064-1-zhouyates@gmail.com> <20221230042338.1670768-1-zhouyates@gmail.com> <042de5cd-c410-7925-efe9-bea547fec736@amd.com>
List-Id: patches for DPDK stable branches

Hi Ferruh,

In my case, the traffic is not large, so I can't see the impact.
I also tested under high load (>2 Mpps with 2 DPDK cores and 2 kernel
threads) and found no significant difference in performance either.
I think the reason is that the 'kni_fifo_count(kni->alloc_q) == 0'
branch is not reached under high load.

On Tue, Jan 3, 2023 at 8:47 PM Ferruh Yigit wrote:
> On 12/30/2022 4:23 AM, Yangchao Zhou wrote:
> > In some scenarios, mbufs returned by rte_kni_rx_burst are not freed
> > immediately, so kni_allocate_mbufs may fail without us knowing.
> >
> > Even worse, when alloc_q is completely exhausted, kni_net_tx in
> > rte_kni.ko will drop all tx packets. kni_allocate_mbufs is never
> > called again, even if the mbufs are eventually freed.
> >
> > In this patch, we try to allocate mbufs for alloc_q when it is empty.
> >
> > According to historical experience, the performance bottleneck of KNI
> > is often the usleep_range of the kni thread in rte_kni.ko.
> > The check of kni_fifo_count is trivial and the cost should be acceptable.
> >
>
> Hi Yangchao,
>
> Are you observing any performance impact with this change in your use case?
>
> > Fixes: 3e12a98fe397 ("kni: optimize Rx burst")
> > Cc: stable@dpdk.org
> >
> > Signed-off-by: Yangchao Zhou <zhouyates@gmail.com>
> > ---
> >  lib/kni/rte_kni.c | 4 ++--
> >  1 file changed, 2 insertions(+), 2 deletions(-)
> >
> > diff --git a/lib/kni/rte_kni.c b/lib/kni/rte_kni.c
> > index 8ab6c47153..bfa6a001ff 100644
> > --- a/lib/kni/rte_kni.c
> > +++ b/lib/kni/rte_kni.c
> > @@ -634,8 +634,8 @@ rte_kni_rx_burst(struct rte_kni *kni, struct rte_mbuf **mbufs, unsigned int num)
> >  {
> >  	unsigned int ret = kni_fifo_get(kni->tx_q, (void **)mbufs, num);
> >
> > -	/* If buffers removed, allocate mbufs and then put them into alloc_q */
> > -	if (ret)
> > +	/* If buffers removed or alloc_q is empty, allocate mbufs and then put them into alloc_q */
> > +	if (ret || (kni_fifo_count(kni->alloc_q) == 0))
> >  		kni_allocate_mbufs(kni);
> >
> >  	return ret;