Date: Sun, 8 Jan 2023 09:13:38 -0800
From: Stephen Hemminger <stephen@networkplumber.org>
To: "Ruslan R. Laishev" <zator@yandex.ru>
Cc: Dmitry Kozlyuk <dmitry.kozliuk@gmail.com>, "users@dpdk.org"
 <users@dpdk.org>
Subject: Re: One transmitter worker stop after "some time" (Was: Performance
 "problem" or how to get power of the DPDK )
Message-ID: <20230108091338.3ec17bcf@hermes.local>
In-Reply-To: <1241341673182815@mail.yandex.ru>
References: <207831672055161@mail.yandex.ru>
 <20221226160753.11db04bb@sovereign> <52451672060184@mail.yandex.ru>
 <270941672081661@mail.yandex.ru> <279421672082645@mail.yandex.ru>
 <20221226230451.690c3986@sovereign>
 <1241341673182815@mail.yandex.ru>

On Sun, 08 Jan 2023 16:23:13 +0300
Ruslan R. Laishev <zator@yandex.ru> wrote:

> Hello!
>
> According to the advice (from previous mails) I recoded my little app
> to use several lcore-queue pairs to generate traffic. Thanks, it works
> fine; I see 8 Gbps+ now with 2 workers.
> But! Now I have another situation which I cannot resolve. With 2
> workers (every worker runs on an assigned lcore and puts packets into
> a separate TX queue): after start of the app both workers run for some
> time, but at "some moment" one worker can no longer get mbufs from
> rte_pktmbuf_alloc_bulk(). Just for demonstration, a piece of the stats:
>
> At start:
> 08-01-2023 16:03:20.065 58628 [CPPROC\s_proc_auxilary:822] %TTR2CP-I: [LCore:#001] TX/NoMbufs/Flush:1981397/0/1981397
> 08-01-2023 16:03:20.065 58628 [CPPROC\s_proc_auxilary:822] %TTR2CP-I: [LCore:#002] TX/NoMbufs/Flush:1989108/0/1989108
>
> Since "some moment":
> 08-01-2023 16:15:20.110 58628 [CPPROC\s_proc_auxilary:822] %TTR2CP-I: [LCore:#001] TX/NoMbufs/Flush:2197615/5778976181/2197631
> 08-01-2023 16:15:20.110 58628 [CPPROC\s_proc_auxilary:822] %TTR2CP-I: [LCore:#002] TX/NoMbufs/Flush:3952732/0/3952732
>
> 08-01-2023 16:15:30.111 58628 [CPPROC\s_proc_auxilary:822] %TTR2CP-I: [LCore:#001] TX/NoMbufs/Flush:2197615/5869388078/2197631
> 08-01-2023 16:15:30.111 58628 [CPPROC\s_proc_auxilary:822] %TTR2CP-I: [LCore:#002] TX/NoMbufs/Flush:3980054/0/3980054
>
> 08-01-2023 16:15:40.111 58628 [CPPROC\s_proc_auxilary:822] %TTR2CP-I: [LCore:#001] TX/NoMbufs/Flush:2197615/5959777107/2197631
> 08-01-2023 16:15:40.111 58628 [CPPROC\s_proc_auxilary:822] %TTR2CP-I: [LCore:#002] TX/NoMbufs/Flush:4007378/0/4007378
>
> 08-01-2023 16:15:50.112 58628 [CPPROC\s_proc_auxilary:822] %TTR2CP-I: [LCore:#001] TX/NoMbufs/Flush:2197615/6050173812/2197631
> 08-01-2023 16:15:50.112 58628 [CPPROC\s_proc_auxilary:822] %TTR2CP-I: [LCore:#002] TX/NoMbufs/Flush:4034699/0/4034699
>
> 08-01-2023 16:16:00.112 58628 [CPPROC\s_proc_auxilary:822] %TTR2CP-I: [LCore:#001] TX/NoMbufs/Flush:2197615/6140583818/2197631
> 08-01-2023 16:16:00.112 58628 [CPPROC\s_proc_auxilary:822] %TTR2CP-I: [LCore:#002] TX/NoMbufs/Flush:4062021/0/4062021
>
> So one worker works fine, as expected; the second worker permanently
> fails to get mbufs.
> Is there anything I should check?
> Thanks in advance!
>
> ---
> Regards,
> Ruslan R. Laishev
> OpenVMS bigot, natural born system/network progger, C contractor.
> +79013163222
> +79910009922

Two things to look at. First, is the allocated mbuf pool big enough to
handle the maximum number of mbufs in flight in your application? For
Tx, that is the number of transmit queues multiplied by the number of
transmit descriptors per ring, plus some additional buffers for
staging. Similar for the receive side.
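
As a rough sketch of that arithmetic (the queue count, ring size and
staging margin below are placeholders, not values from your app):

#include <stdlib.h>
#include <rte_eal.h>
#include <rte_errno.h>
#include <rte_lcore.h>
#include <rte_mbuf.h>

#define NB_TX_QUEUES  2     /* one TX queue per worker (assumption) */
#define NB_TX_DESC    1024  /* descriptors per TX ring (assumption) */
#define STAGING_MBUFS 2048  /* headroom for bursts being built (assumption) */

static struct rte_mempool *
create_tx_pool(void)
{
	/* Worst case: every descriptor on every TX ring holds an mbuf,
	 * plus whatever the workers are staging.  The per-lcore mempool
	 * caches pin mbufs as well, so round up generously. */
	unsigned int nb_mbufs = NB_TX_QUEUES * NB_TX_DESC + STAGING_MBUFS;

	struct rte_mempool *pool = rte_pktmbuf_pool_create(
		"tx_pool", nb_mbufs,
		256,                          /* per-lcore cache size */
		0, RTE_MBUF_DEFAULT_BUF_SIZE,
		rte_socket_id());

	if (pool == NULL)
		rte_exit(EXIT_FAILURE, "mbuf pool: %s\n",
			 rte_strerror(rte_errno));
	return pool;
}

If the pool is only slightly larger than the combined ring sizes, the
in-flight packets plus the per-lcore caches can exhaust it in exactly
the way your stats show.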

Second, transmit mbufs need to get cleaned up by the device driver
after they are sent. Depending on the device, this may be triggered by
the receive path, so polling for receive data may be needed even if you
aren't doing any receives.
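
If the driver only reclaims TX mbufs lazily, you can also nudge it
explicitly. A minimal sketch (port_id/queue_id are placeholders, and
rte_eth_tx_done_cleanup() is optional for PMDs, hence the fallback):

#include <rte_ethdev.h>
#include <rte_mbuf.h>

/* Hypothetical recovery path for a worker whose
 * rte_pktmbuf_alloc_bulk() calls start failing. */
static void
reclaim_tx_mbufs(uint16_t port_id, uint16_t queue_id)
{
	/* Ask the driver to free up to 64 already-sent mbufs;
	 * returns a negative errno (e.g. -ENOTSUP) on failure. */
	if (rte_eth_tx_done_cleanup(port_id, queue_id, 64) < 0) {
		/* Fallback: an empty receive poll -- on some devices
		 * the RX path is what harvests TX completions. */
		struct rte_mbuf *rx[32];
		uint16_t n = rte_eth_rx_burst(port_id, 0, rx, 32);

		rte_pktmbuf_free_bulk(rx, n);
	}
}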