From: Tony Hart
Date: Mon, 28 Oct 2024 10:34:42 -0400
Subject: Re: mlx5: imissed versus prio0_buf_discards
To: Dariusz Sosnowski
Cc: users@dpdk.org

Hi Dariusz,

Thanks for the reply.
I'm using 68-byte UDP packets with diverse SIPs/DIPs, and the traffic is balanced across the cores (I've included some logging below).

The flows and groups I have are:

group 0: eth => jump group 2
group 2: eth / ipv4 => rss to sw queues
group 2: eth => rss to sw queues (lower priority)
group 4: eth => random N => to sw queue 0
group 4: eth => drop (lower priority)

Group 4 is not actually referenced from anywhere; I assume this does not affect performance.
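For reference, here is a rough sketch of what the group-0 jump rule and the group-2 eth/ipv4 RSS rule look like. This uses the synchronous rte_flow API for brevity; my app actually creates the rules through the async HWS template API, and NB_RX_QUEUES, the priorities, and the helper names here are just illustrative:

#include <rte_ethdev.h>
#include <rte_flow.h>

#define NB_RX_QUEUES 32

/* group 0, priority 1: match anything, jump to group 2 */
static struct rte_flow *
create_jump_rule(uint16_t port_id, struct rte_flow_error *err)
{
	struct rte_flow_attr attr = { .group = 0, .priority = 1, .ingress = 1 };
	struct rte_flow_item pattern[] = {
		{ .type = RTE_FLOW_ITEM_TYPE_END },
	};
	struct rte_flow_action_jump jump = { .group = 2 };
	struct rte_flow_action actions[] = {
		{ .type = RTE_FLOW_ACTION_TYPE_JUMP, .conf = &jump },
		{ .type = RTE_FLOW_ACTION_TYPE_END },
	};
	return rte_flow_create(port_id, &attr, pattern, actions, err);
}

/* group 2, priority 548: eth/ipv4 -> count + RSS over queues 0-31 */
static struct rte_flow *
create_rss_rule(uint16_t port_id, struct rte_flow_error *err)
{
	static uint16_t queues[NB_RX_QUEUES];
	struct rte_flow_attr attr = { .group = 2, .priority = 548, .ingress = 1 };
	struct rte_flow_item pattern[] = {
		{ .type = RTE_FLOW_ITEM_TYPE_ETH },
		{ .type = RTE_FLOW_ITEM_TYPE_IPV4 },
		{ .type = RTE_FLOW_ITEM_TYPE_END },
	};
	struct rte_flow_action_count count = { 0 };
	struct rte_flow_action_rss rss = {
		.types = RTE_ETH_RSS_IP | RTE_ETH_RSS_UDP,
		.queue_num = NB_RX_QUEUES,
		.queue = queues,
	};
	struct rte_flow_action actions[] = {
		{ .type = RTE_FLOW_ACTION_TYPE_COUNT, .conf = &count },
		{ .type = RTE_FLOW_ACTION_TYPE_RSS, .conf = &rss },
		{ .type = RTE_FLOW_ACTION_TYPE_END },
	};
	uint16_t i;

	for (i = 0; i < NB_RX_QUEUES; i++)
		queues[i] = i;
	return rte_flow_create(port_id, &attr, pattern, actions, err);
}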
Here is the logging:

2024-10-28 10:25:25  INFO: <0>  stats, ipackets:799741213, opackets:799741213
2024-10-28 10:25:25  INFO: <0> flow 0x188195140 grp:pri 0:001 end end => jump jump 2/end end
2024-10-28 10:25:25  INFO: <0> flow 0x188174500 grp:pri 2:548 eth eth/ip4 ip4/end end => count count/rss rss 0-31/end end (799741213, 51183437632)
2024-10-28 10:25:25  INFO: <0> flow 0x1883d73c0 grp:pri 2:549 eth eth/end end => count count/rss rss 0-31/end end (0, 0)
2024-10-28 10:25:25  INFO: <0> flow 0x1883b67c0 grp:pri 4:001 random random/end end => rss rss 0-31/end end
2024-10-28 10:25:25  INFO: <0> flow 0x1883d7140 grp:pri 4:549 end end => drop drop/end end
2024-10-28 10:25:35  INFO: <0> flow 0x188195140 grp:pri 0:001 end end => jump jump 2/end end
2024-10-28 10:25:35  INFO: <0> flow 0x188174500 grp:pri 2:548 eth eth/ip4 ip4/end end => count count/rss rss 0-31/end end (799741213, 51183437632)
2024-10-28 10:25:35  INFO: <0> flow 0x1883d73c0 grp:pri 2:549 eth eth/end end => count count/rss rss 0-31/end end (0, 0)
2024-10-28 10:25:35  INFO: <0> flow 0x1883b67c0 grp:pri 4:001 random random/end end => rss rss 0-31/end end
2024-10-28 10:25:35  INFO: <0> flow 0x1883d7140 grp:pri 4:549 end end => drop drop/end end
2024-10-28 10:25:40  INFO: <0> xstats,
  rx_good_packets:799741213, tx_good_packets:799741213,
  rx_good_bytes:51183437632, tx_good_bytes:51183437632,
  rx_q0_packets:24987083, rx_q0_bytes:1599173312, rx_q1_packets:24988480, rx_q1_bytes:1599262720,
  rx_q2_packets:24995304, rx_q2_bytes:1599699456, rx_q3_packets:24998711, rx_q3_bytes:1599917504,
  rx_q4_packets:24989350, rx_q4_bytes:1599318400, rx_q5_packets:24991546, rx_q5_bytes:1599458944,
  rx_q6_packets:24991647, rx_q6_bytes:1599465408, rx_q7_packets:24995441, rx_q7_bytes:1599708224,
  rx_q8_packets:24989564, rx_q8_bytes:1599332096, rx_q9_packets:24990980, rx_q9_bytes:1599422720,
  rx_q10_packets:24996265, rx_q10_bytes:1599760960, rx_q11_packets:24996320, rx_q11_bytes:1599764480,
  rx_q12_packets:24987707, rx_q12_bytes:1599213248, rx_q13_packets:24983936, rx_q13_bytes:1598971904,
  rx_q14_packets:24994621, rx_q14_bytes:1599655744, rx_q15_packets:24991660, rx_q15_bytes:1599466240,
  tx_q0_packets:24987083, tx_q0_bytes:1599173312, tx_q1_packets:24988480, tx_q1_bytes:1599262720,
  tx_q2_packets:24995304, tx_q2_bytes:1599699456, tx_q3_packets:24998711, tx_q3_bytes:1599917504,
  tx_q4_packets:24989350, tx_q4_bytes:1599318400, tx_q5_packets:24991546, tx_q5_bytes:1599458944,
  tx_q6_packets:24991647, tx_q6_bytes:1599465408, tx_q7_packets:24995441, tx_q7_bytes:1599708224,
  tx_q8_packets:24989564, tx_q8_bytes:1599332096, tx_q9_packets:24990980, tx_q9_bytes:1599422720,
  tx_q10_packets:24996265, tx_q10_bytes:1599760960, tx_q11_packets:24996320, tx_q11_bytes:1599764480,
  tx_q12_packets:24987707, tx_q12_bytes:1599213248, tx_q13_packets:24983936, tx_q13_bytes:1598971904,
  tx_q14_packets:24994621, tx_q14_bytes:1599655744, tx_q15_packets:24991660, tx_q15_bytes:1599466240,
  rx_unicast_bytes:51183437632, rx_unicast_packets:799741213,
  tx_unicast_bytes:51183437632, tx_unicast_packets:799741213,
  tx_phy_packets:799741213, rx_phy_packets:1500000000,
  rx_prio0_buf_discard_packets:700258787,
  tx_phy_bytes:54382402484, rx_phy_bytes:102000000000
2024-10-28 10:25:40  INFO: <0>  stats, ipackets:799741213, opackets:799741213
^C2024-10-28 10:25:42  INFO: forwarder13 exiting on core 21 n_rx_pkts: 24987707, n_sample_pkts: 0, max_rx_burst: 36, max_queue_depth: 30
2024-10-28 10:25:42  INFO: forwarder5 exiting on core 13 n_rx_pkts: 24989350, n_sample_pkts: 0, max_rx_burst: 34, max_queue_depth: 30
2024-10-28 10:25:42  INFO: forwarder8 exiting on core 16 n_rx_pkts: 24995441, n_sample_pkts: 0, max_rx_burst: 36, max_queue_depth: 36
2024-10-28 10:25:42  INFO: forwarder14 exiting on core 22 n_rx_pkts: 24983936, n_sample_pkts: 0, max_rx_burst: 36, max_queue_depth: 30
2024-10-28 10:25:42  INFO: forwarder19 exiting on core 27 n_rx_pkts: 24995349, n_sample_pkts: 0, max_rx_burst: 38, max_queue_depth: 34
2024-10-28 10:25:42  INFO: forwarder4 exiting on core 12 n_rx_pkts: 24998711, n_sample_pkts: 0, max_rx_burst: 33, max_queue_depth: 33
2024-10-28 10:25:42  INFO: forwarder7 exiting on core 15 n_rx_pkts: 24991647, n_sample_pkts: 0, max_rx_burst: 38, max_queue_depth: 34
2024-10-28 10:25:42  INFO: forwarder1 exiting on core 9 n_rx_pkts: 24987083, n_sample_pkts: 0, max_rx_burst: 32, max_queue_depth: 30
2024-10-28 10:25:42  INFO: forwarder11 exiting on core 19 n_rx_pkts: 24996265, n_sample_pkts: 0, max_rx_burst: 38, max_queue_depth: 34
2024-10-28 10:25:42  INFO: forwarder15 exiting on core 23 n_rx_pkts: 24994621, n_sample_pkts: 0, max_rx_burst: 38, max_queue_depth: 34
2024-10-28 10:25:42  INFO: forwarder10 exiting on core 18 n_rx_pkts: 24990980, n_sample_pkts: 0, max_rx_burst: 38, max_queue_depth: 32
2024-10-28 10:25:42  INFO: forwarder3 exiting on core 11 n_rx_pkts: 24995304, n_sample_pkts: 0, max_rx_burst: 25, max_queue_depth: 25
2024-10-28 10:25:42  INFO: forwarder2 exiting on core 10 n_rx_pkts: 24988480, n_sample_pkts: 0, max_rx_burst: 32, max_queue_depth: 30
2024-10-28 10:25:42  INFO: forwarder6 exiting on core 14 n_rx_pkts: 24991546, n_sample_pkts: 0, max_rx_burst: 34, max_queue_depth: 30
2024-10-28 10:25:42  INFO: forwarder12 exiting on core 20 n_rx_pkts: 24996320, n_sample_pkts: 0, max_rx_burst: 28, max_queue_depth: 28
2024-10-28 10:25:42  INFO: forwarder9 exiting on core 17 n_rx_pkts: 24989564, n_sample_pkts: 0, max_rx_burst: 38, max_queue_depth: 34
2024-10-28 10:25:42  INFO: forwarder17 exiting on core 25 n_rx_pkts: 24986227, n_sample_pkts: 0, max_rx_burst: 36, max_queue_depth: 32
2024-10-28 10:25:42  INFO: forwarder20 exiting on core 28 n_rx_pkts: 24994610, n_sample_pkts: 0, max_rx_burst: 37, max_queue_depth: 35
2024-10-28 10:25:42  INFO: forwarder16 exiting on core 24 n_rx_pkts: 24991660, n_sample_pkts: 0, max_rx_burst: 34, max_queue_depth: 34
2024-10-28 10:25:42  INFO: forwarder26 exiting on core 34 n_rx_pkts: 24994289, n_sample_pkts: 0, max_rx_burst: 29, max_queue_depth: 24
2024-10-28 10:25:42  INFO: forwarder23 exiting on core 31 n_rx_pkts: 24986582, n_sample_pkts: 0, max_rx_burst: 34, max_queue_depth: 34
2024-10-28 10:25:42  INFO: forwarder22 exiting on core 30 n_rx_pkts: 24995067, n_sample_pkts: 0, max_rx_burst: 38, max_queue_depth: 34
2024-10-28 10:25:42  INFO: forwarder21 exiting on core 29 n_rx_pkts: 24994793, n_sample_pkts: 0, max_rx_burst: 29, max_queue_depth: 23
2024-10-28 10:25:42  INFO: forwarder31 exiting on core 39 n_rx_pkts: 24993460, n_sample_pkts: 0, max_rx_burst: 26, max_queue_depth: 22
2024-10-28 10:25:42  INFO: forwarder29 exiting on core 37 n_rx_pkts: 24987219, n_sample_pkts: 0, max_rx_burst: 30, max_queue_depth: 30
2024-10-28 10:25:42  INFO: forwarder24 exiting on core 32 n_rx_pkts: 24986863, n_sample_pkts: 0, max_rx_burst: 38, max_queue_depth: 34
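(For reference, the stats/xstats dumps above come from the standard ethdev counters API; a minimal sketch of how such a dump can be produced is below. The helper name and the "buf_discard" substring filter are illustrative, not my app's actual code.)

#include <inttypes.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#include <rte_ethdev.h>

/* Illustrative helper: print imissed from the basic stats next to any
 * "buf_discard" extended counters for one port. */
static void
dump_drop_counters(uint16_t port_id)
{
	struct rte_eth_stats stats;
	struct rte_eth_xstat *xstats = NULL;
	struct rte_eth_xstat_name *names = NULL;
	int n, i;

	/* imissed counts drops for lack of free Rx descriptors (SW side). */
	if (rte_eth_stats_get(port_id, &stats) == 0)
		printf("imissed: %" PRIu64 "\n", stats.imissed);

	/* First call sizes the xstats arrays, second call fills them. */
	n = rte_eth_xstats_get(port_id, NULL, 0);
	if (n <= 0)
		return;
	xstats = calloc(n, sizeof(*xstats));
	names = calloc(n, sizeof(*names));
	if (xstats == NULL || names == NULL)
		goto out;
	if (rte_eth_xstats_get(port_id, xstats, n) != n ||
	    rte_eth_xstats_get_names(port_id, names, n) != n)
		goto out;

	/* rx_prio*_buf_discard_packets counts drops in the NIC Rx buffer (HW side). */
	for (i = 0; i < n; i++)
		if (strstr(names[i].name, "buf_discard") != NULL)
			printf("%s: %" PRIu64 "\n", names[i].name, xstats[i].value);
out:
	free(xstats);
	free(names);
}

Calling something like this once per reporting interval per port is enough to see whether imissed or the buf-discard xstat is the counter that moves.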
2024-10-28 10:25:42  INFO: forwarder32 exiting on core 40 n_rx_pkts: 24991661, n_sample_pkts: 0, max_rx_burst: 38, max_queue_depth: 34
2024-10-28 10:25:42  INFO: forwarder18 exiting on core 26 n_rx_pkts: 24991276, n_sample_pkts: 0, max_rx_burst: 38, max_queue_depth: 36
2024-10-28 10:25:42  INFO: forwarder30 exiting on core 38 n_rx_pkts: 24989407, n_sample_pkts: 0, max_rx_burst: 30, max_queue_depth: 30
2024-10-28 10:25:42  INFO: forwarder25 exiting on core 33 n_rx_pkts: 24994537, n_sample_pkts: 0, max_rx_burst: 26, max_queue_depth: 22
2024-10-28 10:25:42  INFO: forwarder28 exiting on core 36 n_rx_pkts: 24996627, n_sample_pkts: 0, max_rx_burst: 26, max_queue_depth: 22
2024-10-28 10:25:42  INFO: forwarder27 exiting on core 35 n_rx_pkts: 24994631, n_sample_pkts: 0, max_rx_burst: 34, max_queue_depth: 32

On Fri, Oct 25, 2024 at 8:03 AM Dariusz Sosnowski <dsosnowski@nvidia.com> wrote:

> Hi Tony,
>
> I apologize for the late response.
>
> > From: Tony Hart <tony.hart@domainhart.com>
> > Sent: Saturday, October 12, 2024 17:09
> > To: users@dpdk.org
> > Subject: mlx5: imissed versus prio0_buf_discards
> >
> > External email: Use caution opening links or attachments
> >
> > I have a simple DPDK app that receives packets via RSS from a CX7
> > (400G). The app uses 16 queues across 16 cores. What I see is dropped
> > packets even at only 50Mpps.
> >
> > Looking at rte_eth_port_xstats() I see rx_prio0_buf_discard_packets
> > matches the number of packets dropped, however the imissed counter (from
> > rte_eth_port_stats) is 0. Indeed, when I look at the rx_queue depths from
> > each thread in the app, they barely reach 30 entries (I'm using the
> > default number of queue descs).
> >
> > What is the difference between the rx_prio0_buf_discards and imissed
> > counters, and why would rx_prio0_buf_discards increase but not imissed?
>
> Both counters measure packet drops, but at different levels:
>
> - imissed - Measures drops caused by a lack of free descriptors in the Rx
>   queue. This indicates that SW cannot keep up with the current packet rate.
> - rx_prio0_buf_discards - Measures drops caused by a lack of free space in
>   the NIC's Rx buffer. This indicates that HW cannot keep up with the
>   current packet rate.
>
> What kind of traffic are you generating?
> What kind of flow tables and rules do you create?
> In your application, do you see that packets are roughly equally
> distributed across all 16 Rx queues?
>
> > many thanks,
> > tony
> >
> > fyi: this is using DPDK 24.07 and the HWS rte_flow API to set up the RSS
> > flow. Firmware is 28.41
>
> Best regards,
> Dariusz Sosnowski

--
tony