DPDK usage discussions
From: Thomas Monjalon <thomas@monjalon.net>
To: Steffen Weise <stweise.linux@gmail.com>
Cc: users@dpdk.org, Filip Janiszewski <contact@filipjaniszewski.com>
Subject: Re: [dpdk-users] MLX ConnectX-4 Discarding packets
Date: Wed, 29 Sep 2021 12:43:51 +0200
Message-ID: <19675912.xPyGqSisHP@thomas>
In-Reply-To: <1c8ec784-aebe-a081-e47d-dff6733917a4@filipjaniszewski.com>

Great, thanks for the update!


12/09/2021 11:32, Filip Janiszewski:
> Alright, nailed it down to a wrong preferred PCIe device in the BIOS
> configuration: it had not been changed after the NIC was moved to
> another PCIe slot.
> 
> Now the EPYC is doing really great, easily hitting the 100Gbps rate.
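> 
> For anyone who hits the same thing later: a quick sanity check is to
> confirm the slot actually trained at the expected PCIe rate (using the
> 81:00.1 BDF from the testpmd command below):
> 
> .
> # LnkSta should match LnkCap, e.g. 16GT/s x16 on PCIe4; a link
> # downgraded by BIOS settings shows up here before any DPDK counter
> sudo lspci -s 81:00.1 -vv | grep -E 'LnkCap|LnkSta'
> .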
> 
> Thanks
> 
> On 9/11/21 4:34 PM, Filip Janiszewski wrote:
> > I just wanted to add: while running the exact same testpmd on the other
> > machine, I don't get a single miss with the same traffic pattern:
> > 
> > .
> > testpmd> stop
> > Telling cores to stop...
> > Waiting for lcores to finish...
> > 
> >   ------- Forward Stats for RX Port= 0/Queue= 0 -> TX Port= 0/Queue= 0 -------
> >   RX-packets: 61711939       TX-packets: 0              TX-dropped: 0
> > 
> > 
> >   ------- Forward Stats for RX Port= 0/Queue= 1 -> TX Port= 0/Queue= 1 -------
> >   RX-packets: 62889424       TX-packets: 0              TX-dropped: 0
> > 
> > 
> >   ------- Forward Stats for RX Port= 0/Queue= 2 -> TX Port= 0/Queue= 2 -------
> >   RX-packets: 61914199       TX-packets: 0              TX-dropped: 0
> > 
> > 
> >   ------- Forward Stats for RX Port= 0/Queue= 3 -> TX Port= 0/Queue= 3 -------
> >   RX-packets: 63484438       TX-packets: 0              TX-dropped: 0
> > 
> > 
> >   ---------------------- Forward statistics for port 0 ----------------------
> >   RX-packets: 250000000      RX-dropped: 0             RX-total: 250000000
> >   TX-packets: 0              TX-dropped: 0             TX-total: 0
> > 
> > ----------------------------------------------------------------------------
> > 
> >   +++++++++++++++ Accumulated forward statistics for all ports+++++++++++++++
> >   RX-packets: 250000000      RX-dropped: 0             RX-total: 250000000
> >   TX-packets: 0              TX-dropped: 0             TX-total: 0
> > 
> > ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
> > .
> > 
> > In the lab I have the EPYC connected directly to the Xeon over a 100GbE
> > link, both on the same RHEL 8.4 and the same DPDK 21.02, running:
> > 
> > .
> > ./dpdk-testpmd -l 21-31 -n 8 -w 81:00.1  -- -i --rxq=4 --txq=4
> > --burst=64 --forward-mode=rxonly --rss-ip --total-num-mbufs=4194304
> > --nb-cores=4
> > .
> > 
> > and sending from the other end with pktgen, the EPYC loses tons of
> > packets (see my previous email) while the Xeon doesn't lose anything.
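> > 
> > One more thing I'm double-checking on the EPYC side is that the lcores
> > in -l sit on the same NUMA node as the NIC (BDF as in the command
> > above), since on EPYC a remote node means every packet crosses the
> > fabric:
> > 
> > .
> > # NUMA node the NIC hangs off (-1 means no locality info exposed)
> > cat /sys/bus/pci/devices/0000:81:00.1/numa_node
> > # which CPUs belong to each node, to pick -l accordingly
> > lscpu | grep -i numa
> > .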
> > 
> > *Confusion!*
> > 
> > On 9/11/21 4:19 PM, Filip Janiszewski wrote:
> >> Thanks,
> >>
> >> I knew that document and we've implemented many of those settings/rules,
> >> but perhaps there's a crucial one I've forgotten? I wonder which one.
> >>
> >> Anyway, increasing the number of queues hurts performance here: while
> >> sending 250M packets over a 100GbE link to an Intel E810-CQDA2 NIC
> >> mounted on the EPYC Milan server, I see:
> >>
> >> .
> >> 1 queue,  30Gbps, ~45Mpps, 64B frames = imiss: 54,590,111
> >> 2 queues, 30Gbps, ~45Mpps, 64B frames = imiss: 79,394,138
> >> 4 queues, 30Gbps, ~45Mpps, 64B frames = imiss: 87,414,030
> >> .
> >>
> >> This is with DPDK 21.02 on RHEL 8.4. I can't observe this situation
> >> when capturing on my Intel server, where increasing the queues leads to
> >> better performance (with the same test input set I drop with one queue,
> >> but I no longer drop anything with 2 on the Intel server).
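> >> 
> >> If anyone wants to reproduce the comparison: testpmd's stock counters
> >> report the same missed-packet count as RX-missed / rx_missed_errors,
> >> so it can be sampled and reset between runs from the prompt:
> >> 
> >> .
> >> testpmd> show port stats 0
> >> testpmd> clear port stats 0
> >> .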
> >>
> >> A customer with a brand new EPYC Milan server in his lab has observed
> >> this scenario as well, which is a bit of a worry, but again it might be
> >> some config/compilation issue we need to deal with?
> >>
> >> BTW, the same issue can be reproduced with testpmd, using 4 queues and
> >> the same input data set (250M 64-byte frames at 30Gbps):
> >>
> >> .
> >> testpmd> stop
> >> Telling cores to stop...
> >> Waiting for lcores to finish...
> >>
> >>   ------- Forward Stats for RX Port= 0/Queue= 0 -> TX Port= 0/Queue= 0 -------
> >>   RX-packets: 41762999       TX-packets: 0              TX-dropped: 0
> >>
> >>
> >>   ------- Forward Stats for RX Port= 0/Queue= 1 -> TX Port= 0/Queue= 1 -------
> >>   RX-packets: 40152306       TX-packets: 0              TX-dropped: 0
> >>
> >>
> >>   ------- Forward Stats for RX Port= 0/Queue= 2 -> TX Port= 0/Queue= 2 -------
> >>   RX-packets: 41153402       TX-packets: 0              TX-dropped: 0
> >>
> >>
> >>   ------- Forward Stats for RX Port= 0/Queue= 3 -> TX Port= 0/Queue= 3 -------
> >>   RX-packets: 38341370       TX-packets: 0              TX-dropped: 0
> >>
> >>
> >>   ---------------------- Forward statistics for port 0 ----------------------
> >>   RX-packets: 161410077      RX-dropped: 88589923      RX-total: 250000000
> >>   TX-packets: 0              TX-dropped: 0             TX-total: 0
> >>
> >> ----------------------------------------------------------------------------
> >> .
> >>
> >> .
> >> testpmd> show port xstats 0
> >> ###### NIC extended statistics for port 0
> >> rx_good_packets: 161410081
> >> tx_good_packets: 0
> >> rx_good_bytes: 9684605284
> >> tx_good_bytes: 0
> >> rx_missed_errors: 88589923
> >> .
> >>
> >> I can't figure out what's wrong here...
> >>
> >>
> >> On 9/11/21 12:20 PM, Steffen Weise wrote:
> >>> Hi Filip,
> >>>
> >>> I have not seen the same issues.
> >>> Are you aware of this tuning guide? I applied it and had no issues with
> >>> an Intel 100G NIC.
> >>>
> >>> HPC Tuning Guide for AMD EPYC Processors
> >>> http://developer.amd.com/wp-content/resources/56420.pdf
> >>>
> >>> Hope it helps.
> >>>
> >>> Cheers,
> >>> Steffen Weise
> >>>
> >>>
> >>>> On 11.09.2021 at 10:56, Filip Janiszewski
> >>>> <contact@filipjaniszewski.com> wrote:
> >>>>
> >>>> I ran more tests,
> >>>>
> >>>> This AMD server is a bit confusing: I can tune it to capture 28Mpps
> >>>> (64-byte frames) from one single core, so I would assume that using one
> >>>> more core would at least increase the capture capability a bit, but it
> >>>> doesn't: barely 1% more speed, and it drops regardless of how many
> >>>> queues are configured. I've not observed this on the Intel server,
> >>>> where adding more queues/cores scales to higher throughput.
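> >>>> 
> >>>> One way to watch where the drops accumulate while the capture app is
> >>>> running is to sample the port counters from a secondary process with
> >>>> the stock dpdk-proc-info tool (assuming port 0 here):
> >>>> 
> >>>> .
> >>>> # attaches as a DPDK secondary process and dumps extended stats,
> >>>> # so counters like rx_missed_errors can be polled live
> >>>> dpdk-proc-info -- -p 0x1 --xstats
> >>>> .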
> >>>>
> >>>> This issue has now been verified with both Mellanox and Intel (E810
> >>>> series, 100GbE) NICs.
> >>>>
> >>>> Anybody encountered anything similar?
> >>>>
> >>>> Thanks
> >>>>
> >>>>> On 9/10/21 3:34 PM, Filip Janiszewski wrote:
> >>>>> Hi,
> >>>>>
> >>>>> I've switched a 100GbE MLX ConnectX-4 card from an Intel Xeon server to
> >>>>> an AMD EPYC server (with a 75F3 CPU, 256GiB of RAM and PCIe4 lanes),
> >>>>> and using the same capture software we can't get any faster than
> >>>>> 10Gbps: when exceeding that speed, regardless of the number of queues
> >>>>> configured, the rx_discards_phy counter starts to rise and packets are
> >>>>> lost in huge amounts.
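> >>>>> 
> >>>>> Side note: since mlx5 is a bifurcated driver, the kernel netdev stays
> >>>>> visible even while DPDK owns the queues, so the same counter can be
> >>>>> watched from the shell (the interface name below is just a placeholder
> >>>>> for whatever the port is called here):
> >>>>> 
> >>>>> .
> >>>>> # rx_discards_phy counts packets the NIC dropped for lack of
> >>>>> # receive buffers, i.e. the host is not draining fast enough
> >>>>> watch -n1 "ethtool -S enp129s0f1 | grep rx_discards_phy"
> >>>>> .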
> >>>>>
> >>>>> On the Xeon machine, I was able to get easily to 50Gbps with 4 queues.
> >>>>>
> >>>>> Is there any specific DPDK configuration that we might want to set up
> >>>>> for these AMD servers? The software is DPDK-based, so I wonder if some
> >>>>> build option is missing somewhere.
> >>>>>
> >>>>> What else might I want to look at to investigate this issue?
> >>>>>
> >>>>> Thanks





Thread overview: 7+ messages
2021-09-10 13:34 Filip Janiszewski
2021-09-11  8:56 ` Filip Janiszewski
2021-09-11 10:20   ` Steffen Weise
2021-09-11 14:19     ` Filip Janiszewski
2021-09-11 14:34       ` Filip Janiszewski
2021-09-12  9:32         ` Filip Janiszewski
2021-09-29 10:43           ` Thomas Monjalon [this message]
