From: Hrvoje Habjanic <hrvoje.habjanic@zg.ht.hr>
To: users@dpdk.org
Subject: [dpdk-users] DPDK TX problems
Date: Wed, 20 Mar 2019 10:22:24 +0100
Message-ID: <c188c7eb-ee23-8ca4-0e4a-69948a38f425@zg.ht.hr>
Hi.
I wrote an application using DPDK 17.11 (I also tried 18.11), and
while doing some performance testing I'm seeing very odd behavior.
To verify that this is not caused by my app, I repeated the same test
with the l2fwd example app, and I'm still confused by the results.
In short, I'm trying to push a lot of L2 packets through the DPDK
engine - packet processing is minimal. During the test I start with a
small number of packets per second and then gradually increase it to
find the limit. At some point I reach that limit and packets start to
get dropped. This is where things get weird.
Once I reach the peak packet rate (the rate at which packets start to
get dropped), I would expect that reducing the packet rate would make
the drops go away. But this is not the case. For example, say the peak
packet rate is 3.5 Mpps: at that point everything works fine.
Increasing the rate to 4.0 Mpps causes a lot of dropped packets. But
when I reduce the rate back to 3.5 Mpps, the app is still broken -
packets are still being dropped.
At this point I have to reduce the rate drastically (to 1.4 Mpps)
before the drops go away. The app is then unable to forward anything
beyond that 1.4 Mpps, despite the fact that it forwarded 3.5 Mpps at
the beginning! The only way to recover is to restart the app.
Also, sometimes the app simply stops forwarding packets altogether -
packets are still received (as seen by the counters), but the app is
unable to send anything back.
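For reference, in case it matters: the usual TX pattern in DPDK apps
is to free whatever rte_eth_tx_burst() does not accept (l2fwd does the
equivalent through the rte_eth_tx_buffer API, as far as I can tell),
so a full TX queue should not leak mbufs. A minimal sketch of that
pattern - port/queue ids and the burst size are placeholders, not my
actual code:

    #include <rte_ethdev.h>
    #include <rte_mbuf.h>

    /* Forward one burst; mbufs the TX queue does not accept are freed,
     * so they go back to the mempool instead of leaking. */
    static void
    forward_burst(uint16_t rx_port, uint16_t tx_port, uint16_t queue)
    {
            struct rte_mbuf *pkts[32];
            uint16_t nb_rx, nb_tx, i;

            nb_rx = rte_eth_rx_burst(rx_port, queue, pkts, 32);
            if (nb_rx == 0)
                    return;

            nb_tx = rte_eth_tx_burst(tx_port, queue, pkts, nb_rx);
            for (i = nb_tx; i < nb_rx; i++)
                    rte_pktmbuf_free(pkts[i]);
    }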
As mentioned, I see the same behavior with the l2fwd example app. I
tested both DPDK 17.11 and DPDK 18.11 - the results are the same.
My test environment is HP DL380G8, with 82599ES 10Gig (ixgbe) cards,
connected with Cisco nexus 9300 sw. On the other side is ixia test
appliance. Application is run in virtual machine (VM), using KVM
(openstack, with sriov enabled, and numa restrictions). I did check that
VM is using only cpu's from NUMA node on which network card is
connected, so there is no cross-numa traffic. Openstack is Queens,
Ubuntu is Bionic release. Virtual machine is also using ubuntu bionic as OS.
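(I verified the placement at the OpenStack/host level; inside the
guest the same thing can be sanity-checked with the standard DPDK
calls. A minimal sketch - in a single-NUMA guest this will trivially
pass:)

    #include <stdio.h>
    #include <rte_ethdev.h>
    #include <rte_lcore.h>

    /* Warn if a polling lcore sits on a different NUMA node than the port. */
    static void
    check_numa(uint16_t port_id, unsigned int lcore_id)
    {
            int port_socket = rte_eth_dev_socket_id(port_id);
            unsigned int lcore_socket = rte_lcore_to_socket_id(lcore_id);

            if (port_socket >= 0 && (unsigned int)port_socket != lcore_socket)
                    printf("lcore %u (socket %u) polls port %u (socket %d): cross-NUMA\n",
                           lcore_id, lcore_socket, port_id, port_socket);
    }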
I don't know how to debug this. Has anyone else seen the same
behavior?
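For what it's worth, the only debugging I can think of so far is
dumping the port statistics to see where the packets disappear
(imissed/rx_nombuf on the RX side vs. oerrors on the TX side). A
minimal sketch of that, using only the standard ethdev stats API:

    #include <stdio.h>
    #include <inttypes.h>
    #include <rte_ethdev.h>

    /* Print the basic counters that distinguish RX drops from TX problems. */
    static void
    dump_port_stats(uint16_t port_id)
    {
            struct rte_eth_stats stats;

            if (rte_eth_stats_get(port_id, &stats) != 0)
                    return;

            printf("port %u: ipackets=%" PRIu64 " opackets=%" PRIu64
                   " imissed=%" PRIu64 " ierrors=%" PRIu64
                   " oerrors=%" PRIu64 " rx_nombuf=%" PRIu64 "\n",
                   port_id, stats.ipackets, stats.opackets,
                   stats.imissed, stats.ierrors,
                   stats.oerrors, stats.rx_nombuf);
    }

rte_eth_xstats_get() would give per-queue and driver-specific counters
on top of that, but I'm not sure what to look for there.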
Regards,
H.
Thread overview (6+ messages):
2019-03-20 9:22 Hrvoje Habjanic [this message]
2019-03-29 7:24 ` Hrvoje Habjanić
2019-04-08 9:52 ` Hrvoje Habjanić
2020-02-18 8:36 ` Hrvoje Habjanic
2020-03-26 20:54 ` Thomas Monjalon
2020-03-27 18:25 ` [dpdk-users] [dpdk-ci] " Lincoln Lavoie