From: Matheus Salgueiro Castanho <matheus.castanho@dcc.ufmg.br>
To: users@dpdk.org
Subject: [dpdk-users] Possible causes for packet drop during TX buffer flush
Date: Mon, 27 Aug 2018 16:54:48 -0300
Message-ID: <CAPyJoH=vnotTECaQAEe7aOEG8tCcYa6irZ_CJfTLzBKjFWqFgw@mail.gmail.com>

Hi all,

I've been working on a DPDK packet classifier, which receives packets
through one port, performs classification + encapsulation and then sends
packets out through a second port. The application is running inside a VM,
virtualized using QEMU/KVM and with OVS-DPDK for networking.

Although functional, it shows some packet loss (< 0.1%) starting at
350 Mbps. I was able to reduce it through better resource allocation
(isolated and pinned CPU cores for the VM through libvirt and for the OVS
PMD threads in the host), but some loss remains. What is even weirder is
that the loss grows linearly from 350 Mbps to 650 Mbps, drops dramatically
at 700 Mbps, and then increases again from 750 Mbps up to 950 Mbps.

Right now I'm struggling to find other possible performance bottlenecks
that might be causing this issue.

I was able to confirm that the application does receive all packets sent
from my traffic generator (the packet counts inside the application and at
the source match), but for some reason some packets get dropped during
transmission. I'm counting packet loss using
rte_eth_tx_buffer_drop_callback(), registered through
rte_eth_tx_buffer_set_err_callback(). As the documentation states
<http://doc.dpdk.org/api/rte__ethdev_8h.html#aacd4952d9f45acd463e203c21db9a7bb>,
this callback is called when packets are dropped inside rte_eth_tx_buffer()
and rte_eth_tx_buffer_flush() APIs.
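
For reference, the registration boils down to something like this (a
simplified sketch rather than my exact code; shown here with the counting
variant rte_eth_tx_buffer_count_callback() and a made-up counter name):

#include <stdint.h>
#include <stdio.h>
#include <rte_ethdev.h>

/* hypothetical drop counter, read later when printing statistics */
static uint64_t tx_dropped;

/* tx_buffer is allocated and initialized elsewhere */
static struct rte_eth_dev_tx_buffer *tx_buffer;

static void
setup_tx_drop_accounting(void)
{
        /* rte_eth_tx_buffer_count_callback() frees the unsent mbufs and
         * adds their number to the uint64_t passed as userdata; the
         * default rte_eth_tx_buffer_drop_callback() only frees them */
        if (rte_eth_tx_buffer_set_err_callback(tx_buffer,
                        rte_eth_tx_buffer_count_callback,
                        &tx_dropped) != 0)
                printf("Cannot set TX buffer error callback\n");
}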

I suspected that packets might be getting dropped because of a full TX
buffer, so I increased its size to 1024. That brought no improvement, so I
set it back to the same value as the burst size, which is 64.
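
For context, the per-port buffer allocation is roughly the following (a
sketch with a made-up constant name for the size I varied between 64 and
1024):

#include <rte_ethdev.h>
#include <rte_malloc.h>

#define TX_BUFFER_SIZE 64   /* same as the burst size */

static struct rte_eth_dev_tx_buffer *
alloc_tx_buffer(uint16_t port_id)
{
        struct rte_eth_dev_tx_buffer *buffer;

        /* RTE_ETH_TX_BUFFER_SIZE() gives the number of bytes needed for
         * a buffer holding TX_BUFFER_SIZE mbuf pointers */
        buffer = rte_zmalloc_socket("tx_buffer",
                        RTE_ETH_TX_BUFFER_SIZE(TX_BUFFER_SIZE), 0,
                        rte_eth_dev_socket_id(port_id));
        if (buffer == NULL)
                return NULL;

        if (rte_eth_tx_buffer_init(buffer, TX_BUFFER_SIZE) != 0)
                return NULL;

        return buffer;
}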

I then changed my callback function to record the current TX buffer
length when a drop occurs. When sending traffic at 650 Mbps, the results
showed that 50% of the drops happen when there is a single packet in the
TX buffer, another 23% happen with 32 packets in the buffer, and the rest
fall in between these values.
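
The instrumented callback is something along these lines (simplified, with
made-up names; here the histogram is keyed on the number of unsent packets
handed to the callback):

#include <stdint.h>
#include <rte_ethdev.h>
#include <rte_mbuf.h>

#define TX_BUFFER_SIZE 64

/* hypothetical instrumentation: histogram of drop events, keyed on the
 * number of unsent packets handed to the callback (when rte_eth_tx_burst()
 * sends nothing during the flush, this equals the buffer length) */
static uint64_t drop_events[TX_BUFFER_SIZE + 1];
static uint64_t total_dropped;

static void
tx_buffer_drop_instrumented(struct rte_mbuf **unsent, uint16_t count,
                void *userdata)
{
        uint16_t i;

        (void)userdata;

        if (count <= TX_BUFFER_SIZE)
                drop_events[count]++;
        total_dropped += count;

        /* free the unsent packets, as the default drop callback does */
        for (i = 0; i < count; i++)
                rte_pktmbuf_free(unsent[i]);
}

It replaces the default callback via rte_eth_tx_buffer_set_err_callback(),
with NULL as userdata.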

So I have a few questions:
- Since I'm able to receive, process and buffer all packets, what could be
causing the packet drops?
- In which situations do rte_eth_tx_buffer() and rte_eth_tx_buffer_flush()
drop packets? I couldn't find this information in the documentation.

I'd appreciate any kind of help.

Best Regards,
Matheus Castanho
