From: Filip Janiszewski <contact@filipjaniszewski.com>
To: users@dpdk.org
Subject: [dpdk-users] Packets drop while fetching with rte_eth_rx_burst
Date: Sun, 25 Mar 2018 12:33:09 +0200
Message-ID: <1d7ceec8-f9c8-47d1-9274-31f4527edab5@filipjaniszewski.com>

Hi Everybody,

I have a weird packet drop problem, and the best way to understand my
question is to look at this simple snippet (cleaned of all the
irrelevant stuff):

while( 1 )
{
    if( config->running == false ) {
        break;
    }

    num_of_pkt = rte_eth_rx_burst( config->port_id,
                                   config->queue_idx,
                                   buffers,
                                   MAX_BURST_DEQ_SIZE );
    if( unlikely( num_of_pkt == MAX_BURST_DEQ_SIZE ) ) {
        rx_ring_full = true; //probably not the best name
    }

    if( likely( num_of_pkt > 0 ) )
    {
        pk_captured += num_of_pkt;

        num_of_enq_pkt = rte_ring_sp_enqueue_bulk( config->incoming_pkts_ring,
                                                   (void*)buffers,
                                                   num_of_pkt,
                                                   &rx_ring_free_space );
        //if num_of_enq_pkt == 0 free the mbufs..
    }
}

This loop retrieves packets from the device and pushes them into a ring
(rte_ring) for further processing by another lcore.
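
For context, the consumer lcore is essentially the mirror image of this
(simplified sketch; process_packet is a placeholder for the real
per-packet work):

struct rte_mbuf *pkts[MAX_BURST_DEQ_SIZE];
unsigned int avail;
unsigned int n = rte_ring_sc_dequeue_burst( config->incoming_pkts_ring,
                                            (void**)pkts,
                                            MAX_BURST_DEQ_SIZE,
                                            &avail );
for( unsigned int i = 0; i < n; ++i ) {
    process_packet( pkts[i] );     //placeholder for the actual processing
    rte_pktmbuf_free( pkts[i] );   //return the mbuf to its pool
}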

When I run a test with a Mellanox card sending 20M (20,878,300) packets
at 2.5M p/s, the loop seems to miss some packets: pk_captured always
ends up around 19M or so.

rx_ring_full is never true, which means that num_of_pkt is always <
MAX_BURST_DEQ_SIZE, so according to the documentation I should not see
drops at the HW level. Also, num_of_enq_pkt is never 0, which means
that all the packets are enqueued.
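
To double check the "no drops at HW level" assumption, the generic port
counters could also be read (minimal sketch; imissed counts packets the
HW dropped because the RX queues were full):

struct rte_eth_stats stats;
if( rte_eth_stats_get( config->port_id, &stats ) == 0 ) {
    printf( "ipackets=%" PRIu64 " imissed=%" PRIu64 " ierrors=%" PRIu64 "\n",
            stats.ipackets, stats.imissed, stats.ierrors );
}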

Now, if I remove the rte_ring_sp_enqueue_bulk call from that snippet
(and make sure to release all the mbufs), then pk_captured is always
exactly equal to the number of packets I sent to the NIC.
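
In other words, the no-ring variant of the loop body is roughly:

if( likely( num_of_pkt > 0 ) )
{
    pk_captured += num_of_pkt;
    for( uint16_t i = 0; i < num_of_pkt; ++i ) {
        rte_pktmbuf_free( buffers[i] ); //drop everything straight back to the pool
    }
}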

So it seems (though I can't quite come to terms with the idea) that
rte_ring_sp_enqueue_bulk is somehow too slow, and that between one call
to rte_eth_rx_burst and the next some packets are dropped due to a full
RX ring on the NIC. But then why is num_of_pkt (from rte_eth_rx_burst)
always smaller than MAX_BURST_DEQ_SIZE (much smaller), as if there were
always sufficient room for the packets?
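
One way to test the "too slow" theory would be to time the enqueue with
the TSC (sketch; enq_cycles and enq_calls are hypothetical counters
dumped when the loop exits):

uint64_t t0 = rte_rdtsc();
num_of_enq_pkt = rte_ring_sp_enqueue_bulk( config->incoming_pkts_ring,
                                           (void*)buffers,
                                           num_of_pkt,
                                           &rx_ring_free_space );
enq_cycles += rte_rdtsc() - t0; //accumulate cost of the enqueue alone
enq_calls += 1;                 //average cost per call = enq_cycles / enq_calls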

Is anybody able to help me understand what's happening here?

Note: MAX_BURST_DEQ_SIZE is 512.

Thanks
