From: MAC Lee <mac_leehk@yahoo.com.hk>
To: <users@dpdk.org>, Filip Janiszewski <contact@filipjaniszewski.com>
Subject: [dpdk-users] Re: Packets drop while fetching with rte_eth_rx_burst
Date: Sun, 25 Mar 2018 11:30:12 +0000 (UTC) [thread overview]
Message-ID: <277260559.2895913.1521977412628@mail.yahoo.com> (raw)
In-Reply-To: <277260559.2895913.1521977412628.ref@mail.yahoo.com>
Hi Filip,
Which DPDK version are you using? You can take a look at the DPDK source code; the rxdrop counter may not be implemented in DPDK, so you always get 0 in rxdrop.
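A quick way to check whether the NIC itself is dropping is to read the generic port counters. This is only a minimal sketch, assuming the same port_id used in the capture loop (on Mellanox NICs rte_eth_xstats_get() also exposes extra vendor counters):

#include <stdio.h>
#include <inttypes.h>
#include <rte_ethdev.h>

/* Print the HW-level drop counters for one port. */
static void print_port_drops(uint16_t port_id)
{
    struct rte_eth_stats stats;

    if (rte_eth_stats_get(port_id, &stats) != 0)
        return;

    /* imissed:   packets dropped because the RX descriptor ring was full */
    /* ierrors:   erroneous received packets                              */
    /* rx_nombuf: RX mbuf allocation failures                             */
    printf("imissed=%" PRIu64 " ierrors=%" PRIu64 " rx_nombuf=%" PRIu64 "\n",
           stats.imissed, stats.ierrors, stats.rx_nombuf);
}

If imissed grows during the test, the drops are happening on the NIC RX ring rather than in your software ring.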
Thanks,
Marco
--------------------------------------------
On 18/3/25 (Sun), Filip Janiszewski <contact@filipjaniszewski.com> wrote:
Subject: [dpdk-users] Packets drop while fetching with rte_eth_rx_burst
To: users@dpdk.org
Date: March 25, 2018 (Sun), 6:33 PM
Hi Everybody,
I have a weird drop problem, and the best way to understand my question is to have a look at this simple snippet (cleaned of all the irrelevant stuff):
while( 1 )
{
    if( config->running == false ) {
        break;
    }

    num_of_pkt = rte_eth_rx_burst( config->port_id,
                                   config->queue_idx,
                                   buffers,
                                   MAX_BURST_DEQ_SIZE );

    if( unlikely( num_of_pkt == MAX_BURST_DEQ_SIZE ) ) {
        rx_ring_full = true; //probably not the best name
    }

    if( likely( num_of_pkt > 0 ) )
    {
        pk_captured += num_of_pkt;

        num_of_enq_pkt = rte_ring_sp_enqueue_bulk( config->incoming_pkts_ring,
                                                   (void*)buffers,
                                                   num_of_pkt,
                                                   &rx_ring_free_space );
        //if num_of_enq_pkt == 0 free the mbufs..
    }
}
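For reference, the clean-up hinted at by the last comment could look like this minimal sketch (assuming buffers is an array of struct rte_mbuf pointers): rte_ring_sp_enqueue_bulk() is all-or-nothing, so a return of 0 means every mbuf in the burst is still owned by this core and must be freed to avoid draining the pool.

#include <rte_mbuf.h>

if( num_of_enq_pkt == 0 ) {
    uint16_t i;
    /* Nothing was enqueued; give every mbuf of this burst back to the pool. */
    for( i = 0; i < num_of_pkt; i++ )
        rte_pktmbuf_free( buffers[i] );
}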
This loop is retrieving packets from the device and pushing them into a queue for further processing by another lcore.
When I run a test with a Mellanox card sending 20M (20878300) packets at 2.5M p/s, the loop seems to miss some packets and pk_captured always ends up around 19M or so.
rx_ring_full is never true, which means that num_of_pkt is always < MAX_BURST_DEQ_SIZE, so according to the documentation I should not have drops at the HW level. Also, num_of_enq_pkt is never 0, which means that all the packets are enqueued.
Now, if from that snippet I remove the rte_ring_sp_enqueue_bulk call (and make sure to release all the mbufs) then pk_captured is always exactly equal to the amount of packets I've sent to the NIC.
So it seems (though I can't quite accept the idea) that rte_ring_sp_enqueue_bulk is somehow too slow, and between one call to rte_eth_rx_burst and another some packets are dropped due to a full ring on the NIC. But then why is num_of_pkt (from rte_eth_rx_burst) always smaller than MAX_BURST_DEQ_SIZE (much smaller), as if there were always sufficient room for the packets?
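One way to look at this is to watch the software ring occupancy from inside the capture loop. A minimal sketch, assuming the same config->incoming_pkts_ring and a hypothetical max_ring_used watermark variable:

#include <rte_ring.h>

/* Track the high-water mark of the software ring: if the consumer lcore
   falls behind, 'used' creeps towards the ring size even though each
   individual rx burst stays well below MAX_BURST_DEQ_SIZE. */
unsigned int used = rte_ring_count( config->incoming_pkts_ring );
if( used > max_ring_used )
    max_ring_used = used;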
Is anybody able to help me understand what's happening here?

Note, MAX_BURST_DEQ_SIZE is 512.

Thanks