From: Dor Green <dorgreen1@gmail.com>
To: dev@dpdk.org
Subject: [dpdk-dev] Packet data out of bounds after rte_eth_rx_burst
Date: Mon, 23 Mar 2015 16:24:18 +0200
Message-ID: <CAKedurxdjBt5FiCo0J=2V9d4=GEac6cBAxtWuXfwJ4hMX83L9w@mail.gmail.com>
I'm running a small app which captures packets on a single lcore and
then passes them to other workers for processing.

Before the packets are even handed off for processing, I get a segfault
when checking some minor information in the packet mbuf's data.

This code, for example, gets a segfault:
    struct rte_mbuf *pkts[PKTS_BURST_SIZE];

    for (p = 0; p < portnb; ++p) {
        nbrx = rte_eth_rx_burst(p, 0, pkts, PKTS_BURST_SIZE);
        if (unlikely(nbrx == 0)) {
            continue;
        }
        for (i = 0; likely(i < nbrx); i++) {
            /* Peek at the first payload byte -- this is what segfaults. */
            printf("Pkt %c\n", ((char *)pkts[i]->pkt.data)[0]);
            rte_mempool_put(pktmbuf_pool, (void *)pkts[i]);
        }
    }
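
For comparison, here is how I understand the same inner loop would look
using the documented mbuf helpers: rte_pktmbuf_mtod wraps the data-pointer
access, and rte_pktmbuf_free returns the buffer (and any chained segments)
to its pool, whereas my version calls rte_mempool_put directly. I haven't
yet verified whether that difference matters here:

    /* Same loop via the standard accessors instead of touching
     * pkt.data and the mempool directly. */
    for (i = 0; likely(i < nbrx); i++) {
        char *data = rte_pktmbuf_mtod(pkts[i], char *);
        printf("Pkt %c\n", data[0]);
        rte_pktmbuf_free(pkts[i]); /* frees all segments of the chain */
    }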
This doesn't happen with most packets, but it happened often when I
replayed packets from a certain capture (SSL traffic). In gdb, the
packet objects looked like this:
{next = 0x0, data = 0x62132136406a6f6, data_len = 263, nb_segs = 1 '\001',
  in_port = 0 '\000', pkt_len = 263,
  vlan_macip = {data = 55111, f = {l3_len = 327, l2_len = 107, vlan_tci = 0}},
  hash = {rss = 311317915, fdir = {hash = 21915, id = 4750},
  sched = 311317915}}  (Invalid)

{next = 0x0, data = 0x7ffe43d8f640, data_len = 73, nb_segs = 1 '\001',
  in_port = 0 '\000', pkt_len = 73,
  vlan_macip = {data = 0, f = {l3_len = 0, l2_len = 0, vlan_tci = 0}},
  hash = {rss = 311317915, fdir = {hash = 21915, id = 4750},
  sched = 311317915}}  (Valid)

{next = 0x0, data = 0x7ffe43d7fa40, data_len = 74, nb_segs = 1 '\001',
  in_port = 0 '\000', pkt_len = 74,
  vlan_macip = {data = 0, f = {l3_len = 0, l2_len = 0, vlan_tci = 0}},
  hash = {rss = 311317915, fdir = {hash = 21915, id = 4750},
  sched = 311317915}}  (Valid)

{next = 0x0, data = 0x7ffe43d7ff80, data_len = 66, nb_segs = 1 '\001',
  in_port = 0 '\000', pkt_len = 66,
  vlan_macip = {data = 0, f = {l3_len = 0, l2_len = 0, vlan_tci = 0}},
  hash = {rss = 311317915, fdir = {hash = 21915, id = 4750},
  sched = 311317915}}  (Valid)

{next = 0x0, data = 0x28153a8e63b3afc4, data_len = 263, nb_segs = 1 '\001',
  in_port = 0 '\000', pkt_len = 263,
  vlan_macip = {data = 59535, f = {l3_len = 143, l2_len = 116, vlan_tci = 0}},
  hash = {rss = 311317915, fdir = {hash = 21915, id = 4750},
  sched = 311317915}}  (Invalid)
Note that in the first (invalid) packet, the recorded length does not
match the actual packet's length (in the last one it does). The valid
packets' data pointers fall within the huge-page memory range, as they
should; the invalid ones clearly point elsewhere.
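
As a stopgap, a small bounds check before the dereference should catch
these (mbuf_data_in_bounds is just a helper name I made up; it assumes
the mbuf layout shown in the dumps above, with buf_addr/buf_len on
struct rte_mbuf and the payload pointer in the embedded pkt struct):

    /* Returns nonzero iff the mbuf's data pointer and length fall
     * inside the mbuf's own buffer. */
    static inline int mbuf_data_in_bounds(const struct rte_mbuf *m)
    {
        const char *buf  = (const char *)m->buf_addr;
        const char *data = (const char *)m->pkt.data;

        return data >= buf &&
               data + m->pkt.data_len <= buf + m->buf_len;
    }

Judging from the pointers in the dumps above, the two "Invalid" mbufs
would fail this check, so I can at least skip them instead of crashing
while hunting for the root cause.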
I'm running on Linux 3.2.0-77, the NIC is a "10G 2P X520", and I have
four 1 GB huge pages.

Any ideas would be appreciated.
Thread overview: 12+ messages
2015-03-23 14:24 Dor Green [this message]
2015-03-23 14:59 ` Bruce Richardson
2015-03-23 15:19 ` Dor Green
2015-03-23 21:24 ` Matthew Hall
2015-03-24 9:55 ` Dor Green
2015-03-24 10:54 ` Dor Green
2015-03-24 13:17 ` Bruce Richardson
2015-03-24 14:10 ` Dor Green
2015-03-24 16:21 ` Bruce Richardson
2015-03-25 8:22 ` Dor Green
2015-03-25 9:32 ` Dor Green
2015-03-25 10:30 ` Bruce Richardson