DPDK patches and discussions
From: Martin Weiser <martin.weiser@allegro-packets.com>
To: dev@dpdk.org
Subject: Re: [dpdk-dev] Segmentation fault in ixgbe_rxtx_vec.c:444 with 1.8.0
Date: Tue, 20 Jan 2015 11:39:03 +0100	[thread overview]
Message-ID: <54BE3047.9060909@allegro-packets.com> (raw)
In-Reply-To: <54BCDBF1.8020909@allegro-packets.com>

Hi again,

I did some further testing and it seems that this issue is linked to
jumbo frames. A similar issue appears to have already been reported by
Prashant Upadhyaya under the subject 'Packet Rx issue with DPDK1.8'.
In our application we use the following rxmode port configuration:

.mq_mode        = ETH_MQ_RX_RSS,
.split_hdr_size = 0,
.header_split   = 0,
.hw_ip_checksum = 1,
.hw_vlan_filter = 0,
.jumbo_frame    = 1,
.hw_strip_crc   = 1,
.max_rx_pkt_len = 9000,

and the mbuf size is calculated as follows:

(2048 + sizeof(struct rte_mbuf) + RTE_PKTMBUF_HEADROOM)

This works fine with DPDK 1.7: jumbo frames are split into buffer
chains and can be forwarded on another port without a problem.
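To make this concrete, the setup looks roughly like the sketch below
(simplified, with placeholder names, pool size and queue/descriptor counts
rather than the literal application code):

#include <rte_ethdev.h>
#include <rte_lcore.h>
#include <rte_mbuf.h>

#define MBUF_DATA_SIZE 2048
#define MBUF_SIZE (MBUF_DATA_SIZE + sizeof(struct rte_mbuf) + RTE_PKTMBUF_HEADROOM)
#define NB_MBUF   65536                       /* placeholder pool size */

static const struct rte_eth_conf port_conf = {
        .rxmode = {
                .jumbo_frame    = 1,
                .max_rx_pkt_len = 9000,
                /* ... plus the other rxmode fields listed above ... */
        },
};

static void
port_init(uint8_t port_id)                    /* hypothetical helper */
{
        /* mbuf pool with a 2048-byte data room; passing NULL lets
         * rte_pktmbuf_pool_init apply its default data room size */
        struct rte_mempool *mp = rte_mempool_create("mbuf_pool", NB_MBUF,
                MBUF_SIZE, 256, sizeof(struct rte_pktmbuf_pool_private),
                rte_pktmbuf_pool_init, NULL, rte_pktmbuf_init, NULL,
                rte_socket_id(), 0);

        /* one rx and one tx queue; NULL selects the driver's default
         * queue configuration; error checks omitted for brevity */
        rte_eth_dev_configure(port_id, 1, 1, &port_conf);
        rte_eth_rx_queue_setup(port_id, 0, 512,
                rte_eth_dev_socket_id(port_id), NULL, mp);
        rte_eth_tx_queue_setup(port_id, 0, 512,
                rte_eth_dev_socket_id(port_id), NULL);
        rte_eth_dev_start(port_id);
}

Since max_rx_pkt_len (9000) is much larger than the per-mbuf data room (2048),
every received jumbo frame comes back from rte_eth_rx_burst as a chain of
several mbufs, which is exactly what we see with 1.7.
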
With DPDK 1.8 and the default configuration (CONFIG_RTE_IXGBE_INC_VECTOR
enabled) the application sometimes crashes as described in my first
mail, and sometimes packet reception stops, with subsequently arriving
packets counted as rx errors. When CONFIG_RTE_IXGBE_INC_VECTOR is
disabled, packet processing also comes to a halt as soon as jumbo
frames arrive, with the slightly different effect that rte_eth_tx_burst
then refuses to send the previously received packets.
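For reference, these rx errors can be watched via the generic port
statistics; a small hypothetical snippet for monitoring the counters while
reproducing the issue (port_id is a placeholder):

struct rte_eth_stats stats;
rte_eth_stats_get(port_id, &stats);
/* observation above: ipackets stalls while the error counter keeps rising */
printf("rx ok %" PRIu64 ", rx errors %" PRIu64 "\n",
       stats.ipackets, stats.ierrors);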

Is there anything special to consider regarding jumbo frames when moving
from DPDK 1.7 to 1.8 that we might have missed?

Martin



On 19.01.15 11:26, Martin Weiser wrote:
> Hi everybody,
>
> we quite recently updated one of our applications to DPDK 1.8.0 and are
> now seeing a segmentation fault in ixgbe_rxtx_vec.c:444 after a few minutes.
> I just did some quick debugging and, although I only have a very limited
> understanding of the code in question, it seems that the 'continue'
> in line 445, taken without increasing 'buf_idx', might cause the problem.
> In one debugging session when the crash occurred, the value of 'buf_idx'
> was 2 and the value of 'pkt_idx' was 8965.
> Any help with this issue would be greatly appreciated. If you need any
> further information just let me know.
>
> Martin
>
>


Thread overview: 10+ messages
2015-01-19 10:26 Martin Weiser
2015-01-20 10:39 ` Martin Weiser [this message]
2015-01-21 13:49   ` Bruce Richardson
2015-01-22 14:05     ` Prashant Upadhyaya
2015-01-22 15:19       ` Bruce Richardson
2015-01-23 11:37     ` Martin Weiser
2015-01-23 11:52       ` Bruce Richardson
2015-01-23 14:59         ` Martin Weiser
2015-02-06 13:41           ` [dpdk-dev] [PATCH] ixgbe: fix vector PMD chained mbuf receive Bruce Richardson
2015-02-20 11:00             ` Thomas Monjalon
