From: Bruce Richardson <bruce.richardson@intel.com>
To: Prashant Upadhyaya <praupadhyaya@gmail.com>
Cc: dev@dpdk.org
Subject: Re: [dpdk-dev] Segmentation fault in ixgbe_rxtx_vec.c:444 with 1.8.0
Date: Thu, 22 Jan 2015 15:19:45 +0000 [thread overview]
Message-ID: <20150122151945.GE4580@bricha3-MOBL3> (raw)
In-Reply-To: <CAPBAu3U6D6UMrgSbzeVwXTXSMDS+K6JNpQqJF6qRAFPU-5MiSw@mail.gmail.com>
On Thu, Jan 22, 2015 at 07:35:45PM +0530, Prashant Upadhyaya wrote:
> On Wed, Jan 21, 2015 at 7:19 PM, Bruce Richardson <bruce.richardson@intel.com> wrote:
>
> > On Tue, Jan 20, 2015 at 11:39:03AM +0100, Martin Weiser wrote:
> > > Hi again,
> > >
> > > I did some further testing and it seems like this issue is linked to
> > > jumbo frames. I think a similar issue has already been reported by
> > > Prashant Upadhyaya with the subject 'Packet Rx issue with DPDK1.8'.
> > > In our application we use the following rxmode port configuration:
> > >
> > > .mq_mode = ETH_MQ_RX_RSS,
> > > .split_hdr_size = 0,
> > > .header_split = 0,
> > > .hw_ip_checksum = 1,
> > > .hw_vlan_filter = 0,
> > > .jumbo_frame = 1,
> > > .hw_strip_crc = 1,
> > > .max_rx_pkt_len = 9000,
> > >
> > > and the mbuf size is calculated like the following:
> > >
> > > (2048 + sizeof(struct rte_mbuf) + RTE_PKTMBUF_HEADROOM)
> > >
> > > This works fine with DPDK 1.7 and jumbo frames are split into buffer
> > > chains and can be forwarded on another port without a problem.
> > > With DPDK 1.8 and the default configuration (CONFIG_RTE_IXGBE_INC_VECTOR
> > > enabled) the application sometimes crashes like described in my first
> > > mail and sometimes packet receiving stops with subsequently arriving
> > > packets counted as rx errors. When CONFIG_RTE_IXGBE_INC_VECTOR is
> > > disabled the packet processing also comes to a halt as soon as jumbo
> > > frames arrive, with the slightly different effect that now
> > > rte_eth_tx_burst refuses to send any previously received packets.
> > >
> > > Is there anything special to consider regarding jumbo frames when moving
> > > from DPDK 1.7 to 1.8 that we might have missed?
> > >
> > > Martin
> > >
> > >
> > >
> > > On 19.01.15 11:26, Martin Weiser wrote:
> > > > Hi everybody,
> > > >
> > > > we quite recently updated one of our applications to DPDK 1.8.0 and are
> > > > now seeing a segmentation fault in ixgbe_rxtx_vec.c:444 after a few
> > > > minutes.
> > > > I just did some quick debugging and I only have a very limited
> > > > understanding of the code in question but it seems that the 'continue'
> > > > in line 445 without increasing 'buf_idx' might cause the problem. In one
> > > > debugging session when the crash occurred the value of 'buf_idx' was 2
> > > > and the value of 'pkt_idx' was 8965.
> > > > Any help with this issue would be greatly appreciated. If you need any
> > > > further information just let me know.
> > > >
> > > > Martin
> > > >
> > > >
> > >
> > Hi Martin, Prashant,
> >
> > I've managed to reproduce the issue here and had a look at it. Could you
> > both perhaps try the proposed change below and see if it fixes the problem
> > for you and gives you a working system? If so, I'll submit this as a patch
> > fix officially - or go back to the drawing board, if not. :-)
> >
> > diff --git a/lib/librte_pmd_ixgbe/ixgbe_rxtx_vec.c b/lib/librte_pmd_ixgbe/ixgbe_rxtx_vec.c
> > index b54cb19..dfaccee 100644
> > --- a/lib/librte_pmd_ixgbe/ixgbe_rxtx_vec.c
> > +++ b/lib/librte_pmd_ixgbe/ixgbe_rxtx_vec.c
> > @@ -402,10 +402,10 @@ reassemble_packets(struct igb_rx_queue *rxq, struct rte_mbuf **rx_bufs,
> >  	struct rte_mbuf *pkts[RTE_IXGBE_VPMD_RX_BURST]; /*finished pkts*/
> >  	struct rte_mbuf *start = rxq->pkt_first_seg;
> >  	struct rte_mbuf *end = rxq->pkt_last_seg;
> > -	unsigned pkt_idx = 0, buf_idx = 0;
> > +	unsigned pkt_idx, buf_idx;
> > 
> > -	while (buf_idx < nb_bufs) {
> > +	for (buf_idx = 0, pkt_idx = 0; buf_idx < nb_bufs; buf_idx++) {
> >  		if (end != NULL) {
> >  			/* processing a split packet */
> >  			end->next = rx_bufs[buf_idx];
> > @@ -448,7 +448,6 @@ reassemble_packets(struct igb_rx_queue *rxq, struct rte_mbuf **rx_bufs,
> >  			rx_bufs[buf_idx]->data_len += rxq->crc_len;
> >  			rx_bufs[buf_idx]->pkt_len += rxq->crc_len;
> >  		}
> > -		buf_idx++;
> >  	}
> > 
> >  	/* save the partial packet for next time */
> >
> > Regards,
> > /Bruce
> >
> 
> Hi Bruce,
> 
> I am afraid your patch did not work for me. In my case I am not trying to
> receive jumbo frames but normal frames, and they are not received by my
> application. Further, your patched function is not getting exercised in my
> use case.
>
> Regards
> -Prashant
Hi Prashant,
can your problem be reproduced using testpmd? If so can you perhaps send me the
command-line for testpmd and traffic profile needed to reproduce the issue?
Thanks,
/Bruce
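For reference, a testpmd invocation along the following lines should exercise scattered RX with jumbo frames; the core mask, memory-channel count and option spellings here are assumptions for illustration (testpmd options have varied across DPDK releases), not values taken from this thread:

```shell
# Assumed invocation; adjust cores, memory channels and ports for your setup,
# and check testpmd --help for the exact option names in your DPDK version.
./testpmd -c 0x6 -n 4 -- \
    --rxq=1 --txq=1 \
    --mbuf-size=2048 \
    --max-pkt-len=9000 \
    --forward-mode=io
```

Sending a mix of standard and >2048-byte frames at the receiving port should then reproduce the chained-mbuf path discussed above.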
Thread overview: 10+ messages
2015-01-19 10:26 Martin Weiser
2015-01-20 10:39 ` Martin Weiser
2015-01-21 13:49 ` Bruce Richardson
2015-01-22 14:05 ` Prashant Upadhyaya
2015-01-22 15:19 ` Bruce Richardson [this message]
2015-01-23 11:37 ` Martin Weiser
2015-01-23 11:52 ` Bruce Richardson
2015-01-23 14:59 ` Martin Weiser
2015-02-06 13:41 ` [dpdk-dev] [PATCH] ixgbe: fix vector PMD chained mbuf receive Bruce Richardson
2015-02-20 11:00 ` Thomas Monjalon