From: Bruce Richardson <bruce.richardson@intel.com>
To: Martin Weiser <martin.weiser@allegro-packets.com>
Cc: dev@dpdk.org
Subject: Re: [dpdk-dev] Segmentation fault in ixgbe_rxtx_vec.c:444 with 1.8.0
Date: Fri, 23 Jan 2015 11:52:44 +0000 [thread overview]
Message-ID: <20150123115244.GA10808@bricha3-MOBL3> (raw)
In-Reply-To: <54C23265.8090403@allegro-packets.com>
On Fri, Jan 23, 2015 at 12:37:09PM +0100, Martin Weiser wrote:
> Hi Bruce,
>
> I now had the chance to reproduce the issue we are seeing with a DPDK
> example app.
> I started out with a vanilla DPDK 1.8.0 and only made the following changes:
>
> diff --git a/examples/l2fwd/main.c b/examples/l2fwd/main.c
> index e684234..48e6b7c 100644
> --- a/examples/l2fwd/main.c
> +++ b/examples/l2fwd/main.c
> @@ -118,8 +118,9 @@ static const struct rte_eth_conf port_conf = {
> .header_split = 0, /**< Header Split disabled */
> .hw_ip_checksum = 0, /**< IP checksum offload disabled */
> .hw_vlan_filter = 0, /**< VLAN filtering disabled */
> - .jumbo_frame = 0, /**< Jumbo Frame Support disabled */
> + .jumbo_frame = 1, /**< Jumbo Frame Support enabled */
> .hw_strip_crc = 0, /**< CRC stripped by hardware */
> + .max_rx_pkt_len = 9000,
> },
> .txmode = {
> .mq_mode = ETH_MQ_TX_NONE,
> diff --git a/lib/librte_pmd_ixgbe/ixgbe_rxtx_vec.c b/lib/librte_pmd_ixgbe/ixgbe_rxtx_vec.c
> index b54cb19..dfaccee 100644
> --- a/lib/librte_pmd_ixgbe/ixgbe_rxtx_vec.c
> +++ b/lib/librte_pmd_ixgbe/ixgbe_rxtx_vec.c
> @@ -402,10 +402,10 @@ reassemble_packets(struct igb_rx_queue *rxq, struct rte_mbuf **rx_bufs,
> struct rte_mbuf *pkts[RTE_IXGBE_VPMD_RX_BURST]; /*finished pkts*/
> struct rte_mbuf *start = rxq->pkt_first_seg;
> struct rte_mbuf *end = rxq->pkt_last_seg;
> - unsigned pkt_idx = 0, buf_idx = 0;
> + unsigned pkt_idx, buf_idx;
>
>
> - while (buf_idx < nb_bufs) {
> + for (buf_idx = 0, pkt_idx = 0; buf_idx < nb_bufs; buf_idx++) {
> if (end != NULL) {
> /* processing a split packet */
> end->next = rx_bufs[buf_idx];
> @@ -448,7 +448,6 @@ reassemble_packets(struct igb_rx_queue *rxq, struct rte_mbuf **rx_bufs,
> rx_bufs[buf_idx]->data_len += rxq->crc_len;
> rx_bufs[buf_idx]->pkt_len += rxq->crc_len;
> }
> - buf_idx++;
> }
>
> /* save the partial packet for next time */
>
>
> This includes your previously posted fix and makes a small modification
> to the l2fwd example app to enable jumbo frames of up to 9000 bytes.
> The system is equipped with a two port Intel 82599 card and both ports
> are hooked up to a packet generator. The packet generator produces
> simple Ethernet/IPv4/UDP packets.
> I started the l2fwd app with the following command line:
>
> $ ./build/l2fwd -c f -n 4 -- -q 8 -p 3
>
> Both build variants that I have tested (CONFIG_RTE_IXGBE_INC_VECTOR=y
> and CONFIG_RTE_IXGBE_INC_VECTOR=n) now give me the same result:
> As long as the packet size is <= 2048 bytes the application behaves
> normally and all packets are forwarded as expected.
> As soon as the packet size exceeds 2048 bytes the application will only
> forward some packets and then stop forwarding altogether. Even small
> packets will not be forwarded anymore.
>
> If you want me to try out anything else just let me know.
>
>
> Best regards,
> Martin
>
I think the txq flags are at fault here. The default txq flags setting for
the l2fwd sample application includes the flag ETH_TXQ_FLAGS_NOMULTSEGS, which
disables support for sending packets with multiple segments, i.e. jumbo frames
in this case. If you change l2fwd to explicitly pass a txqflags parameter as
part of the port setup (as was the case in previous releases), and set txqflags
to 0, does the problem go away?
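For reference, a rough, untested sketch of what I mean (struct and flag names
as in the 1.8 ethdev API; portid and nb_txd are the variables l2fwd already
uses):

	/* tx queue config with multi-segment TX left enabled, i.e. without
	 * ETH_TXQ_FLAGS_NOMULTSEGS; the remaining fields are left at zero */
	static const struct rte_eth_txconf tx_conf = {
		.txq_flags = 0,
	};

	/* in main(), pass &tx_conf instead of NULL: */
	ret = rte_eth_tx_queue_setup(portid, 0, nb_txd,
			rte_eth_dev_socket_id(portid), &tx_conf);

As far as I recall the ixgbe PMD falls back to its own threshold defaults when
they are left at zero, so clearing txq_flags should be the only change needed.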
/Bruce
>
>
> On 21.01.15 14:49, Bruce Richardson wrote:
> > On Tue, Jan 20, 2015 at 11:39:03AM +0100, Martin Weiser wrote:
> >> Hi again,
> >>
> >> I did some further testing and it seems like this issue is linked to
> >> jumbo frames. I think a similar issue has already been reported by
> >> Prashant Upadhyaya with the subject 'Packet Rx issue with DPDK1.8'.
> >> In our application we use the following rxmode port configuration:
> >>
> >> .mq_mode = ETH_MQ_RX_RSS,
> >> .split_hdr_size = 0,
> >> .header_split = 0,
> >> .hw_ip_checksum = 1,
> >> .hw_vlan_filter = 0,
> >> .jumbo_frame = 1,
> >> .hw_strip_crc = 1,
> >> .max_rx_pkt_len = 9000,
> >>
> >> and the mbuf size is calculated like the following:
> >>
> >> (2048 + sizeof(struct rte_mbuf) + RTE_PKTMBUF_HEADROOM)
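> >> For completeness, a rough sketch of how the pool is created, simplified
> >> from our application (NB_MBUF and the cache size are placeholders for
> >> our real values):
> >>
> >>     #define MBUF_SIZE (2048 + sizeof(struct rte_mbuf) + RTE_PKTMBUF_HEADROOM)
> >>
> >>     /* one mempool per socket, element size as calculated above */
> >>     struct rte_mempool *mbuf_pool = rte_mempool_create("mbuf_pool",
> >>                     NB_MBUF, MBUF_SIZE, 32,
> >>                     sizeof(struct rte_pktmbuf_pool_private),
> >>                     rte_pktmbuf_pool_init, NULL,
> >>                     rte_pktmbuf_init, NULL,
> >>                     rte_socket_id(), 0);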
> >>
> >> This works fine with DPDK 1.7 and jumbo frames are split into buffer
> >> chains and can be forwarded on another port without a problem.
> >> With DPDK 1.8 and the default configuration (CONFIG_RTE_IXGBE_INC_VECTOR
> >> enabled) the application sometimes crashes like described in my first
> >> mail and sometimes packet receiving stops with subsequently arriving
> >> packets counted as rx errors. When CONFIG_RTE_IXGBE_INC_VECTOR is
> >> disabled, packet processing also comes to a halt as soon as jumbo
> >> frames arrive, with the slightly different effect that
> >> rte_eth_tx_burst now refuses to send any previously received packets.
> >>
> >> Is there anything special to consider regarding jumbo frames when moving
> >> from DPDK 1.7 to 1.8 that we might have missed?
> >>
> >> Martin
> >>
> >>
> >>
> >> On 19.01.15 11:26, Martin Weiser wrote:
> >>> Hi everybody,
> >>>
> >>> we quite recently updated one of our applications to DPDK 1.8.0 and are
> >>> now seeing a segmentation fault in ixgbe_rxtx_vec.c:444 after a few minutes.
> >>> I just did some quick debugging, and while I only have a very limited
> >>> understanding of the code in question, it seems that the 'continue'
> >>> in line 445 without increasing 'buf_idx' might cause the problem. In one
> >>> debugging session when the crash occurred the value of 'buf_idx' was 2
> >>> and the value of 'pkt_idx' was 8965.
> >>> Any help with this issue would be greatly appreciated. If you need any
> >>> further information just let me know.
> >>>
> >>> Martin
> >>>
> >>>
> > Hi Martin, Prashant,
> >
> > I've managed to reproduce the issue here and had a look at it. Could you
> > both perhaps try the proposed change below and see if it fixes the problem for
> > you and gives you a working system? If so, I'll submit this as a patch fix
> > officially - or go back to the drawing board, if not. :-)
> >
> > diff --git a/lib/librte_pmd_ixgbe/ixgbe_rxtx_vec.c b/lib/librte_pmd_ixgbe/ixgbe_rxtx_vec.c
> > index b54cb19..dfaccee 100644
> > --- a/lib/librte_pmd_ixgbe/ixgbe_rxtx_vec.c
> > +++ b/lib/librte_pmd_ixgbe/ixgbe_rxtx_vec.c
> > @@ -402,10 +402,10 @@ reassemble_packets(struct igb_rx_queue *rxq, struct rte_mbuf **rx_bufs,
> > struct rte_mbuf *pkts[RTE_IXGBE_VPMD_RX_BURST]; /*finished pkts*/
> > struct rte_mbuf *start = rxq->pkt_first_seg;
> > struct rte_mbuf *end = rxq->pkt_last_seg;
> > - unsigned pkt_idx = 0, buf_idx = 0;
> > + unsigned pkt_idx, buf_idx;
> >
> >
> > - while (buf_idx < nb_bufs) {
> > + for (buf_idx = 0, pkt_idx = 0; buf_idx < nb_bufs; buf_idx++) {
> > if (end != NULL) {
> > /* processing a split packet */
> > end->next = rx_bufs[buf_idx];
> > @@ -448,7 +448,6 @@ reassemble_packets(struct igb_rx_queue *rxq, struct rte_mbuf **rx_bufs,
> > rx_bufs[buf_idx]->data_len += rxq->crc_len;
> > rx_bufs[buf_idx]->pkt_len += rxq->crc_len;
> > }
> > - buf_idx++;
> > }
> >
> > /* save the partial packet for next time */
> >
> >
> > Regards,
> > /Bruce
> >
>