DPDK patches and discussions
* [dpdk-dev] Segmentation fault in ixgbe_rxtx_vec.c:444 with 1.8.0
@ 2015-01-19 10:26 Martin Weiser
  2015-01-20 10:39 ` Martin Weiser
  0 siblings, 1 reply; 10+ messages in thread
From: Martin Weiser @ 2015-01-19 10:26 UTC (permalink / raw)
  To: dev

Hi everybody,

we quite recently updated one of our applications to DPDK 1.8.0 and are
now seeing a segmentation fault in ixgbe_rxtx_vec.c:444 after a few minutes.
I just did some quick debugging and, while I have only a very limited
understanding of the code in question, it seems that the 'continue'
in line 445, taken without increasing 'buf_idx', might be causing the
problem. In one debugging session when the crash occurred, the value of
'buf_idx' was 2 and the value of 'pkt_idx' was 8965.
Any help with this issue would be greatly appreciated. If you need any
further information just let me know.

Martin


* Re: [dpdk-dev] Segmentation fault in ixgbe_rxtx_vec.c:444 with 1.8.0
  2015-01-19 10:26 [dpdk-dev] Segmentation fault in ixgbe_rxtx_vec.c:444 with 1.8.0 Martin Weiser
@ 2015-01-20 10:39 ` Martin Weiser
  2015-01-21 13:49   ` Bruce Richardson
  0 siblings, 1 reply; 10+ messages in thread
From: Martin Weiser @ 2015-01-20 10:39 UTC (permalink / raw)
  To: dev

Hi again,

I did some further testing and it seems like this issue is linked to
jumbo frames. I think a similar issue has already been reported by
Prashant Upadhyaya with the subject 'Packet Rx issue with DPDK1.8'.
In our application we use the following rxmode port configuration:

.mq_mode    = ETH_MQ_RX_RSS,
.split_hdr_size = 0,
.header_split   = 0,
.hw_ip_checksum = 1,
.hw_vlan_filter = 0,
.jumbo_frame    = 1,
.hw_strip_crc   = 1,
.max_rx_pkt_len = 9000,

and the mbuf size is calculated like the following:

(2048 + sizeof(struct rte_mbuf) + RTE_PKTMBUF_HEADROOM)
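
For context, the pool itself is created roughly like this (a simplified
sketch, not our actual code; NB_MBUF and the pool name are placeholders):

#define MBUF_SIZE (2048 + sizeof(struct rte_mbuf) + RTE_PKTMBUF_HEADROOM)
#define NB_MBUF   8192	/* placeholder pool size */

struct rte_mempool *pool = rte_mempool_create("mbuf_pool", NB_MBUF,
		MBUF_SIZE, 32,	/* element size, per-lcore cache size */
		sizeof(struct rte_pktmbuf_pool_private),
		rte_pktmbuf_pool_init, NULL,	/* pool-level init */
		rte_pktmbuf_init, NULL,		/* per-mbuf init */
		rte_socket_id(), 0);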

This works fine with DPDK 1.7 and jumbo frames are split into buffer
chains and can be forwarded on another port without a problem.
With DPDK 1.8 and the default configuration (CONFIG_RTE_IXGBE_INC_VECTOR
enabled) the application sometimes crashes as described in my first
mail, and sometimes packet receiving stops with subsequently arriving
packets counted as rx errors. When CONFIG_RTE_IXGBE_INC_VECTOR is
disabled, the packet processing also comes to a halt as soon as jumbo
frames arrive, with the slightly different effect that rte_eth_tx_burst
now refuses to send any previously received packets.

Is there anything special to consider regarding jumbo frames when moving
from DPDK 1.7 to 1.8 that we might have missed?

Martin



On 19.01.15 11:26, Martin Weiser wrote:
> [...]


* Re: [dpdk-dev] Segmentation fault in ixgbe_rxtx_vec.c:444 with 1.8.0
  2015-01-20 10:39 ` Martin Weiser
@ 2015-01-21 13:49   ` Bruce Richardson
  2015-01-22 14:05     ` Prashant Upadhyaya
  2015-01-23 11:37     ` Martin Weiser
  0 siblings, 2 replies; 10+ messages in thread
From: Bruce Richardson @ 2015-01-21 13:49 UTC (permalink / raw)
  To: Martin Weiser; +Cc: dev

On Tue, Jan 20, 2015 at 11:39:03AM +0100, Martin Weiser wrote:
> [...]
>
Hi Martin, Prashant,

I've managed to reproduce the issue here and had a look at it. Could you
both perhaps try the proposed change below and see if it fixes the problem for
you and gives you a working system? If so, I'll submit this officially as a
fix patch - or go back to the drawing board, if not. :-)

diff --git a/lib/librte_pmd_ixgbe/ixgbe_rxtx_vec.c b/lib/librte_pmd_ixgbe/ixgbe_rxtx_vec.c
index b54cb19..dfaccee 100644
--- a/lib/librte_pmd_ixgbe/ixgbe_rxtx_vec.c
+++ b/lib/librte_pmd_ixgbe/ixgbe_rxtx_vec.c
@@ -402,10 +402,10 @@ reassemble_packets(struct igb_rx_queue *rxq, struct rte_mbuf **rx_bufs,
        struct rte_mbuf *pkts[RTE_IXGBE_VPMD_RX_BURST]; /*finished pkts*/
        struct rte_mbuf *start = rxq->pkt_first_seg;
        struct rte_mbuf *end =  rxq->pkt_last_seg;
-       unsigned pkt_idx = 0, buf_idx = 0;
+       unsigned pkt_idx, buf_idx;


-       while (buf_idx < nb_bufs) {
+       for (buf_idx = 0, pkt_idx = 0; buf_idx < nb_bufs; buf_idx++) {
                if (end != NULL) {
                        /* processing a split packet */
                        end->next = rx_bufs[buf_idx];
@@ -448,7 +448,6 @@ reassemble_packets(struct igb_rx_queue *rxq, struct rte_mbuf **rx_bufs,
                        rx_bufs[buf_idx]->data_len += rxq->crc_len;
                        rx_bufs[buf_idx]->pkt_len += rxq->crc_len;
                }
-               buf_idx++;
        }

        /* save the partial packet for next time */


Regards,
/Bruce


* Re: [dpdk-dev] Segmentation fault in ixgbe_rxtx_vec.c:444 with 1.8.0
  2015-01-21 13:49   ` Bruce Richardson
@ 2015-01-22 14:05     ` Prashant Upadhyaya
  2015-01-22 15:19       ` Bruce Richardson
  2015-01-23 11:37     ` Martin Weiser
  1 sibling, 1 reply; 10+ messages in thread
From: Prashant Upadhyaya @ 2015-01-22 14:05 UTC (permalink / raw)
  To: Bruce Richardson; +Cc: dev

On Wed, Jan 21, 2015 at 7:19 PM, Bruce Richardson <bruce.richardson@intel.com> wrote:

> [...]

Hi Bruce,

I am afraid your patch did not work for me. In my case I am not trying to
receive jumbo frames but normal frames, and they are not received by my
application. Furthermore, your patched function is not even exercised in my
use case.

Regards
-Prashant


* Re: [dpdk-dev] Segmentation fault in ixgbe_rxtx_vec.c:444 with 1.8.0
  2015-01-22 14:05     ` Prashant Upadhyaya
@ 2015-01-22 15:19       ` Bruce Richardson
  0 siblings, 0 replies; 10+ messages in thread
From: Bruce Richardson @ 2015-01-22 15:19 UTC (permalink / raw)
  To: Prashant Upadhyaya; +Cc: dev

On Thu, Jan 22, 2015 at 07:35:45PM +0530, Prashant Upadhyaya wrote:
> [...]
>
> I am afraid your patch did not work for me. In my case I am not trying to
> receive jumbo frames but normal frames, and they are not received by my
> application. [...]

Hi Prashant,

Can your problem be reproduced using testpmd? If so, could you perhaps send
me the testpmd command line and the traffic profile needed to reproduce the
issue?

Thanks,
/Bruce


* Re: [dpdk-dev] Segmentation fault in ixgbe_rxtx_vec.c:444 with 1.8.0
  2015-01-21 13:49   ` Bruce Richardson
  2015-01-22 14:05     ` Prashant Upadhyaya
@ 2015-01-23 11:37     ` Martin Weiser
  2015-01-23 11:52       ` Bruce Richardson
  1 sibling, 1 reply; 10+ messages in thread
From: Martin Weiser @ 2015-01-23 11:37 UTC (permalink / raw)
  To: Bruce Richardson; +Cc: dev

Hi Bruce,

I now had the chance to reproduce the issue we are seeing with a DPDK
example app.
I started out with a vanilla DPDK 1.8.0 and only made the following changes:

diff --git a/examples/l2fwd/main.c b/examples/l2fwd/main.c
index e684234..48e6b7c 100644
--- a/examples/l2fwd/main.c
+++ b/examples/l2fwd/main.c
@@ -118,8 +118,9 @@ static const struct rte_eth_conf port_conf = {
                .header_split   = 0, /**< Header Split disabled */
                .hw_ip_checksum = 0, /**< IP checksum offload disabled */
                .hw_vlan_filter = 0, /**< VLAN filtering disabled */
-               .jumbo_frame    = 0, /**< Jumbo Frame Support disabled */
+               .jumbo_frame    = 1, /**< Jumbo Frame Support enabled */
                .hw_strip_crc   = 0, /**< CRC stripped by hardware */
+               .max_rx_pkt_len = 9000,
        },
        .txmode = {
                .mq_mode = ETH_MQ_TX_NONE,
diff --git a/lib/librte_pmd_ixgbe/ixgbe_rxtx_vec.c
b/lib/librte_pmd_ixgbe/ixgbe_rxtx_vec.c
index b54cb19..dfaccee 100644
--- a/lib/librte_pmd_ixgbe/ixgbe_rxtx_vec.c
+++ b/lib/librte_pmd_ixgbe/ixgbe_rxtx_vec.c
@@ -402,10 +402,10 @@ reassemble_packets(struct igb_rx_queue *rxq,
struct rte_mbuf **rx_bufs,
        struct rte_mbuf *pkts[RTE_IXGBE_VPMD_RX_BURST]; /*finished pkts*/
        struct rte_mbuf *start = rxq->pkt_first_seg;
        struct rte_mbuf *end =  rxq->pkt_last_seg;
-       unsigned pkt_idx = 0, buf_idx = 0;
+       unsigned pkt_idx, buf_idx;
 
 
-       while (buf_idx < nb_bufs) {
+       for (buf_idx = 0, pkt_idx = 0; buf_idx < nb_bufs; buf_idx++) {
                if (end != NULL) {
                        /* processing a split packet */
                        end->next = rx_bufs[buf_idx];
@@ -448,7 +448,6 @@ reassemble_packets(struct igb_rx_queue *rxq, struct
rte_mbuf **rx_bufs,
                        rx_bufs[buf_idx]->data_len += rxq->crc_len;
                        rx_bufs[buf_idx]->pkt_len += rxq->crc_len;
                }
-               buf_idx++;
        }
 
        /* save the partial packet for next time */


This includes your previously posted fix and makes a small modification
to the l2fwd example app to enable jumbo frames of up to 9000 bytes.
The system is equipped with a two-port Intel 82599 card and both ports
are hooked up to a packet generator. The packet generator produces
simple Ethernet/IPv4/UDP packets.
I started the l2fwd app with the following command line:

$ ./build/l2fwd -c f -n 4 -- -q 8 -p 3

Both build variants that I have tested (CONFIG_RTE_IXGBE_INC_VECTOR=y
and CONFIG_RTE_IXGBE_INC_VECTOR=n) now give me the same result:
As long as the packet size is <= 2048 bytes the application behaves
normally and all packets are forwarded as expected.
As soon as the packet size exceeds 2048 bytes the application will only
forward some packets and then stop forwarding altogether. Even small
packets will not be forwarded anymore.

If you want me to try out anything else just let me know.


Best regards,
Martin




On 21.01.15 14:49, Bruce Richardson wrote:
> [...]


* Re: [dpdk-dev] Segmentation fault in ixgbe_rxtx_vec.c:444 with 1.8.0
  2015-01-23 11:37     ` Martin Weiser
@ 2015-01-23 11:52       ` Bruce Richardson
  2015-01-23 14:59         ` Martin Weiser
  0 siblings, 1 reply; 10+ messages in thread
From: Bruce Richardson @ 2015-01-23 11:52 UTC (permalink / raw)
  To: Martin Weiser; +Cc: dev

On Fri, Jan 23, 2015 at 12:37:09PM +0100, Martin Weiser wrote:
> [...]
> 
I think the txq flags are at fault here. The default txq flags setting for
the l2fwd sample application includes the flag ETH_TXQ_FLAGS_NOMULTSEGS, which
disables support for sending packets with multiple segments, i.e. jumbo frames
in this case. If you change l2fwd to explicitly pass a txqflags parameter as
part of the port setup (as was the case in previous releases), and set txqflags
to 0, does the problem go away?
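
Something along these lines in the tx queue setup (a sketch only; the
threshold values are those the pre-1.8 l2fwd used):

	static struct rte_eth_txconf tx_conf = {
		.tx_thresh = {
			.pthresh = 36,	/* TX prefetch threshold */
			.hthresh = 0,	/* TX host threshold */
			.wthresh = 0,	/* TX write-back threshold */
		},
		.txq_flags = 0,	/* no ETH_TXQ_FLAGS_NOMULTSEGS:
				 * multi-segment TX stays enabled */
	};

	ret = rte_eth_tx_queue_setup(portid, 0, nb_txd,
			rte_eth_dev_socket_id(portid), &tx_conf);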

/Bruce



* Re: [dpdk-dev] Segmentation fault in ixgbe_rxtx_vec.c:444 with 1.8.0
  2015-01-23 11:52       ` Bruce Richardson
@ 2015-01-23 14:59         ` Martin Weiser
  2015-02-06 13:41           ` [dpdk-dev] [PATCH] ixgbe: fix vector PMD chained mbuf receive Bruce Richardson
  0 siblings, 1 reply; 10+ messages in thread
From: Martin Weiser @ 2015-01-23 14:59 UTC (permalink / raw)
  To: Bruce Richardson; +Cc: dev

Hi Bruce,

yes, you are absolutely right. That resolves the problem.
I was really happy to see that DPDK 1.8 includes proper default
configurations for each driver, and I made use of this. But unfortunately
I was not aware that the default configuration includes the
ETH_TXQ_FLAGS_NOMULTSEGS flag for ixgbe and i40e.
I now use rte_eth_dev_info_get to get the default config for the port
and then modify the txq_flags to not include ETH_TXQ_FLAGS_NOMULTSEGS.
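
In code, that amounts to something like this (sketch; portid and nb_txd as
set up elsewhere):

	struct rte_eth_dev_info dev_info;
	struct rte_eth_txconf txq_conf;

	rte_eth_dev_info_get(portid, &dev_info);
	txq_conf = dev_info.default_txconf;	/* driver-recommended defaults */
	txq_conf.txq_flags &= ~ETH_TXQ_FLAGS_NOMULTSEGS; /* allow chained mbufs on TX */
	rte_eth_tx_queue_setup(portid, 0, nb_txd,
			rte_eth_dev_socket_id(portid), &txq_conf);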

With your fix this now works for CONFIG_RTE_IXGBE_INC_VECTOR=y, too.

Sorry for missing this and thanks for the quick help.

Best regards,
Martin


On 23.01.15 12:52, Bruce Richardson wrote:
> [...]


* [dpdk-dev] [PATCH] ixgbe: fix vector PMD chained mbuf receive
  2015-01-23 14:59         ` Martin Weiser
@ 2015-02-06 13:41           ` Bruce Richardson
  2015-02-20 11:00             ` Thomas Monjalon
  0 siblings, 1 reply; 10+ messages in thread
From: Bruce Richardson @ 2015-02-06 13:41 UTC (permalink / raw)
  To: dev

When the vector pmd was receiving a mix of packets of various sizes,
some of which were split across multiple mbufs, there was an issue
with reassembly of the jumbo frames. This was due to a skipped increment
when using "continue" in a while loop. Changing the loop to a "for"
loop fixes this problem, by ensuring the increment is always performed.

Reported-by: Prashant Upadhyaya <praupadhyaya@gmail.com>
Reported-by: Martin Weiser <martin.weiser@allegro-packets.com>
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Tested-by: Martin Weiser <martin.weiser@allegro-packets.com>
---
 lib/librte_pmd_ixgbe/ixgbe_rxtx_vec.c | 5 ++---
 1 file changed, 2 insertions(+), 3 deletions(-)

diff --git a/lib/librte_pmd_ixgbe/ixgbe_rxtx_vec.c b/lib/librte_pmd_ixgbe/ixgbe_rxtx_vec.c
index b54cb19..dfaccee 100644
--- a/lib/librte_pmd_ixgbe/ixgbe_rxtx_vec.c
+++ b/lib/librte_pmd_ixgbe/ixgbe_rxtx_vec.c
@@ -402,10 +402,10 @@ reassemble_packets(struct igb_rx_queue *rxq, struct rte_mbuf **rx_bufs,
 	struct rte_mbuf *pkts[RTE_IXGBE_VPMD_RX_BURST]; /*finished pkts*/
 	struct rte_mbuf *start = rxq->pkt_first_seg;
 	struct rte_mbuf *end =  rxq->pkt_last_seg;
-	unsigned pkt_idx = 0, buf_idx = 0;
+	unsigned pkt_idx, buf_idx;
 
 
-	while (buf_idx < nb_bufs) {
+	for (buf_idx = 0, pkt_idx = 0; buf_idx < nb_bufs; buf_idx++) {
 		if (end != NULL) {
 			/* processing a split packet */
 			end->next = rx_bufs[buf_idx];
@@ -448,7 +448,6 @@ reassemble_packets(struct igb_rx_queue *rxq, struct rte_mbuf **rx_bufs,
 			rx_bufs[buf_idx]->data_len += rxq->crc_len;
 			rx_bufs[buf_idx]->pkt_len += rxq->crc_len;
 		}
-		buf_idx++;
 	}
 
 	/* save the partial packet for next time */
-- 
2.1.0


* Re: [dpdk-dev] [PATCH] ixgbe: fix vector PMD chained mbuf receive
  2015-02-06 13:41           ` [dpdk-dev] [PATCH] ixgbe: fix vector PMD chained mbuf receive Bruce Richardson
@ 2015-02-20 11:00             ` Thomas Monjalon
  0 siblings, 0 replies; 10+ messages in thread
From: Thomas Monjalon @ 2015-02-20 11:00 UTC (permalink / raw)
  To: Bruce Richardson; +Cc: dev

> When the vector pmd was receiving a mix of packets of various sizes,
> some of which were split across multiple mbufs, there was an issue
> with reassembly of the jumbo frames. This was due to a skipped increment
> when using "continue" in a while loop. Changing the loop to a "for"
> loop fixes this problem, by ensuring the increment is always performed.
> 
> Reported-by: Prashant Upadhyaya <praupadhyaya@gmail.com>
> Reported-by: Martin Weiser <martin.weiser@allegro-packets.com>
> Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
> Tested-by: Martin Weiser <martin.weiser@allegro-packets.com>

Applied, thanks


Thread overview: 10+ messages
2015-01-19 10:26 [dpdk-dev] Segmentation fault in ixgbe_rxtx_vec.c:444 with 1.8.0 Martin Weiser
2015-01-20 10:39 ` Martin Weiser
2015-01-21 13:49   ` Bruce Richardson
2015-01-22 14:05     ` Prashant Upadhyaya
2015-01-22 15:19       ` Bruce Richardson
2015-01-23 11:37     ` Martin Weiser
2015-01-23 11:52       ` Bruce Richardson
2015-01-23 14:59         ` Martin Weiser
2015-02-06 13:41           ` [dpdk-dev] [PATCH] ixgbe: fix vector PMD chained mbuf receive Bruce Richardson
2015-02-20 11:00             ` Thomas Monjalon
