From: Martin Weiser
To: Bruce Richardson
Cc: dev@dpdk.org
Date: Fri, 23 Jan 2015 15:59:56 +0100
Subject: Re: [dpdk-dev] Segmentation fault in ixgbe_rxtx_vec.c:444 with 1.8.0
Message-ID: <54C261EC.8000104@allegro-packets.com>
In-Reply-To: <20150123115244.GA10808@bricha3-MOBL3>

Hi Bruce,

yes, you are absolutely right. That resolves the problem.
I was really happy to see that DPDK 1.8 includes proper default
configurations for each driver, and I made use of this. Unfortunately I
was not aware that the default configuration includes the
ETH_TXQ_FLAGS_NOMULTSEGS flag for ixgbe and i40e.
I now use rte_eth_dev_info_get() to get the default configuration for
the port and then clear ETH_TXQ_FLAGS_NOMULTSEGS from txq_flags. With
your fix this now works with CONFIG_RTE_IXGBE_INC_VECTOR=y, too.
Sorry for missing this, and thanks for the quick help.
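
In case it is useful to anyone hitting the same problem, the TX queue
setup now looks roughly like the sketch below. It is only an
illustration: the function name, port/queue ids, descriptor count and
the missing error handling are placeholders, not our actual code.

#include <rte_ethdev.h>

static int
setup_tx_queue(uint8_t port_id, uint16_t queue_id, uint16_t nb_txd)
{
	struct rte_eth_dev_info dev_info;
	struct rte_eth_txconf txconf;

	/* fetch the driver-provided defaults for this port (new in 1.8) */
	rte_eth_dev_info_get(port_id, &dev_info);

	/* start from the default TX queue configuration ... */
	txconf = dev_info.default_txconf;
	/* ... but allow multi-segment mbufs, i.e. jumbo frames */
	txconf.txq_flags &= ~ETH_TXQ_FLAGS_NOMULTSEGS;

	return rte_eth_tx_queue_setup(port_id, queue_id, nb_txd,
			rte_eth_dev_socket_id(port_id), &txconf);
}

The point is simply to start from default_txconf instead of a
hand-written txconf and to clear the NOMULTSEGS bit before calling
rte_eth_tx_queue_setup().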
Best regards,
Martin


On 23.01.15 12:52, Bruce Richardson wrote:
> On Fri, Jan 23, 2015 at 12:37:09PM +0100, Martin Weiser wrote:
>> Hi Bruce,
>>
>> I now had the chance to reproduce the issue we are seeing with a DPDK
>> example app. I started out with a vanilla DPDK 1.8.0 and only made the
>> following changes:
>>
>> diff --git a/examples/l2fwd/main.c b/examples/l2fwd/main.c
>> index e684234..48e6b7c 100644
>> --- a/examples/l2fwd/main.c
>> +++ b/examples/l2fwd/main.c
>> @@ -118,8 +118,9 @@ static const struct rte_eth_conf port_conf = {
>>  		.header_split   = 0, /**< Header Split disabled */
>>  		.hw_ip_checksum = 0, /**< IP checksum offload disabled */
>>  		.hw_vlan_filter = 0, /**< VLAN filtering disabled */
>> -		.jumbo_frame    = 0, /**< Jumbo Frame Support disabled */
>> +		.jumbo_frame    = 1, /**< Jumbo Frame Support enabled */
>>  		.hw_strip_crc   = 0, /**< CRC stripped by hardware */
>> +		.max_rx_pkt_len = 9000,
>>  	},
>>  	.txmode = {
>>  		.mq_mode = ETH_MQ_TX_NONE,
>> diff --git a/lib/librte_pmd_ixgbe/ixgbe_rxtx_vec.c b/lib/librte_pmd_ixgbe/ixgbe_rxtx_vec.c
>> index b54cb19..dfaccee 100644
>> --- a/lib/librte_pmd_ixgbe/ixgbe_rxtx_vec.c
>> +++ b/lib/librte_pmd_ixgbe/ixgbe_rxtx_vec.c
>> @@ -402,10 +402,10 @@ reassemble_packets(struct igb_rx_queue *rxq, struct rte_mbuf **rx_bufs,
>>  	struct rte_mbuf *pkts[RTE_IXGBE_VPMD_RX_BURST]; /*finished pkts*/
>>  	struct rte_mbuf *start = rxq->pkt_first_seg;
>>  	struct rte_mbuf *end = rxq->pkt_last_seg;
>> -	unsigned pkt_idx = 0, buf_idx = 0;
>> +	unsigned pkt_idx, buf_idx;
>>
>>
>> -	while (buf_idx < nb_bufs) {
>> +	for (buf_idx = 0, pkt_idx = 0; buf_idx < nb_bufs; buf_idx++) {
>>  		if (end != NULL) {
>>  			/* processing a split packet */
>>  			end->next = rx_bufs[buf_idx];
>> @@ -448,7 +448,6 @@ reassemble_packets(struct igb_rx_queue *rxq, struct rte_mbuf **rx_bufs,
>>  			rx_bufs[buf_idx]->data_len += rxq->crc_len;
>>  			rx_bufs[buf_idx]->pkt_len += rxq->crc_len;
>>  		}
>> -		buf_idx++;
>>  	}
>>
>>  	/* save the partial packet for next time */
>>
>>
>> This includes your previously posted fix and makes a small modification
>> to the l2fwd example app to enable jumbo frames of up to 9000 bytes.
>> The system is equipped with a two-port Intel 82599 card and both ports
>> are hooked up to a packet generator. The packet generator produces
>> simple Ethernet/IPv4/UDP packets.
>> I started the l2fwd app with the following command line:
>>
>> $ ./build/l2fwd -c f -n 4 -- -q 8 -p 3
>>
>> Both build variants that I have tested (CONFIG_RTE_IXGBE_INC_VECTOR=y
>> and CONFIG_RTE_IXGBE_INC_VECTOR=n) now give me the same result:
>> As long as the packet size is <= 2048 bytes the application behaves
>> normally and all packets are forwarded as expected.
>> As soon as the packet size exceeds 2048 bytes the application will only
>> forward some packets and then stop forwarding altogether. Even small
>> packets will not be forwarded anymore.
>>
>> If you want me to try out anything else just let me know.
>>
>>
>> Best regards,
>> Martin
>>
> I think the txq flags are at fault here. The default txq flags setting for
> the l2fwd sample application includes the flag ETH_TXQ_FLAGS_NOMULTSEGS, which
> disables support for sending packets with multiple segments, i.e. jumbo frames
> in this case. If you change l2fwd to explicitly pass a txqflags parameter in
> as part of the port setup (as was the case in previous releases), and set
> txqflags to 0, does the problem go away?
>
> /Bruce
>
>>
>> On 21.01.15 14:49, Bruce Richardson wrote:
>>> On Tue, Jan 20, 2015 at 11:39:03AM +0100, Martin Weiser wrote:
>>>> Hi again,
>>>>
>>>> I did some further testing and it seems like this issue is linked to
>>>> jumbo frames.
>>>> I think a similar issue has already been reported by
>>>> Prashant Upadhyaya with the subject 'Packet Rx issue with DPDK1.8'.
>>>> In our application we use the following rxmode port configuration:
>>>>
>>>>     .mq_mode        = ETH_MQ_RX_RSS,
>>>>     .split_hdr_size = 0,
>>>>     .header_split   = 0,
>>>>     .hw_ip_checksum = 1,
>>>>     .hw_vlan_filter = 0,
>>>>     .jumbo_frame    = 1,
>>>>     .hw_strip_crc   = 1,
>>>>     .max_rx_pkt_len = 9000,
>>>>
>>>> and the mbuf size is calculated as follows:
>>>>
>>>>     (2048 + sizeof(struct rte_mbuf) + RTE_PKTMBUF_HEADROOM)
>>>>
>>>> This works fine with DPDK 1.7: jumbo frames are split into buffer
>>>> chains and can be forwarded on another port without a problem.
>>>> With DPDK 1.8 and the default configuration (CONFIG_RTE_IXGBE_INC_VECTOR
>>>> enabled) the application sometimes crashes as described in my first
>>>> mail, and sometimes packet receiving stops, with subsequently arriving
>>>> packets counted as rx errors. When CONFIG_RTE_IXGBE_INC_VECTOR is
>>>> disabled the packet processing also comes to a halt as soon as jumbo
>>>> frames arrive, with the slightly different effect that now
>>>> rte_eth_tx_burst refuses to send any previously received packets.
>>>>
>>>> Is there anything special to consider regarding jumbo frames when moving
>>>> from DPDK 1.7 to 1.8 that we might have missed?
>>>>
>>>> Martin
>>>>
>>>>
>>>>
>>>> On 19.01.15 11:26, Martin Weiser wrote:
>>>>> Hi everybody,
>>>>>
>>>>> we quite recently updated one of our applications to DPDK 1.8.0 and are
>>>>> now seeing a segmentation fault in ixgbe_rxtx_vec.c:444 after a few minutes.
>>>>> I just did some quick debugging, and I only have a very limited
>>>>> understanding of the code in question, but it seems that the 'continue'
>>>>> in line 445 without increasing 'buf_idx' might cause the problem. In one
>>>>> debugging session when the crash occurred the value of 'buf_idx' was 2
>>>>> and the value of 'pkt_idx' was 8965.
>>>>> Any help with this issue would be greatly appreciated. If you need any
>>>>> further information just let me know.
>>>>>
>>>>> Martin
>>>>>
>>>>>
>>> Hi Martin, Prashant,
>>>
>>> I've managed to reproduce the issue here and had a look at it. Could you
>>> both perhaps try the proposed change below and see if it fixes the problem
>>> for you and gives you a working system? If so, I'll submit this as a patch
>>> fix officially - or go back to the drawing board, if not. :-)
>>>
>>> diff --git a/lib/librte_pmd_ixgbe/ixgbe_rxtx_vec.c b/lib/librte_pmd_ixgbe/ixgbe_rxtx_vec.c
>>> index b54cb19..dfaccee 100644
>>> --- a/lib/librte_pmd_ixgbe/ixgbe_rxtx_vec.c
>>> +++ b/lib/librte_pmd_ixgbe/ixgbe_rxtx_vec.c
>>> @@ -402,10 +402,10 @@ reassemble_packets(struct igb_rx_queue *rxq, struct rte_mbuf **rx_bufs,
>>>  	struct rte_mbuf *pkts[RTE_IXGBE_VPMD_RX_BURST]; /*finished pkts*/
>>>  	struct rte_mbuf *start = rxq->pkt_first_seg;
>>>  	struct rte_mbuf *end = rxq->pkt_last_seg;
>>> -	unsigned pkt_idx = 0, buf_idx = 0;
>>> +	unsigned pkt_idx, buf_idx;
>>>
>>>
>>> -	while (buf_idx < nb_bufs) {
>>> +	for (buf_idx = 0, pkt_idx = 0; buf_idx < nb_bufs; buf_idx++) {
>>>  		if (end != NULL) {
>>>  			/* processing a split packet */
>>>  			end->next = rx_bufs[buf_idx];
>>> @@ -448,7 +448,6 @@ reassemble_packets(struct igb_rx_queue *rxq, struct rte_mbuf **rx_bufs,
>>>  			rx_bufs[buf_idx]->data_len += rxq->crc_len;
>>>  			rx_bufs[buf_idx]->pkt_len += rxq->crc_len;
>>>  		}
>>> -		buf_idx++;
>>>  	}
>>>
>>>  	/* save the partial packet for next time */
>>>
>>>
>>> Regards,
>>> /Bruce
>>>
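
P.S.: For completeness, the mbuf sizing quoted above is what makes the
reassembly path run in the first place: with a 2048-byte data room, a
9000-byte frame arrives as a chain of roughly five mbufs, which is
exactly what reassemble_packets() stitches back together. A rough
sketch of the matching pool creation, following the 1.8-era
rte_mempool_create() pattern (pool name and mbuf count are placeholders,
not our real values):

#include <rte_mbuf.h>
#include <rte_mempool.h>
#include <rte_lcore.h>

#define MBUF_DATA_SIZE 2048	/* data room per mbuf */
#define MBUF_SIZE (MBUF_DATA_SIZE + sizeof(struct rte_mbuf) + \
		RTE_PKTMBUF_HEADROOM)
#define NB_MBUF 8192		/* placeholder count */

static struct rte_mempool *
create_pktmbuf_pool(void)
{
	/* fixed-size mbuf elements, initialised by the pktmbuf helpers */
	return rte_mempool_create("pktmbuf_pool", NB_MBUF, MBUF_SIZE, 32,
			sizeof(struct rte_pktmbuf_pool_private),
			rte_pktmbuf_pool_init, NULL,
			rte_pktmbuf_init, NULL,
			rte_socket_id(), 0);
}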