From mboxrd@z Thu Jan 1 00:00:00 1970
From: "Wiles, Keith"
To: Filip Janiszewski
CC: "users@dpdk.org"
Date: Fri, 15 Feb 2019 13:30:30 +0000
Subject: Re: [dpdk-users] RX of multi-segment jumbo frames
Message-ID: <1E987BE7-8E9B-4D89-9E52-7D44BD9C778A@intel.com>
References: <4bd38b68-02e7-f031-5627-2bd2c9a38333@filipjaniszewski.com> <95B2277E-2E64-4703-97C3-022967A7F175@intel.com> <2ACB2CB5-241D-44AC-8203-5E2827885150@intel.com>
List-Id: DPDK usage discussions
> On Feb 14, 2019, at 11:59 PM, Filip Janiszewski wrote:
>
> Unfortunately I didn't get much help from the maintainers at Mellanox,
> but I discovered that DPDK 18.05 has the flag
> ignore_offload_bitfield, which, once set to 1 along with the offloads
> DEV_RX_OFFLOAD_JUMBO_FRAME | DEV_RX_OFFLOAD_SCATTER, allows DPDK to
> capture jumbo frames on Mellanox:
>
> https://doc.dpdk.org/api-18.05/structrte__eth__rxmode.html
>
> In DPDK 19.02 this flag is missing and I can't capture jumbos with my
> current configuration.
>
> Sadly, even though setting ignore_offload_bitfield to 1 fixes my problem, it
> creates a bunch more; for example, the incoming packets are not
> timestamped (setting hw_timestamp to 1 does not fix the issue, as the
> timestamps are still EPOCH + some ms).
>
> Not sure if this triggers any ideas; it is not completely clear to me
> what the purpose of ignore_offload_bitfield was (it was removed later), or how
> to enable jumbo frames properly.
>
> What I've attempted so far (apart from ignore_offload_bitfield):
>
> 1) Set the MTU to 9600 (rte_eth_dev_set_mtu)
> 2) Configure the port with offloads DEV_RX_OFFLOAD_SCATTER |
> DEV_RX_OFFLOAD_JUMBO_FRAME and max_rx_pkt_len set to 9600
> 3) Configure the RX queue with default_rxconf (from rte_eth_dev_info), adding
> the offloads from the port configuration (DEV_RX_OFFLOAD_SCATTER |
> DEV_RX_OFFLOAD_JUMBO_FRAME)
>
> The jumbo frames are reported as ierrors in rte_eth_stats.

Sorry, the last time I had any dealings with Mellanox I was not able to get it to work, so I'm not going to be much help here.
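For reference, the three configuration steps quoted above could be sketched roughly as below. This is a rough, untested sketch against the DPDK 18.05-era API (including the short-lived ignore_offload_bitfield field); port_id, the mbuf_pool, and the error handling are assumptions, not part of the original report:

```c
/* Sketch only: DPDK 18.05-era API. port_id and mbuf_pool are assumed
 * to be created elsewhere; most error handling is omitted for brevity. */
#include <rte_ethdev.h>
#include <rte_mbuf.h>

static int setup_jumbo_rx(uint16_t port_id, struct rte_mempool *mbuf_pool)
{
	struct rte_eth_dev_info dev_info;
	rte_eth_dev_info_get(port_id, &dev_info);

	struct rte_eth_conf port_conf = { 0 };
	port_conf.rxmode.max_rx_pkt_len = 9600;
	/* SCATTER lets the PMD chain several mbufs per received frame;
	 * JUMBO_FRAME lifts the frame-size limit. */
	port_conf.rxmode.offloads = DEV_RX_OFFLOAD_JUMBO_FRAME |
				    DEV_RX_OFFLOAD_SCATTER;
	/* 18.05 only: honor rxmode.offloads instead of the old per-field
	 * bitfield; this field was removed in later releases. */
	port_conf.rxmode.ignore_offload_bitfield = 1;

	if (rte_eth_dev_configure(port_id, 1, 1, &port_conf) < 0)
		return -1;
	if (rte_eth_dev_set_mtu(port_id, 9600) < 0)
		return -1;

	/* Step 3: start from default_rxconf and add the port offloads. */
	struct rte_eth_rxconf rxq_conf = dev_info.default_rxconf;
	rxq_conf.offloads = port_conf.rxmode.offloads;
	return rte_eth_rx_queue_setup(port_id, 0, 1024,
				      rte_eth_dev_socket_id(port_id),
				      &rxq_conf, mbuf_pool);
}
```

This is a hardware- and version-dependent configuration fragment, so it only compiles against the matching DPDK headers.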
>
> Thanks
>
> On 09/02/19 16:36, Wiles, Keith wrote:
>>
>>> On Feb 9, 2019, at 9:27 AM, Filip Janiszewski wrote:
>>>
>>> On 09/02/19 14:51, Wiles, Keith wrote:
>>>>
>>>>> On Feb 9, 2019, at 5:11 AM, Filip Janiszewski wrote:
>>>>>
>>>>> Hi,
>>>>>
>>>>> I'm attempting to receive jumbo frames (~9000 bytes) on a Mellanox card
>>>>> using DPDK. I've configured the DEV_RX_OFFLOAD_JUMBO_FRAME offload for
>>>>> rte_eth_conf and rte_eth_rxconf (per RX queue), but I can capture jumbo
>>>>> frames only if the mbuf is large enough to contain the whole packet. Is
>>>>> there a way to enable DPDK to chain the incoming data in mbufs smaller
>>>>> than the actual packet?
>>>>>
>>>>> We don't have many of those big packets coming in, so it would be optimal
>>>>> to leave the mbuf size at RTE_MBUF_DEFAULT_BUF_SIZE and then configure
>>>>> the RX device to chain those bufs for larger packets, but I can't find a
>>>>> way to do it. Any suggestion?
>>>>
>>>> As best I understand it, the NIC or PMD needs to be configured to split
>>>> up packets between mbufs in the RX ring. I would look in the docs for the
>>>> NIC to see if it supports splitting up packets, or ask the maintainer
>>>> listed in the MAINTAINERS file.
>>>
>>> I can capture jumbo packets with Wireshark on the same card (same port,
>>> same setup), which leads me to think the problem is purely in my DPDK
>>> configuration.
>>>
>>> According to ethtool, the jumbo packet (from now on JF, jumbo frame) is
>>> detected at the PHY level; the counters rx_packets_phy, rx_bytes_phy and
>>> rx_8192_to_10239_bytes_phy are properly increased.
>>>
>>> There was an option to set up JF support manually, but it was removed
>>> from DPDK after version 16.07: CONFIG_RTE_LIBRTE_MLX5_SGE_WR_N.
>>> According to the release notes:
>>>
>>> .
>>> Improved jumbo frames support, by dynamically setting RX scatter-gather
>>> elements according to the MTU and mbuf size; no need for the compilation
>>> parameter ``MLX5_PMD_SGE_WR_N``.
>>> .
>>>
>>> Not quite sure where to look next..
>>
>> The maintainer is your best bet now.
>>
>>>>> Thanks
>>>>>
>>>>> --
>>>>> BR, Filip
>>>>> +48 666 369 823
>>>>
>>>> Regards,
>>>> Keith
>>>
>>> --
>>> BR, Filip
>>> +48 666 369 823
>>
>> Regards,
>> Keith
>
> --
> BR, Filip
> +48 666 369 823

Regards,
Keith