DPDK usage discussions
* [dpdk-users] RX of multi-segment jumbo frames
@ 2019-02-09 11:11 Filip Janiszewski
  2019-02-09 13:51 ` Wiles, Keith
  0 siblings, 1 reply; 6+ messages in thread
From: Filip Janiszewski @ 2019-02-09 11:11 UTC (permalink / raw)
  To: users

Hi,

I'm attempting to receive jumbo frames (~9000 bytes) on a Mellanox card
using DPDK. I've configured the DEV_RX_OFFLOAD_JUMBO_FRAME offload for
rte_eth_conf and rte_eth_rxconf (per RX queue), but I can capture jumbo
frames only if the mbuf is large enough to contain the whole packet. Is
there a way to make DPDK chain the incoming data across mbufs smaller
than the actual packet?

We don't have many of those big packets coming in, so it would be
optimal to leave the mbuf size at RTE_MBUF_DEFAULT_BUF_SIZE and
configure the RX device to chain those bufs for larger packets, but I
can't find a way to do it. Any suggestions?
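
For context, this is roughly how the port is set up right now (a
trimmed sketch of my code; the pool size and name are illustrative, and
dev_info comes from an earlier rte_eth_dev_info_get() call):

    /* Port-level config: jumbo offload enabled, max frame ~9000 bytes */
    struct rte_eth_conf port_conf = { 0 };
    port_conf.rxmode.max_rx_pkt_len = 9000;
    port_conf.rxmode.offloads = DEV_RX_OFFLOAD_JUMBO_FRAME;

    /* Per-queue config carries the same offload flag */
    struct rte_eth_rxconf rxconf = dev_info.default_rxconf;
    rxconf.offloads = port_conf.rxmode.offloads;

    /* mbufs deliberately left at the default data room size; this is
     * the size I'd like to keep while still receiving jumbo frames */
    struct rte_mempool *pool = rte_pktmbuf_pool_create("rx_pool",
            8192, 256, 0, RTE_MBUF_DEFAULT_BUF_SIZE, rte_socket_id());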

Thanks

-- 
BR, Filip
+48 666 369 823


* Re: [dpdk-users] RX of multi-segment jumbo frames
  2019-02-09 11:11 [dpdk-users] RX of multi-segment jumbo frames Filip Janiszewski
@ 2019-02-09 13:51 ` Wiles, Keith
  2019-02-09 15:27   ` Filip Janiszewski
  0 siblings, 1 reply; 6+ messages in thread
From: Wiles, Keith @ 2019-02-09 13:51 UTC (permalink / raw)
  To: Filip Janiszewski; +Cc: users



> On Feb 9, 2019, at 5:11 AM, Filip Janiszewski <contact@filipjaniszewski.com> wrote:
> 
> Hi,
> 
> I'm attempting to receive jumbo frames (~9000 bytes) on a Mellanox card
> using DPDK. I've configured the DEV_RX_OFFLOAD_JUMBO_FRAME offload for
> rte_eth_conf and rte_eth_rxconf (per RX queue), but I can capture jumbo
> frames only if the mbuf is large enough to contain the whole packet. Is
> there a way to make DPDK chain the incoming data across mbufs smaller
> than the actual packet?
> 
> We don't have many of those big packets coming in, so it would be
> optimal to leave the mbuf size at RTE_MBUF_DEFAULT_BUF_SIZE and
> configure the RX device to chain those bufs for larger packets, but I
> can't find a way to do it. Any suggestions?
> 

As best I understand it, the NIC or PMD needs to be configured to split packets across mbufs in the RX ring. I would look in the docs for the NIC to see whether it supports splitting up packets, or ask the maintainer listed in the MAINTAINERS file.
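
For example, one quick check is whether the PMD advertises scattered RX
at all (a sketch; DEV_RX_OFFLOAD_SCATTER is the capability bit I would
look for first):

    #include <stdio.h>
    #include <rte_ethdev.h>

    /* Print whether the PMD reports the scattered-RX capability */
    static void check_scatter_capa(uint16_t port_id)
    {
            struct rte_eth_dev_info dev_info;

            rte_eth_dev_info_get(port_id, &dev_info);
            if (dev_info.rx_offload_capa & DEV_RX_OFFLOAD_SCATTER)
                    printf("port %u: scattered RX supported\n", port_id);
            else
                    printf("port %u: scattered RX not supported\n", port_id);
    }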
> Thanks
> 
> -- 
> BR, Filip
> +48 666 369 823

Regards,
Keith


* Re: [dpdk-users] RX of multi-segment jumbo frames
  2019-02-09 13:51 ` Wiles, Keith
@ 2019-02-09 15:27   ` Filip Janiszewski
  2019-02-09 15:36     ` Wiles, Keith
  0 siblings, 1 reply; 6+ messages in thread
From: Filip Janiszewski @ 2019-02-09 15:27 UTC (permalink / raw)
  To: Wiles, Keith; +Cc: users



Il 09/02/19 14:51, Wiles, Keith ha scritto:
> 
> 
>> On Feb 9, 2019, at 5:11 AM, Filip Janiszewski <contact@filipjaniszewski.com> wrote:
>>
>> Hi,
>>
>> I'm attempting to receive jumbo frames (~9000 bytes) on a Mellanox card
>> using DPDK. I've configured the DEV_RX_OFFLOAD_JUMBO_FRAME offload for
>> rte_eth_conf and rte_eth_rxconf (per RX queue), but I can capture jumbo
>> frames only if the mbuf is large enough to contain the whole packet. Is
>> there a way to make DPDK chain the incoming data across mbufs smaller
>> than the actual packet?
>>
>> We don't have many of those big packets coming in, so it would be
>> optimal to leave the mbuf size at RTE_MBUF_DEFAULT_BUF_SIZE and
>> configure the RX device to chain those bufs for larger packets, but I
>> can't find a way to do it. Any suggestions?
>>
> 
> As best I understand it, the NIC or PMD needs to be configured to split packets across mbufs in the RX ring. I would look in the docs for the NIC to see whether it supports splitting up packets, or ask the maintainer listed in the MAINTAINERS file.

I can capture jumbo packets with Wireshark on the same card (same port,
same setup), which leads me to think the problem is purely in my DPDK
card configuration.

According to ethtool, the jumbo packets (from now on JF, Jumbo Frames)
are detected at the PHY level; the counters rx_packets_phy, rx_bytes_phy
and rx_8192_to_10239_bytes_phy are properly increased.
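
The DPDK-side counterpart of those counters can be dumped with the
xstats API; this is the generic sketch I use to compare against ethtool
(not mlx5-specific, the exact counter names depend on the PMD):

    #include <inttypes.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <rte_ethdev.h>

    /* Dump every extended statistic the PMD exposes for a port */
    static void dump_xstats(uint16_t port_id)
    {
            int n = rte_eth_xstats_get_names(port_id, NULL, 0);
            if (n <= 0)
                    return;

            struct rte_eth_xstat_name *names = calloc(n, sizeof(*names));
            struct rte_eth_xstat *vals = calloc(n, sizeof(*vals));

            rte_eth_xstats_get_names(port_id, names, n);
            n = rte_eth_xstats_get(port_id, vals, n);
            for (int i = 0; i < n; i++)
                    printf("%s: %" PRIu64 "\n",
                           names[vals[i].id].name, vals[i].value);

            free(names);
            free(vals);
    }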

There used to be an option to manually set up JF support,
CONFIG_RTE_LIBRTE_MLX5_SGE_WR_N, but it was removed from DPDK after
version 16.07. According to the release note:

.
Improved jumbo frames support, by dynamically setting RX scatter gather
elements according to the MTU and mbuf size, no need for compilation
parameter ``MLX5_PMD_SGE_WR_N``
.

Not quite sure where to look next..

>> Thanks
>>
>> -- 
>> BR, Filip
>> +48 666 369 823
> 
> Regards,
> Keith
> 

-- 
BR, Filip
+48 666 369 823


* Re: [dpdk-users] RX of multi-segment jumbo frames
  2019-02-09 15:27   ` Filip Janiszewski
@ 2019-02-09 15:36     ` Wiles, Keith
  2019-02-15  5:59       ` Filip Janiszewski
  0 siblings, 1 reply; 6+ messages in thread
From: Wiles, Keith @ 2019-02-09 15:36 UTC (permalink / raw)
  To: Filip Janiszewski; +Cc: users



> On Feb 9, 2019, at 9:27 AM, Filip Janiszewski <contact@filipjaniszewski.com> wrote:
> 
> 
> 
> Il 09/02/19 14:51, Wiles, Keith ha scritto:
>> 
>> 
>>> On Feb 9, 2019, at 5:11 AM, Filip Janiszewski <contact@filipjaniszewski.com> wrote:
>>> 
>>> Hi,
>>> 
>>> I'm attempting to receive jumbo frames (~9000 bytes) on a Mellanox card
>>> using DPDK. I've configured the DEV_RX_OFFLOAD_JUMBO_FRAME offload for
>>> rte_eth_conf and rte_eth_rxconf (per RX queue), but I can capture jumbo
>>> frames only if the mbuf is large enough to contain the whole packet. Is
>>> there a way to make DPDK chain the incoming data across mbufs smaller
>>> than the actual packet?
>>> 
>>> We don't have many of those big packets coming in, so it would be
>>> optimal to leave the mbuf size at RTE_MBUF_DEFAULT_BUF_SIZE and
>>> configure the RX device to chain those bufs for larger packets, but I
>>> can't find a way to do it. Any suggestions?
>>> 
>> 
>> As best I understand it, the NIC or PMD needs to be configured to split packets across mbufs in the RX ring. I would look in the docs for the NIC to see whether it supports splitting up packets, or ask the maintainer listed in the MAINTAINERS file.
> 
> I can capture jumbo packets with Wireshark on the same card (same port,
> same setup), which leads me to think the problem is purely in my DPDK
> card configuration.
> 
> According to ethtool, the jumbo packets (from now on JF, Jumbo Frames)
> are detected at the PHY level; the counters rx_packets_phy, rx_bytes_phy
> and rx_8192_to_10239_bytes_phy are properly increased.
> 
> There used to be an option to manually set up JF support,
> CONFIG_RTE_LIBRTE_MLX5_SGE_WR_N, but it was removed from DPDK after
> version 16.07. According to the release note:
> 
> .
> Improved jumbo frames support, by dynamically setting RX scatter gather
> elements according to the MTU and mbuf size, no need for compilation
> parameter ``MLX5_PMD_SGE_WR_N``
> .
> 
> Not quite sure where to look next..
> 

The maintainer is your best bet now.
>>> Thanks
>>> 
>>> -- 
>>> BR, Filip
>>> +48 666 369 823
>> 
>> Regards,
>> Keith
>> 
> 
> -- 
> BR, Filip
> +48 666 369 823

Regards,
Keith


* Re: [dpdk-users] RX of multi-segment jumbo frames
  2019-02-09 15:36     ` Wiles, Keith
@ 2019-02-15  5:59       ` Filip Janiszewski
  2019-02-15 13:30         ` Wiles, Keith
  0 siblings, 1 reply; 6+ messages in thread
From: Filip Janiszewski @ 2019-02-15  5:59 UTC (permalink / raw)
  To: users; +Cc: Wiles, Keith

Unfortunately I didn't get much help from the maintainers at Mellanox,
but I discovered that DPDK 18.05 has the flag ignore_offload_bitfield
which, once set to 1 along with the offloads
DEV_RX_OFFLOAD_JUMBO_FRAME | DEV_RX_OFFLOAD_SCATTER, allows DPDK to
capture jumbo frames on Mellanox:

https://doc.dpdk.org/api-18.05/structrte__eth__rxmode.html
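
Concretely, the 18.05 combination that works for me looks like this (a
sketch; port_id and the queue counts come from the surrounding setup,
and the timestamp remark is my reading of the API, not something I have
verified):

    /* DPDK 18.05: switch the PMD to the offloads-style RX config */
    struct rte_eth_conf port_conf = { 0 };
    port_conf.rxmode.max_rx_pkt_len = 9600;
    port_conf.rxmode.ignore_offload_bitfield = 1;
    port_conf.rxmode.offloads = DEV_RX_OFFLOAD_JUMBO_FRAME |
                                DEV_RX_OFFLOAD_SCATTER;
    /* With the bitfields ignored, the old hw_timestamp bit presumably
     * has no effect; DEV_RX_OFFLOAD_TIMESTAMP would be the
     * offloads-style equivalent -- untested on my side */
    rte_eth_dev_configure(port_id, nb_rx_queues, nb_tx_queues, &port_conf);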

In DPDK 19.02 this flag is missing, and I can't capture jumbo frames
with my current configuration.

Sadly, even if setting ignore_offload_bitfield to 1 fixes my problem,
it creates a bunch more: the incoming packets are not timestamped, for
example (setting hw_timestamp to 1 does not fix the issue, as the
timestamps are still EPOCH + some ms).

Not sure if this triggers any ideas; it's not completely clear to me
what the purpose of ignore_offload_bitfield was (it was removed later)
or how to enable jumbo frames properly.

What I've attempted so far (apart from the ignore_offload_bitfield):

1) Set the MTU to 9600 (rte_eth_dev_set_mtu)
2) Configure the port with offloads DEV_RX_OFFLOAD_SCATTER |
DEV_RX_OFFLOAD_JUMBO_FRAME and max_rx_pkt_len set to 9600
3) Configure the RX queue with default_rxconf (from rte_eth_dev_info),
adding the offloads from the port configuration (DEV_RX_OFFLOAD_SCATTER
| DEV_RX_OFFLOAD_JUMBO_FRAME)
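
In code, those three steps look roughly like this (a sketch of my
setup, in the order above; the queue and descriptor counts are
illustrative, and pool is the mbuf pool created earlier):

    #include <rte_ethdev.h>
    #include <rte_lcore.h>

    static int setup_jumbo_rx(uint16_t port_id, struct rte_mempool *pool)
    {
            /* 1) Set the MTU to 9600 */
            rte_eth_dev_set_mtu(port_id, 9600);

            /* 2) Port config: scatter + jumbo, max_rx_pkt_len 9600 */
            struct rte_eth_conf port_conf = { 0 };
            port_conf.rxmode.max_rx_pkt_len = 9600;
            port_conf.rxmode.offloads = DEV_RX_OFFLOAD_SCATTER |
                                        DEV_RX_OFFLOAD_JUMBO_FRAME;
            if (rte_eth_dev_configure(port_id, 1, 1, &port_conf) < 0)
                    return -1;

            /* 3) Queue config: default_rxconf plus the port offloads */
            struct rte_eth_dev_info dev_info;
            rte_eth_dev_info_get(port_id, &dev_info);
            struct rte_eth_rxconf rxconf = dev_info.default_rxconf;
            rxconf.offloads = port_conf.rxmode.offloads;
            return rte_eth_rx_queue_setup(port_id, 0, 1024,
                            rte_socket_id(), &rxconf, pool);
    }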

The JF are reported as ierrors in rte_eth_stats.

Thanks

Il 09/02/19 16:36, Wiles, Keith ha scritto:
> 
> 
>> On Feb 9, 2019, at 9:27 AM, Filip Janiszewski <contact@filipjaniszewski.com> wrote:
>>
>>
>>
>> Il 09/02/19 14:51, Wiles, Keith ha scritto:
>>>
>>>
>>>> On Feb 9, 2019, at 5:11 AM, Filip Janiszewski <contact@filipjaniszewski.com> wrote:
>>>>
>>>> Hi,
>>>>
>>>> I'm attempting to receive jumbo frames (~9000 bytes) on a Mellanox card
>>>> using DPDK. I've configured the DEV_RX_OFFLOAD_JUMBO_FRAME offload for
>>>> rte_eth_conf and rte_eth_rxconf (per RX queue), but I can capture jumbo
>>>> frames only if the mbuf is large enough to contain the whole packet. Is
>>>> there a way to make DPDK chain the incoming data across mbufs smaller
>>>> than the actual packet?
>>>>
>>>> We don't have many of those big packets coming in, so it would be
>>>> optimal to leave the mbuf size at RTE_MBUF_DEFAULT_BUF_SIZE and
>>>> configure the RX device to chain those bufs for larger packets, but I
>>>> can't find a way to do it. Any suggestions?
>>>>
>>>
>>> As best I understand it, the NIC or PMD needs to be configured to split packets across mbufs in the RX ring. I would look in the docs for the NIC to see whether it supports splitting up packets, or ask the maintainer listed in the MAINTAINERS file.
>>
>> I can capture jumbo packets with Wireshark on the same card (same port,
>> same setup), which leads me to think the problem is purely in my DPDK
>> card configuration.
>>
>> According to ethtool, the jumbo packets (from now on JF, Jumbo Frames)
>> are detected at the PHY level; the counters rx_packets_phy, rx_bytes_phy
>> and rx_8192_to_10239_bytes_phy are properly increased.
>>
>> There used to be an option to manually set up JF support,
>> CONFIG_RTE_LIBRTE_MLX5_SGE_WR_N, but it was removed from DPDK after
>> version 16.07. According to the release note:
>>
>> .
>> Improved jumbo frames support, by dynamically setting RX scatter gather
>> elements according to the MTU and mbuf size, no need for compilation
>> parameter ``MLX5_PMD_SGE_WR_N``
>> .
>>
>> Not quite sure where to look next..
>>
> 
> The maintainer is your best bet now.
>>>> Thanks
>>>>
>>>> -- 
>>>> BR, Filip
>>>> +48 666 369 823
>>>
>>> Regards,
>>> Keith
>>>
>>
>> -- 
>> BR, Filip
>> +48 666 369 823
> 
> Regards,
> Keith
> 

-- 
BR, Filip
+48 666 369 823


* Re: [dpdk-users] RX of multi-segment jumbo frames
  2019-02-15  5:59       ` Filip Janiszewski
@ 2019-02-15 13:30         ` Wiles, Keith
  0 siblings, 0 replies; 6+ messages in thread
From: Wiles, Keith @ 2019-02-15 13:30 UTC (permalink / raw)
  To: Filip Janiszewski; +Cc: users



> On Feb 14, 2019, at 11:59 PM, Filip Janiszewski <contact@filipjaniszewski.com> wrote:
> 
> Unfortunately I didn't get much help from the maintainers at Mellanox,
> but I discovered that DPDK 18.05 has the flag ignore_offload_bitfield
> which, once set to 1 along with the offloads
> DEV_RX_OFFLOAD_JUMBO_FRAME | DEV_RX_OFFLOAD_SCATTER, allows DPDK to
> capture jumbo frames on Mellanox:
> 
> https://doc.dpdk.org/api-18.05/structrte__eth__rxmode.html
> 
> In DPDK 19.02 this flag is missing, and I can't capture jumbo frames
> with my current configuration.
> 
> Sadly, even if setting ignore_offload_bitfield to 1 fixes my problem,
> it creates a bunch more: the incoming packets are not timestamped, for
> example (setting hw_timestamp to 1 does not fix the issue, as the
> timestamps are still EPOCH + some ms).
> 
> Not sure if this triggers any ideas; it's not completely clear to me
> what the purpose of ignore_offload_bitfield was (it was removed later)
> or how to enable jumbo frames properly.
> 
> What I've attempted so far (apart from the ignore_offload_bitfield):
> 
> 1) Set the MTU to 9600 (rte_eth_dev_set_mtu)
> 2) Configure the port with offloads DEV_RX_OFFLOAD_SCATTER |
> DEV_RX_OFFLOAD_JUMBO_FRAME and max_rx_pkt_len set to 9600
> 3) Configure the RX queue with default_rxconf (from rte_eth_dev_info),
> adding the offloads from the port configuration (DEV_RX_OFFLOAD_SCATTER
> | DEV_RX_OFFLOAD_JUMBO_FRAME)
> 
> The JF are reported as ierrors in rte_eth_stats.

Sorry, the last time I had any dealings with Mellanox I was not able to get it to work, so I'm not going to be much help here.
> 
> Thanks
> 
> Il 09/02/19 16:36, Wiles, Keith ha scritto:
>> 
>> 
>>> On Feb 9, 2019, at 9:27 AM, Filip Janiszewski <contact@filipjaniszewski.com> wrote:
>>> 
>>> 
>>> 
>>> Il 09/02/19 14:51, Wiles, Keith ha scritto:
>>>> 
>>>> 
>>>>> On Feb 9, 2019, at 5:11 AM, Filip Janiszewski <contact@filipjaniszewski.com> wrote:
>>>>> 
>>>>> Hi,
>>>>> 
>>>>> I'm attempting to receive jumbo frames (~9000 bytes) on a Mellanox card
>>>>> using DPDK. I've configured the DEV_RX_OFFLOAD_JUMBO_FRAME offload for
>>>>> rte_eth_conf and rte_eth_rxconf (per RX queue), but I can capture jumbo
>>>>> frames only if the mbuf is large enough to contain the whole packet. Is
>>>>> there a way to make DPDK chain the incoming data across mbufs smaller
>>>>> than the actual packet?
>>>>> 
>>>>> We don't have many of those big packets coming in, so it would be
>>>>> optimal to leave the mbuf size at RTE_MBUF_DEFAULT_BUF_SIZE and
>>>>> configure the RX device to chain those bufs for larger packets, but I
>>>>> can't find a way to do it. Any suggestions?
>>>>> 
>>>> 
>>>> As best I understand it, the NIC or PMD needs to be configured to split packets across mbufs in the RX ring. I would look in the docs for the NIC to see whether it supports splitting up packets, or ask the maintainer listed in the MAINTAINERS file.
>>> 
>>> I can capture jumbo packets with Wireshark on the same card (same port,
>>> same setup), which leads me to think the problem is purely in my DPDK
>>> card configuration.
>>> 
>>> According to ethtool, the jumbo packets (from now on JF, Jumbo Frames)
>>> are detected at the PHY level; the counters rx_packets_phy, rx_bytes_phy
>>> and rx_8192_to_10239_bytes_phy are properly increased.
>>> 
>>> There used to be an option to manually set up JF support,
>>> CONFIG_RTE_LIBRTE_MLX5_SGE_WR_N, but it was removed from DPDK after
>>> version 16.07. According to the release note:
>>> 
>>> .
>>> Improved jumbo frames support, by dynamically setting RX scatter gather
>>> elements according to the MTU and mbuf size, no need for compilation
>>> parameter ``MLX5_PMD_SGE_WR_N``
>>> .
>>> 
>>> Not quite sure where to look next..
>>> 
>> 
>> The maintainer is your best bet now.
>>>>> Thanks
>>>>> 
>>>>> -- 
>>>>> BR, Filip
>>>>> +48 666 369 823
>>>> 
>>>> Regards,
>>>> Keith
>>>> 
>>> 
>>> -- 
>>> BR, Filip
>>> +48 666 369 823
>> 
>> Regards,
>> Keith
>> 
> 
> -- 
> BR, Filip
> +48 666 369 823

Regards,
Keith


end of thread, other threads:[~2019-02-15 13:30 UTC | newest]

Thread overview: 6+ messages
2019-02-09 11:11 [dpdk-users] RX of multi-segment jumbo frames Filip Janiszewski
2019-02-09 13:51 ` Wiles, Keith
2019-02-09 15:27   ` Filip Janiszewski
2019-02-09 15:36     ` Wiles, Keith
2019-02-15  5:59       ` Filip Janiszewski
2019-02-15 13:30         ` Wiles, Keith
