DPDK usage discussions
From: Filip Janiszewski <contact@filipjaniszewski.com>
To: "Wiles, Keith" <keith.wiles@intel.com>
Cc: "users@dpdk.org" <users@dpdk.org>
Subject: Re: [dpdk-users] RX of multi-segment jumbo frames
Date: Sat, 9 Feb 2019 16:27:24 +0100	[thread overview]
Message-ID: <dd7b9dc7-a73d-0d8a-3626-d0f7275417ec@filipjaniszewski.com> (raw)
In-Reply-To: <95B2277E-2E64-4703-97C3-022967A7F175@intel.com>



Il 09/02/19 14:51, Wiles, Keith ha scritto:
> 
> 
>> On Feb 9, 2019, at 5:11 AM, Filip Janiszewski <contact@filipjaniszewski.com> wrote:
>>
>> Hi,
>>
>> I'm attempting to receive jumbo frames (~9000 bytes) on a Mellanox card
>> using DPDK, I've configured the DEV_RX_OFFLOAD_JUMBO_FRAME offload for
>> rte_eth_conf and rte_eth_rxconf (per RX Queue), but I can capture jumbo
>> frames only if the mbuf is large enough to contain the whole packet, is
>> there a way to enable DPDK to chain the incoming data in mbufs smaller
>> than the actual packet?
>>
>> We don't have many of those big packets coming in, so would be optimal
>> to leave the mbuf size to RTE_MBUF_DEFAULT_BUF_SIZE and then configure
>> the RX device to chain those bufs for larger packets, but can't find a
>> way to do it, any suggestion?
>>
> 
> The best I understand is that the NIC or PMD needs to be configured to split up packets between mbufs in the RX ring. I would look in the docs for the NIC to see if it supports splitting up packets, or ask the maintainer listed in the MAINTAINERS file.

I can capture jumbo packets with Wireshark on the same card (same port,
same setup), which leads me to think the problem is purely in my DPDK
port configuration.
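For reference, a minimal sketch of the port setup I would expect to need: on most PMDs, chained RX ("scatter") has to be requested explicitly in addition to the jumbo-frame offload. This is an assumption sketched against the DPDK 18.x-era API (the `DEV_RX_OFFLOAD_SCATTER` flag and the `max_rx_pkt_len` field are from that release series), not a verified fix:

```c
#include <rte_ethdev.h>
#include <rte_mbuf.h>

/* Sketch: request both the jumbo-frame and scatter RX offloads so that
 * frames larger than one mbuf's data room get chained across mbufs
 * instead of being dropped. Port id, queue count and ring size are
 * illustrative. */
static int configure_jumbo_port(uint16_t port_id, struct rte_mempool *pool)
{
    struct rte_eth_conf conf = {0};

    conf.rxmode.max_rx_pkt_len = 9018;                 /* jumbo MTU + L2 overhead */
    conf.rxmode.offloads = DEV_RX_OFFLOAD_JUMBO_FRAME |
                           DEV_RX_OFFLOAD_SCATTER;     /* chain small mbufs */

    int ret = rte_eth_dev_configure(port_id, 1, 1, &conf);
    if (ret < 0)
        return ret;

    /* Mbufs stay at RTE_MBUF_DEFAULT_BUF_SIZE; the PMD works out how
     * many scatter elements it needs from max_rx_pkt_len vs. the
     * mempool's data room size. */
    return rte_eth_rx_queue_setup(port_id, 0, 512,
                                  rte_eth_dev_socket_id(port_id),
                                  NULL, pool);
}
```

(Whether mlx5 honours DEV_RX_OFFLOAD_SCATTER here is exactly the open question; the sketch only shows where the flag would go.)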

According to ethtool, the jumbo packet (from now on JF, Jumbo Frame) is
detected at the PHY level; the counters rx_packets_phy, rx_bytes_phy and
rx_8192_to_10239_bytes_phy are properly increased.

There was an option to set up JF support manually, but it was removed
from DPDK after version 16.07: CONFIG_RTE_LIBRTE_MLX5_SGE_WR_N.
According to the release note:

    Improved jumbo frames support, by dynamically setting RX scatter gather
    elements according to the MTU and mbuf size, no need for compilation
    parameter ``MLX5_PMD_SGE_WR_N``

Not quite sure where to look next.

>> Thanks
>>
>> -- 
>> BR, Filip
>> +48 666 369 823
> 
> Regards,
> Keith
> 

-- 
BR, Filip
+48 666 369 823


Thread overview: 6+ messages
2019-02-09 11:11 Filip Janiszewski
2019-02-09 13:51 ` Wiles, Keith
2019-02-09 15:27   ` Filip Janiszewski [this message]
2019-02-09 15:36     ` Wiles, Keith
2019-02-15  5:59       ` Filip Janiszewski
2019-02-15 13:30         ` Wiles, Keith
