DPDK patches and discussions
From: Ferruh Yigit <ferruh.yigit@intel.com>
To: Shahaf Shuler <shahafs@mellanox.com>,
	Olivier Matz <olivier.matz@6wind.com>,
	"dev@dpdk.org" <dev@dpdk.org>,
	"Ananyev, Konstantin" <konstantin.ananyev@intel.com>,
	Thomas Monjalon <thomas@monjalon.net>,
	Bruce Richardson <bruce.richardson@intel.com>,
	Andrew Rybchenko <arybchenko@solarflare.com>
Cc: Matan Azrad <matan@mellanox.com>,
	Jerin Jacob Kollanukkaran <jerinj@marvell.com>
Subject: Re: [dpdk-dev] questions about new offload ethdev api
Date: Tue, 10 Dec 2019 18:07:19 +0000	[thread overview]
Message-ID: <65f5f247-15e7-ac0a-183e-8a66193f426f@intel.com> (raw)
In-Reply-To: <VI1PR05MB31493CB772F5C787680B910FC3E30@VI1PR05MB3149.eurprd05.prod.outlook.com>

On 1/23/2018 2:34 PM, Shahaf Shuler wrote:
> Tuesday, January 23, 2018 3:53 PM, Olivier Matz:

<...>

>> 
>> 2/ meaning of rxmode.jumbo_frame, rxmode.enable_scatter, 
>> rxmode.max_rx_pkt_len
>> 
>> While it's not related to the new API, it is probably a good opportunity
>> to clarify the meaning of these flags. I'm not able to find a good
>> documentation about them.
>> 
>> Here is my understanding, the configuration only depends on:
>> - the maximum rx frame length
>> - the amount of data available in a mbuf (minus headroom)
>> 
>> Flags to set in rxmode (example):
>> +---------------+----------------+----------------+-----------------+
>> |               |mbuf_data_len=1K|mbuf_data_len=2K|mbuf_data_len=16K|
>> +---------------+----------------+----------------+-----------------+
>> |max_rx_len=1500|enable_scatter  |                |                 |
>> +---------------+----------------+----------------+-----------------+
>> |max_rx_len=9000|enable_scatter, |enable_scatter, |jumbo_frame      |
>> |               |jumbo_frame     |jumbo_frame     |                 |
>> +---------------+----------------+----------------+-----------------+
>> 
>> If this table is correct, the flag jumbo_frame would be equivalent to
>> checking if max_rx_pkt_len is above a threshold.
>> 
>> And enable_scatter could be deduced from the mbuf size of the given rxq 
>> (which is a bit harder but maybe doable).
> 
> I am glad you raised this subject. We had a lot of discussion on it
> internally in Mellanox.
> 
> I fully agree. All an application needs is to specify the maximum packet
> size it wants to receive.
> 
> I think also the lack of documentation is causing PMDs to use those flags
> wrongly. For example - some PMDs set the jumbo_frame flag internally without
> it being set by the application.
> 
> I would like to add one more item: MTU. What is the relation (if any)
> between setting MTU and the max_rx_len? I know MTU stands for Maximum
> Transmission Unit, however at least in Linux it applies to both transmit
> and receive.
> 
> 

(Resurrecting the thread after two years; I will reply again with my latest
understanding.)

Thanks Olivier for the summary and table above; unfortunately, usage is still
not consistent between PMDs. According to my understanding:

'max_rx_pkt_len' is a user configuration value that limits the size of the
packet data shared with the host, but it doesn't limit the size of the packet
that the NIC receives.

For example, if the mbuf size of the mempool used by a queue is 1024 bytes, we
don't want individual buffers filled beyond that size, but if the NIC supports
it, it is possible to receive a 6000-byte packet, split the data into multiple
buffers, and represent it as a multi-segment packet.
So what we need is the NIC's ability to limit the size of the data shared with
the host, plus scattered Rx support (device + driver).

MTU, on the other hand, limits the size of the packet that the NIC receives.
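
To make this concrete, something like below is what I have in mind from the
application side (just a sketch, assuming the current 'max_rx_pkt_len' field
and the DEV_RX_OFFLOAD_JUMBO_FRAME / DEV_RX_OFFLOAD_SCATTER flags; the sizes
and queue parameters are only illustrative):

    #include <rte_ethdev.h>
    #include <rte_mempool.h>

    /* Sketch: receive frames up to 6000 bytes while splitting the data
     * across several mbufs if it does not fit in a single buffer. */
    static int
    configure_large_rx(uint16_t port_id, struct rte_mempool *mp)
    {
        struct rte_eth_conf conf = { 0 };

        /* Limit of the packet data shared with the host. */
        conf.rxmode.max_rx_pkt_len = 6000;
        /* Accept frames above the default Ethernet size and allow the
         * PMD to split them across multiple mbufs. */
        conf.rxmode.offloads = DEV_RX_OFFLOAD_JUMBO_FRAME |
                               DEV_RX_OFFLOAD_SCATTER;

        if (rte_eth_dev_configure(port_id, 1, 1, &conf) < 0)
            return -1;

        if (rte_eth_rx_queue_setup(port_id, 0, 512,
                                   rte_eth_dev_socket_id(port_id),
                                   NULL, mp) < 0)
            return -1;

        /* MTU is what limits the size of the packet the NIC receives. */
        return rte_eth_dev_set_mtu(port_id, 6000);
    }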


Assuming the above is correct :-),

Using the mbuf data size as 'max_rx_pkt_len' without asking the user is an
option, but perhaps the user has a different reason to limit the packet size,
so I think it is better to keep it as a separate config option.

I think a PMD enabling the "jumbo frame" offload on its own is not too bad and
is acceptable, since providing a large MTU already implies it.

But I am not sure about a PMD enabling scattered Rx on its own; an application
may want to force receiving only single-segment mbufs, and for that case the
PMD enabling this config by itself looks like a problem.
On the other hand, the user really needs it when a packet doesn't fit into the
mbuf, so providing an MTU larger than 'max_rx_pkt_len' _may_ imply enabling
scattered Rx. I assume this is the logic in some PMDs, which looks acceptable.
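
If the application wants to stay in control instead of relying on the PMD, one
possible pattern is to check the capability itself before asking for a big MTU
(just a sketch; 'mtu' and 'max_rx_pkt_len' stand for whatever values the
application intends to use):

    #include <errno.h>
    #include <rte_ethdev.h>

    /* Sketch: request scattered Rx explicitly only when the wanted MTU may
     * not fit in a single mbuf, and fail early if the device cannot do it. */
    static int
    maybe_request_scatter(uint16_t port_id, struct rte_eth_conf *conf,
                          uint32_t mtu, uint32_t max_rx_pkt_len)
    {
        struct rte_eth_dev_info dev_info;

        rte_eth_dev_info_get(port_id, &dev_info);

        if (mtu <= max_rx_pkt_len)
            return 0; /* single-segment mbufs are enough */

        if (!(dev_info.rx_offload_capa & DEV_RX_OFFLOAD_SCATTER))
            return -ENOTSUP; /* device cannot split the packet */

        conf->rxmode.offloads |= DEV_RX_OFFLOAD_SCATTER;
        return 0;
    }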


And the PMD behavior for the mentioned configs should be as follows (a sketch
of 3) and 4) follows the list):

1) Don't change the user-provided 'max_rx_pkt_len' value.

2) If jumbo frame is not enabled, don't limit the size of packets to the host
(I think this is based on the assumption that the mbuf size will always be
bigger than 1514).

3) When the user requests to set the MTU bigger than ETH_MAX, the PMD enables
jumbo frame support (if it is not already enabled by the user and is supported
by HW). If HW doesn't support it, of course it should fail.

4) When the user requests to set the MTU bigger than 'max_rx_pkt_len':
4a) if "scattered Rx" is enabled, configure the MTU and limit the packet size
to the host to 'max_rx_pkt_len'.

4b) if "scattered Rx" is not enabled but HW supports it, enable "scattered Rx"
by PMD, configure the MTU and limit packet size to host to 'max_rx_pkt_len'

4c) if "scattered Rx" is not enabled and not supported by HW, fail MTU set.

4d) if HW doesn't support limiting the packet size to the host, but the
requested MTU is bigger than 'max_rx_pkt_len', it should fail.
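
A rough sketch of how 3) and 4) could look inside a PMD's mtu_set callback
(purely illustrative; 'example_priv' handling and the 'hw_*' helpers are
made-up stand-ins for whatever the real driver uses, not an existing API):

    #include <errno.h>
    #include <stdbool.h>
    #include <rte_ethdev.h>
    #include <rte_ether.h>

    /* Hypothetical HW helpers, placeholders for real driver internals. */
    static bool hw_supports_jumbo(void *priv);
    static bool hw_supports_scatter(void *priv);
    static int hw_set_max_host_len(void *priv, uint32_t len);
    static int hw_set_mtu(void *priv, uint16_t mtu);

    static int
    example_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
    {
        void *priv = dev->data->dev_private;
        struct rte_eth_rxmode *rxmode = &dev->data->dev_conf.rxmode;
        uint32_t frame_len = mtu + RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN;

        /* 3) a big MTU implies jumbo frame; enable it if HW supports it. */
        if (frame_len > RTE_ETHER_MAX_LEN) {
            if (!hw_supports_jumbo(priv))
                return -EINVAL;
            rxmode->offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
        }

        /* 4) MTU implies frames bigger than 'max_rx_pkt_len'. */
        if (frame_len > rxmode->max_rx_pkt_len) {
            if (!(rxmode->offloads & DEV_RX_OFFLOAD_SCATTER)) {
                if (!hw_supports_scatter(priv))
                    return -EINVAL;                         /* 4c */
                rxmode->offloads |= DEV_RX_OFFLOAD_SCATTER; /* 4b */
            }
            /* 4a/4b: still limit the data shared with the host. */
            if (hw_set_max_host_len(priv, rxmode->max_rx_pkt_len) != 0)
                return -EINVAL;                             /* 4d */
        }

        /* 1) 'max_rx_pkt_len' itself is left untouched. */
        return hw_set_mtu(priv, mtu);
    }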


Btw, I am aware that some PMDs have a larger MTU by default and can't limit
the packet size to the host to the 'max_rx_pkt_len' value. I don't know what
to do in that case: fail in configure? Or at least make sure the configured
mempool's mbuf size is big enough?
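
For the last option, the check could be as simple as comparing the mempool's
data room against the largest frame the device may deliver (sketch;
'max_frame_len' stands for whatever the device's default MTU implies):

    #include <errno.h>
    #include <rte_mbuf.h>
    #include <rte_mempool.h>

    /* Sketch: make sure one mbuf from the Rx mempool can hold the largest
     * frame the device may hand to the host. */
    static int
    check_mbuf_size(struct rte_mempool *mp, uint32_t max_frame_len)
    {
        uint32_t buf_size = rte_pktmbuf_data_room_size(mp) -
                            RTE_PKTMBUF_HEADROOM;

        return buf_size >= max_frame_len ? 0 : -EINVAL;
    }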


Thanks,
ferruh

Thread overview: 7+ messages
2018-01-23 13:53 Olivier Matz
2018-01-23 14:34 ` Shahaf Shuler
2018-03-08 15:45   ` Ferruh Yigit
2019-12-10 18:07   ` Ferruh Yigit [this message]
2019-12-16  8:39     ` Andrew Rybchenko
2019-12-27 13:54       ` Olivier Matz
2019-12-27 14:23         ` Ananyev, Konstantin
