From: Ferruh Yigit <ferruh.yigit@intel.com>
To: Matan Azrad <matan@mellanox.com>,
Dekel Peled <dekelp@mellanox.com>,
"john.mcnamara@intel.com" <john.mcnamara@intel.com>,
"marko.kovacevic@intel.com" <marko.kovacevic@intel.com>,
"nhorman@tuxdriver.com" <nhorman@tuxdriver.com>,
"ajit.khaparde@broadcom.com" <ajit.khaparde@broadcom.com>,
"somnath.kotur@broadcom.com" <somnath.kotur@broadcom.com>,
"anatoly.burakov@intel.com" <anatoly.burakov@intel.com>,
"xuanziyang2@huawei.com" <xuanziyang2@huawei.com>,
"cloud.wangxiaoyun@huawei.com" <cloud.wangxiaoyun@huawei.com>,
"zhouguoyang@huawei.com" <zhouguoyang@huawei.com>,
"wenzhuo.lu@intel.com" <wenzhuo.lu@intel.com>,
"konstantin.ananyev@intel.com" <konstantin.ananyev@intel.com>,
Shahaf Shuler <shahafs@mellanox.com>,
Slava Ovsiienko <viacheslavo@mellanox.com>,
"rmody@marvell.com" <rmody@marvell.com>,
"shshaikh@marvell.com" <shshaikh@marvell.com>,
"maxime.coquelin@redhat.com" <maxime.coquelin@redhat.com>,
"tiwei.bie@intel.com" <tiwei.bie@intel.com>,
"zhihong.wang@intel.com" <zhihong.wang@intel.com>,
"yongwang@vmware.com" <yongwang@vmware.com>,
Thomas Monjalon <thomas@monjalon.net>,
"arybchenko@solarflare.com" <arybchenko@solarflare.com>,
"jingjing.wu@intel.com" <jingjing.wu@intel.com>,
"bernard.iremonger@intel.com" <bernard.iremonger@intel.com>
Cc: "dev@dpdk.org" <dev@dpdk.org>
Subject: Re: [dpdk-dev] [PATCH v4 1/3] ethdev: support API to set max LRO packet size
Date: Fri, 8 Nov 2019 12:51:48 +0000
Message-ID: <60dc4ef1-7e9a-5073-c534-e3b7a42a9abf@intel.com>
In-Reply-To: <AM0PR0502MB40197D18E5F48633075AADBBD27B0@AM0PR0502MB4019.eurprd05.prod.outlook.com>
On 11/8/2019 11:56 AM, Matan Azrad wrote:
>
>
> From: Ferruh Yigit
>> On 11/8/2019 10:10 AM, Matan Azrad wrote:
>>>
>>>
>>> From: Ferruh Yigit
>>>> On 11/8/2019 6:54 AM, Matan Azrad wrote:
>>>>> Hi
>>>>>
>>>>> From: Ferruh Yigit
>>>>>> On 11/7/2019 12:35 PM, Dekel Peled wrote:
>>>>>>> @@ -1266,6 +1286,18 @@ struct rte_eth_dev *
>>>>>>>  							RTE_ETHER_MAX_LEN;
>>>>>>>  	}
>>>>>>>
>>>>>>> +	/*
>>>>>>> +	 * If LRO is enabled, check that the maximum aggregated packet
>>>>>>> +	 * size is supported by the configured device.
>>>>>>> +	 */
>>>>>>> +	if (dev_conf->rxmode.offloads & DEV_RX_OFFLOAD_TCP_LRO) {
>>>>>>> +		ret = check_lro_pkt_size(
>>>>>>> +				port_id, dev_conf->rxmode.max_lro_pkt_size,
>>>>>>> +				dev_info.max_lro_pkt_size);
>>>>>>> +		if (ret != 0)
>>>>>>> +			goto rollback;
>>>>>>> +	}
>>>>>>> +
>>>>>>
>>>>>> This check forces applications that enable LRO to provide a
>>>>>> 'max_lro_pkt_size' config value.
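
For reference, this is roughly what an application now has to do when enabling
LRO (a minimal sketch; the port id, sizes and queue counts below are made up,
the field names are the ones this patch adds):

    #include <rte_ethdev.h>

    static int
    configure_port_with_lro(uint16_t port_id)
    {
        struct rte_eth_dev_info dev_info;
        struct rte_eth_conf conf = { 0 };
        int ret;

        ret = rte_eth_dev_info_get(port_id, &dev_info);
        if (ret != 0)
            return ret;

        conf.rxmode.offloads = DEV_RX_OFFLOAD_TCP_LRO;
        /* Mandatory with this check: must be non-zero and must not
         * exceed the dev_info.max_lro_pkt_size capability. */
        conf.rxmode.max_lro_pkt_size =
            RTE_MIN((uint32_t)(16 * 1024), dev_info.max_lro_pkt_size);

        /* 1 Rx queue, 1 Tx queue. */
        return rte_eth_dev_configure(port_id, 1, 1, &conf);
    }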
>>>>>
>>>>> Yes. (We can break the API; we noticed it.)
>>>>
>>>> I am not talking about API/ABI breakage; that part is OK.
>>>> With this check, if the application requests the LRO offload but does not
>>>> provide a 'max_lro_pkt_size' value, device configuration will fail.
>>>>
>>> Yes
>>>> Can there be a case where the application is fine with whatever maximum
>>>> the PMD can support?
>>> Yes, there can be, but it is better to be consistent:
>>> since the max Rx packet length field is mandatory for the JUMBO offload,
>>> the max LRO packet length should be mandatory for the LRO offload.
>>>
>>> So your question is actually why both the non-LRO and the LRO maximum
>>> packet sizes are mandatory...
>>>
>>>
>>> I think these are important values for network application management.
>>> They are also good for mbuf size management.
>>>
>>>>>
>>>>>> - Why is it mandatory now? How was it working before, if it is a
>>>>>> mandatory value?
>>>>>
>>>>> It is the same as max_rx_pkt_len, which is mandatory for the jumbo frame
>>>>> offload.
>>>>> So now, when the user configures the LRO offload, he must set the max LRO
>>>>> packet length.
>>>>> We don't want to confuse the user here with the max_rx_pkt_len
>>>>> configuration and behavior; they should follow the same logic.
>>>>>
>>>>> This parameter defines the LRO behavior well.
>>>>> Before this, each PMD used its own interpretation of what the maximum
>>>>> size of LRO-aggregated packets should be.
>>>>> Now, the user must state his intention, and ethdev can limit it according
>>>>> to the device capability.
>>>>> This way, the PMD can also organize/optimize its data path better.
>>>>> Also, the application can create different mempools for LRO queues to
>>>>> allow receiving bigger packets in LRO traffic.
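
(As an illustration of that last point, creating a bigger mempool only for the
LRO queues could look like this; the names, sizes and queue ids below are
made up:)

    /* Bigger data room for the LRO queue, default size elsewhere. */
    struct rte_mempool *mp_lro = rte_pktmbuf_pool_create("mbuf_lro",
            8192, 256, 0, 16 * 1024 + RTE_PKTMBUF_HEADROOM,
            rte_socket_id());
    struct rte_mempool *mp_std = rte_pktmbuf_pool_create("mbuf_std",
            8192, 256, 0, RTE_MBUF_DEFAULT_BUF_SIZE,
            rte_socket_id());

    /* Queue 0 carries the LRO traffic, queue 1 the rest. */
    rte_eth_rx_queue_setup(port_id, 0, 1024, rte_socket_id(), NULL, mp_lro);
    rte_eth_rx_queue_setup(port_id, 1, 1024, rte_socket_id(), NULL, mp_std);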
>>>>>
>>>>>> - What happens if a PMD doesn't provide 'max_lro_pkt_size', so it is '0'?
>>>>> You can see the feature description Dekel added.
>>>>> This patch also updates all the PMDs that support LRO to report a
>>>>> non-zero value.
>>>>
>>>> Of course I can see the updates, Matan; my point is "what happens if a
>>>> PMD doesn't provide 'max_lro_pkt_size'":
>>>> 1) There is no check for it, right? So it is acceptable?
>>>
>>> There is a check.
>>> If the capability is 0, any non-zero configuration will fail.
>>>
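
(For context, with the capability in the picture, the configure-time check
boils down to something like the following sketch; the exact function body and
log message are in the patch, and the explicit rejection of a zero config
value is my reading of "mandatory":)

    /* Sketch of the configure-time validation. */
    static int
    check_lro_pkt_size(uint16_t port_id, uint32_t config_size,
                       uint32_t dev_info_size)
    {
        if (config_size == 0 || config_size > dev_info_size) {
            RTE_ETHDEV_LOG(ERR,
                "Ethdev port_id=%u max_lro_pkt_size %u invalid, "
                "device capability is %u\n",
                port_id, config_size, dev_info_size);
            return -EINVAL;
        }
        return 0;
    }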
>>>> 2) Are we making this field mandatory for PMDs to provide? It is easy to
>>>> make new fields mandatory for PMDs, but is this really necessary?
>>>
>>> Yes, for consistency.
>>>
>>>>>
>>>>> The same as max_rx_pkt_len, no?
>>>>>
>>>>>> - What do you think about setting the 'max_lro_pkt_size' config value
>>>>>> to what the PMD provided, if the application doesn't provide it?
>>>>> Same answers as above.
>>>>>
>>>>
>>>> If the application doesn't care about the value, as has been the case
>>>> until now, and does not provide an explicit 'max_lro_pkt_size', why not
>>>> have the ethdev level use the value provided by the PMD instead of
>>>> failing?
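
(I.e., a hypothetical fallback in ethdev instead of the failure; this is not
what this patch does:)

    /* If the application left it unset, fall back to the device max
     * (dev_info is already fetched earlier in rte_eth_dev_configure()). */
    if (dev_conf->rxmode.max_lro_pkt_size == 0)
        dev->data->dev_conf.rxmode.max_lro_pkt_size =
            dev_info.max_lro_pkt_size;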
>>>
>>> Again, the same question can be asked about max_rx_pkt_len.
>>>
>>> It looks like the packet size is a very important value which should be
>>> set by the application.
>>>
>>> Previous applications had no option to configure it, so they didn't
>>> configure it (and probably covered it somehow); I think it was our miss
>>> not to supply this info.
>>>
>>> Let's do it the same way as we do max_rx_pkt_len (that is this patch's
>>> main idea). Later, we can change both to another meaning.
>>>
>>
>> I don't think "'max_rx_pkt_len' does it" is a good reason to introduce a
>> new mandatory config option for the application.
>
> It is mandatory only if the LRO offload is configured.
>
>> Will it work, if:
>> - If the application doesn't provide this value, use the PMD max
>
> It may cause a problem if the mbuf size is not enough for the PMD maximum.
OK, this is what I was missing; for this case I was thinking 'max_rx_pkt_len'
would be used, but you already explained that the application may want to use
different mempools for LRO queues.

For this case, shouldn't PMDs take 'rxmode.max_lro_pkt_size' into account and
program the device accordingly (in the LRO-enabled case, of course)?

This part seems to be missing and should be highlighted to the other PMD
maintainers.
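
I.e., something along these lines in each PMD's configure path (a rough
sketch; 'hw_set_lro_max_size()' and 'dev_max' stand in for the
device-specific programming and the PMD's own reported capability):

    if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_TCP_LRO) {
        uint32_t max_lro =
            dev->data->dev_conf.rxmode.max_lro_pkt_size;

        /* Never exceed what this PMD reports in
         * dev_info.max_lro_pkt_size. */
        max_lro = RTE_MIN(max_lro, dev_max);
        hw_set_lro_max_size(dev, max_lro);
    }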
>
>> - If both the application and the PMD don't provide this value, fail in configure()?
>
> It will work.
> In my opinion, it is not ideal.
>
> Matan
>
>