From: "Wei Hu (Xavier)" <xavier_huwei@163.com>
To: Ferruh Yigit <ferruh.yigit@intel.com>, dev@dpdk.org
Cc: Andrew Rybchenko <arybchenko@solarflare.com>,
Thomas Monjalon <thomas@monjalon.net>,
Matan Azrad <matan@mellanox.com>
Subject: Re: [dpdk-dev] [PATCH 1/3] app/testpmd: update Rx offload after setting MTU successfully
Date: Thu, 13 Feb 2020 09:52:14 +0800 [thread overview]
Message-ID: <69811939-e556-8b2d-37de-b9791e55f7f3@163.com> (raw)
In-Reply-To: <a680d099-6675-54c5-7466-a25efea684b7@chinasoftinc.com>
Hi, Ferruh Yigit
On 2020/2/12 8:25, Wei Hu (Xavier) wrote:
> Hi, Ferruh Yigit
>
> On 2020/1/28 19:27, Ferruh Yigit wrote:
>> On 1/21/2020 11:44 AM, Wei Hu (Xavier) wrote:
>>> From: "Wei Hu (Xavier)" <xavier.huwei@huawei.com>
>>>
>>> Currently, Rx offload capabilities and max_rx_pkt_len in the struct
>>> rte_port variable are not updated after the MTU is set successfully
>>> in the port_mtu_set function via the 'port config mtu <port_id>
>>> <value>' command. This may cause the MTU to be reconfigured to its
>>> initial value in the driver when the rte_eth_dev_configure API is
>>> called again.
>>>
>>> This patch updates the Rx offload capabilities and max_rx_pkt_len
>>> after the MTU is set successfully.
>>>
>>> Fixes: ae03d0d18adf ("app/testpmd: command to configure MTU")
>>> Cc: stable@dpdk.org
>>>
>>> Signed-off-by: Huisong Li <lihuisong@huawei.com>
>>> Signed-off-by: Wei Hu (Xavier) <xavier.huwei@huawei.com>
>>> ---
>>> app/test-pmd/config.c | 18 +++++++++++++++++-
>>> 1 file changed, 17 insertions(+), 1 deletion(-)
>>>
>>> diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c
>>> index 9669cbd4c..09a1579f5 100644
>>> --- a/app/test-pmd/config.c
>>> +++ b/app/test-pmd/config.c
>>> @@ -1216,7 +1216,9 @@ void
>>>  port_mtu_set(portid_t port_id, uint16_t mtu)
>>>  {
>>>      int diag;
>>> +    struct rte_port *rte_port = &ports[port_id];
>>>      struct rte_eth_dev_info dev_info;
>>> +    uint16_t eth_overhead;
>>>      int ret;
>>>
>>>      if (port_id_is_invalid(port_id, ENABLED_WARN))
>>> @@ -1232,8 +1234,22 @@ port_mtu_set(portid_t port_id, uint16_t mtu)
>>>          return;
>>>      }
>>>      diag = rte_eth_dev_set_mtu(port_id, mtu);
>>> -    if (diag == 0)
>>> +    if (diag == 0) {
>>> +        /*
>>> +         * Ether overhead in driver is equal to the difference of
>>> +         * max_rx_pktlen and max_mtu in rte_eth_dev_info.
>>> +         */
>>> +        eth_overhead = dev_info.max_rx_pktlen - dev_info.max_mtu;
>>> +        if (mtu > RTE_ETHER_MAX_LEN - eth_overhead)
>>> +            rte_port->dev_conf.rxmode.offloads |=
>>> +                    DEV_RX_OFFLOAD_JUMBO_FRAME;
>>> +        else
>>> +            rte_port->dev_conf.rxmode.offloads &=
>>> +                    ~DEV_RX_OFFLOAD_JUMBO_FRAME;
>>
>> Whether the jumbo frame capability is supported should be tested
>> before setting the offload flag.
>>
>>> +        rte_port->dev_conf.rxmode.max_rx_pkt_len = mtu + eth_overhead;
>>
>> May need to check against 'dev_info.max_rx_pktlen'; if
>> 'max_rx_pkt_len' is bigger than this, the next configure will fail.
>>
>> Also, some drivers already do this in PMD code; whether we should
>> clean that code up or not is a question.
>>
> The snippet is adjusted as follows:
>
>     if (mtu > RTE_ETHER_MAX_LEN - eth_overhead &&
>         dev_info.rx_offload_capa & DEV_RX_OFFLOAD_JUMBO_FRAME) {
>             rte_port->dev_conf.rxmode.offloads |=
>                             DEV_RX_OFFLOAD_JUMBO_FRAME;
>             rte_port->dev_conf.rxmode.max_rx_pkt_len =
>                             mtu + eth_overhead;
>     } else
>             rte_port->dev_conf.rxmode.offloads &=
>                             ~DEV_RX_OFFLOAD_JUMBO_FRAME;
>
> We only modify the internal variables of testpmd; this does not
> affect the driver implementation.
>
> Thanks for more suggestions.
>
> Xavier

The code is modified as follows:
    diag = rte_eth_dev_set_mtu(port_id, mtu);
    if (diag == 0 &&
        dev_info.rx_offload_capa & DEV_RX_OFFLOAD_JUMBO_FRAME) {
            /*
             * Ether overhead in driver is equal to the difference of
             * max_rx_pktlen and max_mtu in rte_eth_dev_info when the
             * device supports jumbo frame.
             */
            eth_overhead = dev_info.max_rx_pktlen - dev_info.max_mtu;
            if (mtu > RTE_ETHER_MAX_LEN - eth_overhead) {
                    rte_port->dev_conf.rxmode.offloads |=
                                    DEV_RX_OFFLOAD_JUMBO_FRAME;
                    rte_port->dev_conf.rxmode.max_rx_pkt_len =
                                    mtu + eth_overhead;
            } else {
                    rte_port->dev_conf.rxmode.offloads &=
                                    ~DEV_RX_OFFLOAD_JUMBO_FRAME;
            }
            return;
    }
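
For reference, a worked example of the arithmetic, using assumed device
values (not taken from the patch): if a jumbo-capable device reports
max_rx_pktlen = 9728 and max_mtu = 9710, then eth_overhead = 9728 - 9710
= 18 (the 14-byte Ethernet header plus 4-byte CRC), so jumbo frame is
enabled for any mtu > RTE_ETHER_MAX_LEN - 18 = 1500, and for mtu = 9000
max_rx_pkt_len becomes 9018. The path above is exercised from the
testpmd prompt, for example (port 0 and MTU 9000 are example values):

    testpmd> port config mtu 0 9000
    testpmd> port stop 0
    testpmd> port start 0

Since 'port start' calls rte_eth_dev_configure() again with
rte_port->dev_conf, keeping rxmode.offloads and max_rx_pkt_len in sync
here is what prevents the new MTU from being reverted.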
We will send a v2. Thanks.
Xavier
Thread overview: 12+ messages
2020-01-21 11:44 [dpdk-dev] [PATCH 0/3] app/testpmd: fixes for testpmd application Wei Hu (Xavier)
2020-01-21 11:44 ` [dpdk-dev] [PATCH 1/3] app/testpmd: update Rx offload after setting MTU successfully Wei Hu (Xavier)
2020-01-28 11:27 ` Ferruh Yigit
2020-02-12 0:25 ` Wei Hu (Xavier)
2020-02-13 1:52 ` Wei Hu (Xavier) [this message]
2020-01-21 11:44 ` [dpdk-dev] [PATCH 2/3] app/testpmd: fix the initial value when setting PFC Wei Hu (Xavier)
2020-01-28 11:21 ` Ferruh Yigit
2020-02-04 18:25 ` Ferruh Yigit
2020-01-21 11:44 ` [dpdk-dev] [PATCH 3/3] app/testpmd: fix uninitialized members " Wei Hu (Xavier)
2020-01-28 11:21 ` Ferruh Yigit
2020-02-04 18:25 ` Ferruh Yigit
2020-02-04 18:24 ` [dpdk-dev] [PATCH 0/3] app/testpmd: fixes for testpmd application Ferruh Yigit