[dpdk-dev] [PATCH v2] app/testpmd: update Rx offload after setting MTU successfully
From: Wei Hu (Xavier) @ 2020-02-13 1:57 UTC
To: dev
From: "Wei Hu (Xavier)" <xavier.huwei@huawei.com>
Currently, the Rx offload capabilities and max_rx_pkt_len in the struct
variable named rte_port are not updated after the MTU is set successfully
in the port_mtu_set function by the 'port config mtu <port_id> <value>'
command. This may lead to the MTU being reconfigured to its initial value
in the driver when the rte_eth_dev_configure API is called again.

This patch updates the Rx offload capabilities and max_rx_pkt_len after
the MTU is set successfully when configuring the MTU.
Fixes: ae03d0d18adf ("app/testpmd: command to configure MTU")
Cc: stable@dpdk.org
Signed-off-by: Huisong Li <lihuisong@huawei.com>
Signed-off-by: Wei Hu (Xavier) <xavier.huwei@huawei.com>
---
v1 -> v2:
Addressed the comments from Ferruh Yigit; the related link is below:
http://patches.dpdk.org/patch/65007/
---
app/test-pmd/config.c | 21 ++++++++++++++++++++-
1 file changed, 20 insertions(+), 1 deletion(-)
diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c
index 9669cbd4c..409c1327a 100644
--- a/app/test-pmd/config.c
+++ b/app/test-pmd/config.c
@@ -1216,7 +1216,9 @@ void
 port_mtu_set(portid_t port_id, uint16_t mtu)
 {
 	int diag;
+	struct rte_port *rte_port = &ports[port_id];
 	struct rte_eth_dev_info dev_info;
+	uint16_t eth_overhead;
 	int ret;
 
 	if (port_id_is_invalid(port_id, ENABLED_WARN))
@@ -1232,8 +1234,25 @@ port_mtu_set(portid_t port_id, uint16_t mtu)
 		return;
 	}
 	diag = rte_eth_dev_set_mtu(port_id, mtu);
-	if (diag == 0)
+	if (diag == 0 &&
+	    dev_info.rx_offload_capa & DEV_RX_OFFLOAD_JUMBO_FRAME) {
+		/*
+		 * Ether overhead in driver is equal to the difference of
+		 * max_rx_pktlen and max_mtu in rte_eth_dev_info when the
+		 * device supports jumbo frame.
+		 */
+		eth_overhead = dev_info.max_rx_pktlen - dev_info.max_mtu;
+		if (mtu > RTE_ETHER_MAX_LEN - eth_overhead) {
+			rte_port->dev_conf.rxmode.offloads |=
+						DEV_RX_OFFLOAD_JUMBO_FRAME;
+			rte_port->dev_conf.rxmode.max_rx_pkt_len =
+						mtu + eth_overhead;
+		} else
+			rte_port->dev_conf.rxmode.offloads &=
+						~DEV_RX_OFFLOAD_JUMBO_FRAME;
+
 		return;
+	}
 	printf("Set MTU failed. diag=%d\n", diag);
 }
 
--
2.23.0
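For reference, the decision made by the new code can be sketched as a small
standalone C program: the Ethernet overhead is taken as the difference between
the driver-reported max_rx_pktlen and max_mtu, and the jumbo-frame Rx offload
is enabled only when the requested MTU no longer fits into a standard
1518-byte frame. The max_rx_pktlen, max_mtu and MTU values below are made-up
examples, not taken from a real device, and RTE_ETHER_MAX_LEN is redefined
locally so the sketch builds without the DPDK headers.

#include <stdint.h>
#include <stdio.h>

#define RTE_ETHER_MAX_LEN 1518	/* standard Ethernet frame length in DPDK */

int
main(void)
{
	/* Values a driver would report in struct rte_eth_dev_info (examples). */
	uint32_t max_rx_pktlen = 9728;
	uint16_t max_mtu = 9710;
	/* MTU requested via "port config mtu <port_id> <value>" (example). */
	uint16_t mtu = 9000;

	/* L2 overhead = maximum Rx packet length minus maximum MTU. */
	uint16_t eth_overhead = max_rx_pktlen - max_mtu;

	if (mtu > RTE_ETHER_MAX_LEN - eth_overhead) {
		/* Frame would exceed the standard length: jumbo frames needed. */
		printf("enable DEV_RX_OFFLOAD_JUMBO_FRAME, max_rx_pkt_len = %u\n",
		       (unsigned int)(mtu + eth_overhead));
	} else {
		/* Standard-size frames are enough: clear the jumbo offload. */
		printf("disable DEV_RX_OFFLOAD_JUMBO_FRAME\n");
	}

	return 0;
}

With these example numbers the overhead is 18 bytes (14-byte header plus
4-byte CRC), so any MTU above 1500 turns the jumbo-frame offload on, which is
what port_mtu_set() now records in rte_port->dev_conf for the next call to
rte_eth_dev_configure.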
Re: [dpdk-dev] [PATCH v2] app/testpmd: update Rx offload after setting MTU successfully
From: Ferruh Yigit @ 2020-02-13 19:00 UTC
To: Wei Hu (Xavier), dev
On 2/13/2020 1:57 AM, Wei Hu (Xavier) wrote:
> From: "Wei Hu (Xavier)" <xavier.huwei@huawei.com>
>
> Currently, the Rx offload capabilities and max_rx_pkt_len in the struct
> variable named rte_port are not updated after the MTU is set successfully
> in the port_mtu_set function by the 'port config mtu <port_id> <value>'
> command. This may lead to the MTU being reconfigured to its initial value
> in the driver when the rte_eth_dev_configure API is called again.
>
> This patch updates the Rx offload capabilities and max_rx_pkt_len after
> the MTU is set successfully when configuring the MTU.
>
> Fixes: ae03d0d18adf ("app/testpmd: command to configure MTU")
> Cc: stable@dpdk.org
>
> Signed-off-by: Huisong Li <lihuisong@huawei.com>
> Signed-off-by: Wei Hu (Xavier) <xavier.huwei@huawei.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
Applied to dpdk-next-net/master, thanks.